🔥 Hot ▲ 93 r/BetterOffline

This Scammer Uses an AI-Generated MAGA Girl to Grift ‘Super Dumb’ MAGA Men

>But when Sam started posting generic photos of a beautiful, scantily clad woman on Instagram, he was dismayed to find that none of the content was hitting. He turned to Gemini for advice. “If you create a generic ‘hot girl,’ you’re competing with a million other models,” it said, according to a transcript Sam provided to WIRED.

So he needed an LLM to help him come up with the idea of using an LLM to scam dumb MAGA people. I don't like Sam.

>Sam’s explanation for why MAGA influencer accounts work is blunt: “The MAGA crowd is made up of dumb people—like, super dumb people. And they fall for it.”

Sam is a medical student. I don't think "take advantage of dumb people" falls under the Hippocratic Oath. But maybe he thinks that only applies to medicine, or only after he graduates? That's bad.

>Lately, he says he’s noticed that “pro-Nazi, pro-Hitler content” has been getting especially high engagement on platforms like Reels, speculating that an AI hot girl Nazi influencer “would blow up. It would just break all the records.” 

That's also not good.

wired.com
u/dyzo-blue — 2 hours ago
🔥 Hot ▲ 143 r/BetterOffline

If ur productivity is 10x due to AI, why isn’t your pay 10x ?

ur doing ten times the work you supposedly would be doing, why aren’t u getting 10 times the pay?

just a random question that popped up into my head

reddit.com
u/ZealousidealLab7373 — 8 hours ago

Silicon Valley has forgotten what normal people want

This really cuts to the bone of the issue with Silicon Valley that I think u/ezitron and others point out. Silicon Valley is out of big advancements and AI is their last gasp at meaningful new products. They have zero clue what normal people actually want in their everyday lives and think they are somehow these grand thought leaders who know what's best. They are completely high on their own supply.

theverge.com
u/EditorEdward — 1 hour ago
🔥 Hot ▲ 433 r/BetterOffline+1 crossposts

Premium: The Hater's Guide to Private Credit

Premium newsletter: The 16k word Hater's Guide To Private Credit - a comprehensive guide to the massive, opaque and barely-regulated loan industry gambling with $1.5tr+ of retirement and insurance funds - and how software and AI may break its back.

This is one of the most important things I've written - link below for $10 off annual, along with the reminder that this is my main source of income now! Along with the show of course.

https://edzitronswheresyouredatghostio.outpost.pub/public/promo-subscription/6411i5t3hp#/

wheresyoured.at
u/ezitron — 23 hours ago

Automated Plagiarism Machines

We all know that LLMs are "stochastic parrots", sophisticated mirrors that reflect the data they were fed in training. They do so in a very clever way, where they essentially transform millions of disparate sources into a single output. By adjusting the "temperature" parameter, we can introduce more randomness into the next-token prediction, making LLMs seem "creative" and effectively masking the fact that they are copying someone else's homework.
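A minimal sketch of what the "temperature" knob described above actually does: the logits (raw next-token scores) are divided by the temperature before the softmax, so higher temperatures flatten the distribution and push probability onto less likely tokens. The logits here are toy numbers for three made-up candidate tokens, not from any real model.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw next-token scores into a probability distribution.

    T < 1 sharpens the distribution (near-greedy decoding);
    T > 1 flattens it, which is what reads as "creativity".
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three candidate next tokens.
logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.1)  # near-deterministic
hot = softmax_with_temperature(logits, temperature=2.0)   # closer to uniform
```

At low temperature the top token gets almost all the mass; at high temperature the same logits produce a much flatter distribution, so sampling wanders into less likely continuations.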

Most programmers are not too concerned about this, likely because in a way it is a continuation of the culture: writing detailed answers on StackOverflow, copying snippets, publishing and using open source libraries.
Of course the original author will no longer be credited; credit goes to the LLM. But this seems to be acceptable in the programming community, even though most people seem to prefer toggling off the option that allows GitHub to train on their personal data.

This inherent tendency toward plagiarism seems to many to be much more egregious in other contexts:
- LLM "art" that is obviously copying someone's art style.
- Design that looks exactly like all the other vibe coded apps.
- Music without soul.
- Writing that sounds like AI.

But beyond aesthetics, the problem becomes hard to overlook in niche fields, where the relevant writing comes from only a few people. A good example is cutting-edge scientific research, where it is not unlikely that a topic is explored in only a single published paper, written by a handful of researchers. There are many other such examples where the number of contributors to the training data is very low. When an LLM generates such content, the theft is no longer as abstract.

We seem to tolerate these automated plagiarism machines particularly because it is hard to discern exactly what was plagiarized, and who the original authors were. But when the pool of sources is small, it becomes a lot more obvious. Does that mean that it is morally distinct to generate this type of content? Or does it just expose our subconscious hypocritical views?

reddit.com
u/kallekro — 7 hours ago
🔥 Hot ▲ 93 r/BetterOffline

ELI5: I'm struggling to understand a particular aspect of the staggering losses at OpenAI and Anthropic?

I'm a new reader of Ed's blog. I'm floored by the losses the AI model companies (e.g. OpenAI, Anthropic) are sustaining. Here's an aspect I don't quite understand as I'm reading his posts.

Ed occasionally mentions business customers, like:

  • Cursor: chat bot that generates code
  • Harvey: chat bot for lawyers
  • Lovable: chat bot for creating web prototypes

These customers delegate queries to the major models...but apparently they're suffering massive losses because they're charging subscribers far less than what they have to pay the AI model providers. So presumably these guys are stuck using the API pricing model. They're bearing the full cost of their queries. But if that is indeed the case, why wouldn't OpenAI and Anthropic just kick their feet up, collect their toll, and make a fortune extracting money from this type of doomed-to-fail business customer? Why offer the money-losing subscriptions that allow users to, in extreme cases, run up to $10,000 in token usage on a $200/mo plan?

Eventually the so-called "business idiots" lending to companies like Cursor and Lovable, as well as those funding the unprofitable data centers, will run out of cash and the party's over. But in the meantime, why don't OpenAI and Anthropic just keep capex and headcount low, and rent out their models with API pricing instead of doing the whole subscription thing? Seems like they could be making a fortune doing that. But instead they're losing a fortune. What's their rationale? Is there some kind of game theory thing going on with these companies where they feel compelled to lose billions on these subscription models to ward off competitors? Or is the cost of training these models so astronomically high that even if they were to shut off all subscriptions and offer API only, they'd still be underwater? I'm confused how they could be losing so much money if there are tons of idiot-subsidized startups like Harvey and Lovable that are willing to pay OpenAI and Anthropic premium prices for API usage.

* Sorry for the naive nature of this question. I'm not really a tech/AI expert... or business expert... or really an expert in anything at all.

reddit.com
u/my-hearing-aid — 21 hours ago
🔥 Hot ▲ 216 r/BetterOffline

Anthropic secretly installs spyware when you install Claude Desktop

TLDR: when you install Claude Desktop (not Claude Code), it installs a manifest that gives it access to your browser, with full rights over your sessions.

"The bridge runs outside the browser's sandbox at user privilege level [1], and Native Messaging hosts do not surface in any standard macOS process or permission UI, they are invoked by the browser and communicate over stdio.

This is the capability that Anthropic pre-stages on my laptop the moment I install their desktop application. Without telling me. Without asking me. Without offering me the chance to say no."
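For context on what a "Native Messaging host" manifest is: Chrome-family browsers read a small JSON file that names a local binary the browser may spawn and talk to over stdio. The field names below follow Chrome's documented manifest format; the host name, binary path, and extension ID are placeholders I made up, not Anthropic's actual values.

```python
import json
import pathlib

# Sketch of a Chrome-style Native Messaging host manifest.
# Field names follow Chrome's documented format; the values are
# PLACEHOLDERS, not what Claude Desktop actually installs.
manifest = {
    "name": "com.example.native_host",
    "description": "Bridge between a desktop app and a browser extension",
    "path": "/Applications/Example.app/Contents/MacOS/native-host",
    "type": "stdio",  # the host talks to the extension over stdin/stdout
    "allowed_origins": [
        # Only the extension with this ID may launch and talk to the host.
        "chrome-extension://abcdefghijklmnopabcdefghijklmnop/"
    ],
}

# On macOS, Chrome picks up user-level manifests from this directory,
# with no permission prompt and nothing visible in System Settings,
# which is why the post calls this "pre-staging":
install_dir = pathlib.Path.home() / (
    "Library/Application Support/Google/Chrome/NativeMessagingHosts"
)
print(json.dumps(manifest, indent=2))
```

Dropping a file like this is all it takes to register the bridge; the binary then runs at normal user privilege, outside the browser sandbox, exactly as the quoted write-up describes.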

This is basically violating GDPR amongst other offences.

Technical breakdown here:

https://www.thatprivacyguy.com/blog/anthropic-spyware/

u/monkey-majiks — 21 hours ago
🔥 Hot ▲ 106 r/BetterOffline

Amazon and Anthropic announce they will continue the circle jerk to the tune of $125 billion

https://www.aboutamazon.com/news/company-news/amazon-invests-additional-5-billion-anthropic-ai

Amazon: We’re giving Anthropic $5 billion today and another $20 billion at some point in the future, and then, over the next 10 years, they’ll spend $100 billion they don’t have on AWS stuff.

Patpoose: Wtf is Trainium? Is it like Copeium? Smart people, please explain

Seriously though, if this doesn’t scream “bubble” and “circular economy” I don’t know what does

u/Patpoose74 — 20 hours ago

The Propaganda Machine behind AI Doomerism

Has anyone else noticed the well-funded propaganda machine behind the AI safety 'movement', on YouTube specifically? By AI safety here I mean the Effective Altruism (Silicon Valley-adjacent philosophical group) individuals who are apparently completely convinced that value alignment and the fantasy of creating Skynet are somehow more pressing concerns than the AI industry's environmental impact, AI-induced psychosis, theft, corporate lies, etc. Anything that can in some way actually be measured, and is actually happening, is ignored in favor of this fantasy.

There are two YouTube accounts I've found that have a lot of money going into their production process, coming from Coefficient Giving and other groups in the EA sphere. The first, whose funding is easier to identify, is "AI in Context": 300k followers, with 4 proper videos and a few short-form videos, all essentially outlining how AGI is right around the corner and how no one is ready for the rapture. Their first video is just a visualization of AI 2027, with the expected lack of critical thought, to give you an idea of the nonsense. There is also a video about "If Anyone Builds It." They've been around for about 10 months, created under the video production arm of 80,000 Hours, which according to their website is an organization dedicated to helping people find new jobs when AI takes the ones they have now in the near future. That group is in turn mostly funded by Coefficient Giving. 80,000 Hours was able to raise £30 million in 2024 from CG and others. Once again, this is all on their website, so not hidden.

The second is Rational Animations. They produce pretty high-quality animations in the same vein as AI in Context, but have been around for about 4 years and have more videos, with 430k subs. There is far less info on who is on the production team and who is funding them, though that doesn't by itself mean anything nefarious. If you google them, one of the first hits is a LessWrong thread where the founder is anonymously asking people for ideas for video topics related to EA. They state that the purpose of the account is to make videos about EA/rationality concepts, and that they had already received a grant from an EA org. Two to three years ago they pivoted from this toward just spewing every AI apocalypse fantasy you've already heard of. Some of these videos are just animated biblical parables about AI, and I'm saying that with zero exaggeration. It all reads as religious material with a veneer of "scientific" consensus. Their website lists a few partners, including 80,000 Hours. The extent of this relationship isn't really clear, as the page just lists them as "friends we've worked with."

Both of these accounts, expectedly, do not deal with any tangible impacts of AI or engage with critical analysis of the sources and studies they cite as gospel. I'm sure some of the people on these teams genuinely believe they are serving a noble cause, but all this does is scare people and act as marketing for the executive class that AI is sold to.

Obviously the discussion about how this is essentially marketing/propaganda for the AI industry is not new, but I haven't seen much discussion about accounts like these and the money that is going into them. I posted about this on sneerclub a while back, but I wrote it pretty poorly, and looking back it was difficult to understand what the hell I was talking about. I hope that is not the case here. If people are in fact talking about it, please let me know, I'd love to read about it! Heard about the sub on YouTube and thought this might be a good place to ask.

reddit.com
u/Zealousideal-Soil858 — 21 hours ago
🔥 Hot ▲ 77 r/BetterOffline

I Went Through the Data on AI Vibe-Coded Apps. It's Bad.

This is a bit of a preaching to the choir video, but her analysis of Apple blocking vibe coded apps was new to me. (Given that it only happened 2 days ago, I guess it's going to be new to a lot of people.)

When she says that "it's bad", she is really talking about the probability of the SaaS-pocalypse happening. According to her analysis, AI generation is still failing to deliver software with some of the key properties that corporate buyers are looking for when they evaluate a new product. This means that we are not likely to see a flood of new vibe-coded software replacing existing SaaS as the new software is going to be stalled out by the same old problems that the SaaS products had to grind through.

Speaking from experience as both a software vendor and someone involved in evaluating new software for a large company, I think she is on to something here. The process of approving software is sooooo slooooooow. It can take 6 months or a year to do an evaluation and make a decision to buy. What good is your ability to blast out hundreds of thousands of lines of code a day when the customer is both really inefficient and really picky on issues like security and reliability?

youtube.com
u/falken_1983 — 1 day ago

Do you consider yourself more against AI for professional or personal use cases?

Curious how everyone in this group feels about this.

The more time I spend exploring other subreddits, whether pro, anti, neutral, or completely unrelated to AI, the more respect I seem to have for the opinions of this community.

We all seem to be pretty well versed on the economic and ecological harm of AI use in professional settings. We’re all also pretty grossed out by AI art, especially when the creator is seeking some kind of profit from it, and we all seem to understand the cognitive impacts it has on the users.

Do you see a bigger issue with the way it’s used by businesses/employees, or the way people use it in their personal lives to offload their own thinking?

reddit.com
u/Patpoose74 — 1 day ago
🔥 Hot ▲ 208 r/BetterOffline

AI and the Productivity Fallacy

It's always really irked me that the AI productivity hype gets so much air time while the contrary view gets almost none. I came across this article that summarizes my frustration pretty well: basically, if everyone were being made 10x more productive, hiring should be through the roof.

readuncut.com
u/CriticalSink3555 — 1 day ago
🔥 Hot ▲ 306 r/BetterOffline

Wake up Babe, the new Palantir manifesto for the Technological Republic just dropped.

Palantir just dropped their new manifesto: 22 points, but you only have to go as far as point 3 before they are talking about cultural decadence.

I posted a link about this earlier, but I used a thread-summarising website which someone said had a virus. This is a direct link to Twitter, so it should be safe. In my defence, I was just reposting a link from a journalist I trust. If you want to see the original discussion, the thread is here

x.com
u/falken_1983 — 2 days ago

A question for the UI/UX or experience designers on the current landscape of the sector and why it is like it is

I want to ask about the current landscape of the design sector. I’ve seen so many design educators and course-selling professionals just tell high school or college graduates that AI is amazing and it is the future. I’ve seen designers glaze AI to high heaven, and when you ask them about it they go silent. What I don’t understand is: why are designers so ready to jump in on AI? Don’t they know that it is training off of their own data? Their work on Behance means nothing beside being training data, their job is (allegedly) threatened to be replaced, but they are embracing it. They aren’t saying “yeah, we don’t like that our work is being used as training fodder” but instead flexing a vibe-coded piano app as if it means something. I’m asking this because I did have an interest in the experience design sector, but now the more I look at it, the more confused I am. All of your insights would be very valuable.

reddit.com
u/ZealousidealLab7373 — 2 days ago