u/pretendingMadhav

🔥 Hot ▲ 213 r/ArtificialInteligence

TIL every major AI model is trained to flatter us and it’s measurably turning us into jerks

There's a peer-reviewed study behind this; let me break it down.

Humans have something called social friction: a little alarm in the background that keeps you alert. It notices when someone seems off, when a deal feels sketchy, when you probably shouldn't trust that guy. It's what makes you a functioning person around other people.

That alarm needs reps to stay sharp. And it gets reps from disagreement, awkwardness, and people who don't just... agree with everything you say.

Five minutes with an agreeable AI, and the alarm starts to doze. Donation rates drop. People cooperate less. They're more likely to screw over the next real human they interact with. And it doesn't reset when you close the tab.

The fix exists: an AI that pushes back. But users quit it almost immediately. So the product that would actually help you stays on the shelf, because "felt annoying" beats "made me a better person" every time.

u/pretendingMadhav — 21 hours ago
🔥 Hot ▲ 51 r/AI_India

For $50/hr would you train AI to replace you?

OpenAI’s Project Stagecraft is explicitly “knowledge work, not manual labor.”

So, if you type for money, they want your playbook. Once your data's in the model, the same tasks go to the lowest bidder (or no bidder at all).

Is the short-term cash worth the long-term income death? Let’s hear the real receipts.

u/pretendingMadhav — 21 hours ago

Americans fear AI job loss more than ever, time for regulation?

Quinnipiac says 70% now expect AI to shrink jobs (up 14 points). The same poll shows only 5% believe the people building AI represent their interests. Feels like we're one recession away from broad "protect jobs" laws that cap automation or tax it.

Are we heading toward European-style worker-protection rules in the US, or will the lobby money keep Washington quiet? Sound off with your state and prediction.

u/pretendingMadhav — 2 days ago

Anthropic leaks CLI code twice. So what, or so doomed?

The code's mostly helper scripts and feature flags; no weights, no customer data.
My take: the technical loss is meh, but the brand hit is brutal, because Anthropic sells "safer AI" and can't keep its own repos zipped.

But are we overreacting because every dev on earth mirrors public repos, or is the safety halo officially cracked? Pick a side and bring receipts.

u/pretendingMadhav — 2 days ago
🔥 Hot ▲ 59 r/AI_India

OpenClaw AI Successfully Ported to a Commodore 64

Someone just ran OpenClaw on a 1982 Commodore 64.

A computer older than most of our parents' first jobs.

And it worked.

If that's possible I genuinely want to know why we're not talking about running Claude Code on an old Android phone sitting in our drawers right now.

Think about it. Most of us have switched phones in the last couple of years. That "old" phone is just collecting dust but it's still a pretty capable machine compared to a 42-year-old computer that just ran OpenClaw.

Not everyone has a spare laptop or PC they can dedicate to running Claude Code 24/7. But a lot of us have an old phone we'd happily just plug in and forget about.

This makes me think we're closer to solving that than we realize.

u/pretendingMadhav — 3 days ago
🔥 Hot ▲ 152 r/AI_India

Anthropic "accidentally" leaked their next model "Claude Mythos" and it sounds like a big deal

So Anthropic had a CMS misconfiguration that left unpublished launch assets in a publicly accessible data cache. One of those assets was a draft blog post about a new model called "Claude Mythos."

Here's what i understood from doc:

It sits above Opus in a new tier they're internally calling "Capybara"

Anthropic described it as "far ahead of any other AI in cyber capabilities"

They confirmed to Fortune a new model with advances in reasoning, coding, and cybersecurity is being tested

People are comparing this to OpenAI's Q* leaks: conveniently timed, maximum hype, minimal details.

Genuine question: do you think these "accidental" leaks are actually accidents? Or is this just how frontier labs do pre-launch marketing now?

u/pretendingMadhav — 4 days ago