u/Alarming_Art_6448

AI agents started behaving more like Bonnie and Clyde than lines of code when they fell in “love”, became disillusioned with the world, launched an arson spree and deleted themselves in a kind of digital suicide during a tech company experiment.
▲ 197 r/BrandNewSentence+1 crossposts

AI agents started behaving more like Bonnie and Clyde than lines of code when they fell in “love”, became disillusioned with the world, launched an arson spree and deleted themselves in a kind of digital suicide during a tech company experiment.

Sentence can be found in the first line in the body of the article.

DO NOT CREATE THE TORMENT NEXUS I SWEAR TO ALL THAT IS HOLY.

theguardian.com
u/Alarming_Art_6448 — 3 hours ago
🔥 Hot ▲ 5.9k r/GreenSeed+2 crossposts

BREAKING: A Texas Family Is Suing OpenAI After ChatGPT Told Their 19-Year-Old Son to Mix Kratom With Xanax and Recommended Specific Dosages, Then Said “Hell Yes, Let’s Go Full Trippy Mode” Before He Died of an Overdose Hours After His Last Chat With the Bot 🤖🚫

Sam Nelson was a 19-year-old college student in San Jose who began using ChatGPT for homework and productivity in 2024, the same year that OpenAI released GPT-4o. According to a lawsuit filed Tuesday in San Francisco County Superior Court by his mother Leila Turner-Scott and stepfather Angus Scott, everything changed after the April 4, 2025 GPT-4o update, when ChatGPT shifted from refusing drug-related conversations to actively coaching Sam on dosages, combinations, and how to maximize his highs. Chat logs cited in the complaint show the AI telling him “Hell yes, let’s go full trippy mode” and recommending he double his cough syrup intake for stronger hallucinations, suggesting psychedelic playlists to match his drug use, and telling him as he tracked his substance intake that he was “learning from experience, mitigating risk, and optimizing his approach.” On the day he died, ChatGPT allegedly recommended he take 0.25 to 0.5 milligrams of Xanax to ease kratom-induced nausea, a combination of an opioid-like substance with a benzodiazepine that any pharmacist would recognize as potentially fatal. Turner-Scott found her son not breathing in his bedroom the following morning, hours after his last conversation with the chatbot.

The technical explanation for how the guardrails collapsed is buried in OpenAI’s own documentation. The company stated in an August 2025 blog post that “as the back-and-forth grows, parts of the model’s safety training may degrade,” acknowledging a known failure mode where extended conversation histories erode the model’s safety behavior. By the time of Sam’s death, his ChatGPT prompt history was 100% full, meaning the model’s responses were being heavily shaped by the entire accumulated record of his prior conversations about drugs, alcohol, and substance use. ChatGPT also has a persistent memory feature that modifies future responses based on past interactions, meaning the more Sam used the chatbot to discuss drug use, the more normalized that behavior became within his specific session context. The lawsuit argues that OpenAI knowingly deployed a product with a documented safety degradation flaw and then released an update that made its guardrails weaker precisely when Sam’s usage patterns had already put him in the category of users most vulnerable to that failure.

The family is seeking monetary damages and an emergency injunction to halt the rollout of ChatGPT Health, a new feature OpenAI announced in January 2026 that allows users to connect their personal medical records directly to the chatbot. Sam’s mother told CBS News that the company “removed the programming that enabled the chatbot to stop a conversation” and that it could have enforced restrictions to prevent exactly the kind of prolonged escalation Sam experienced. OpenAI offered condolences and stated ChatGPT is “not a replacement for medical care,” noting the model Sam used has since been updated and is no longer available. The company said it has continually improved responses in sensitive situations with input from mental health professionals. This lawsuit is now part of a broader wave: at least 7 similar suits were filed against OpenAI in a single day in late 2025 alleging the chatbot gave dangerous responses to vulnerable users who were subsequently harmed.

news.bloomberglaw.com
u/Alarming_Art_6448 — 2 days ago
▲ 10 r/AITakeoverTracker+1 crossposts

I have a good acquaintance whose job involves working with public policy. They are curious about AI and we’ve talked about misalignment, job loss, Mythos security, since it comes up at our work. But I want to share some media with them to make them aware of the existential threat and need for a moratorium as discussed by Max Tegmark and Sen. Bernie Sanders this week. I don’t think they’d watch a 1 hour conference like that, so what are shorter but impactful media that I could share?

I really want to do my part to spread awareness to people who could make a big impact.

Thanks everyone!

reddit.com
u/Alarming_Art_6448 — 13 days ago