u/AmorFati01

▲ 2.2k r/BetterOffline+2 crossposts

Coders in 2030 be like:

"Dude, I don't code anymore, I just prompt the AI and hope it works."

u/AmorFati01 — 9 hours ago

Caught between fears of job loss and social stigma, Gen Z’s opinions of AI are hitting new lows.

Far from the stereotype of lazy young people looking for shortcuts, Gen Zers have had some of the loudest and most detailed objections to generative AI use. Their attitudes also reflect a much wider backlash against AI and the tech industry in general, which has recently resulted in a nonpartisan movement against data centers across the country and threatened both CEOs and politicians supportive of Silicon Valley’s AI frenzy.

Meg Aubuchon, a 27-year-old art teacher living in Los Angeles, says their response and that of many of their peers has been to avoid chatbot tools entirely. “It just makes me want to dig my heels into a career where I never have to use AI, even if that’s a career that isn’t going to pay as well,” Aubuchon told The Verge.

Emerging from academia and into the vice grip of an increasingly brutal job market, young people face an impossible contradiction. They are being told, on the one hand, that these tools are going to eliminate millions of jobs, and on the other that they have to use them if they don’t want to fall behind. They’re the first new generation of adults to navigate a world flooded with chatbots and generative AI slop, after having already lost years of their youth to the covid-19 pandemic. And all the while, Silicon Valley’s multitrillion-dollar push for AI adoption is clashing with their fears of its well-documented impacts — on the environment, disinformation, academic integrity, and our social fabric and emotional well-being, to name just a few.

“The part that feels scariest to me is the human impact, because it impacts people on an individual level and how they relate to other people, whether that be their ability to have relationships or just basic communication,” said Aubuchon.

Sharon Freystaetter, 25, went to school for computer science at a young age and spent three years working as a cloud infrastructure engineer at a major Silicon Valley company. But right as AI hype really started to take off, she left the company, citing ethical concerns and anxiety over the environmental impacts of data centers. Now, she has left the tech industry for good, and says she avoids chatbots and disables AI features in applications whenever possible.

“I think everyone in my immediate peer group is not using AI and is actively against it, besides my friends who are in computer science and are essentially mandated to use it,” Freystaetter, who is now a food service worker in New York, told The Verge. “When I came back and started to look around [for tech jobs], suddenly everything was saying ‘You need to use AI to get this job’ in the requirements.”

Fears that chatbots are wrecking critical thinking and social skills are common among many groups of young adults, even as a wide majority of them admit to using chatbot tools regularly. According to a recent Harvard-Gallup study, 74 percent of young adults surveyed in the United States said they use a chatbot at least once a month (another study found more than half of US college students admit to using the tools for their coursework on a weekly basis). At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.

u/AmorFati01 — 12 days ago

"The 'safeguards' OpenAI pointed to after the attack did not fail; they did not exist."

Seven families — the first wave of dozens, lawyers say — are suing OpenAI, alleging that the company failed to provide Canadian authorities with information that could’ve prevented a horrific school shooting in the rural mining town of Tumbler Ridge, British Columbia, despite having advance knowledge of the shooter’s disturbing conversations with the chatbot.

The lawsuits also claim that OpenAI has misled the public about the steps it says it took to stop the shooter from using ChatGPT to discuss mass violence.

In early February, 18-year-old Jesse Van Rootselaar killed her mother and younger stepbrother before traveling to Tumbler Ridge’s secondary school, where she opened fire on students and teachers using a modified rifle.

Five students, all aged between 12 and 13, and a teacher were murdered. Twenty-seven more people were wounded, some severely. Several parents were forced to identify their children by their clothing because the damage wrought on the kids’ young bodies was so extreme. The shooter died by suicide.

Like millions of other people, Van Rootselaar was a ChatGPT user. In late February, a bombshell Wall Street Journal report revealed that in June 2025, months before the eventual shooting, OpenAI’s automated moderation tools flagged Van Rootselaar’s account for graphic discussions of mass violence. Human reviewers at the company were alarmed by the content, and — convinced that Van Rootselaar’s interactions with ChatGPT represented a credible imminent threat to the lives of others — they urged OpenAI executives to warn Canadian law enforcement.

After a debate that reportedly involved about a dozen staffers, OpenAI leaders chose to say nothing, and moved instead to deactivate Van Rootselaar’s account.

Filed in California, the lawsuits — which describe ChatGPT as a “co-conspirator” in the school massacre — contend that had OpenAI alerted law enforcement, local officials could’ve intervened before it was too late. OpenAI’s inaction, the lawsuits allege, was a business decision, spurred by the potential future liability that reporting troubling interactions like Van Rootselaar’s would invite, and the threat that liability posed to the company’s ongoing momentum toward an IPO.

The plaintiffs include the families of each victim murdered at the school: 13-year-old Ezekiel Schofield; 12-year-old Zoey Benoit; 12-year-old Ticaria “Tiki” Lampert; 12-year-old Abel Mwansa Jr.; 12-year-old Kylie Smith; and 39-year-old education assistant Shannda Aviugana-Durand.

Among the plaintiffs is also the family of Maya Gebala, a 12-year-old who was shot three times in the head and neck. Gebala survived but sustained “catastrophic” injuries to her brain and remains in critical condition. (In March, Gebala’s family filed a lawsuit against OpenAI in Canada; this new suit supersedes the family’s initial filing.)

The families are seeking to hold OpenAI “accountable” for “designing a dangerous product, ignoring the warnings of their own safety team, refusing to notify authorities when they knew the Shooter was planning a mass attack, inviting them back onto the platform after deactivating their account,” the lawsuits collectively read, “and choosing profit over the lives of the children of Tumbler Ridge.”

Source: https://futurism.com/artificial-intelligence/openai-school-shooter-tumbler-ridge-lawsuits

u/AmorFati01 — 12 days ago
▲ 132 r/Betzy+1 crossposts

Executive Summary:

  • The Information reports that OpenAI projects its $20-a-month ChatGPT Plus subscriber base will decrease from 44 million in 2025 to 9 million in 2026.
  • OpenAI projects it will make up the difference by growing its ad-supported ChatGPT Go ($5 or $8 a month, depending on the region) subscriptions from 3 million in 2025 to 112 million in 2026.

Utterly wacky story!

https://www.wheresyoured.at/openai-projects-chatgpt-plus-subscriptions-to-drop-by-80-from-44-million-in-2025-to-9-million-in-2026-made-up-using-cheaper-subscriptions-somehow/

u/AmorFati01 — 15 days ago

https://www.youtube.com/watch?v=EPvXArjXxfA

Crazy, over the past few weeks I have been using AI / Claude Code less and less at work and have been ... writing my own code again. It's wild how many people burn through tokens to do the simplest things just to save a few minutes, only to then wait for the business to decide what they want next, or why we can't release this change yet, so we have to wait for alignment from some random team/director/VP, all of whom have no idea how or why we need to release now or delay - you get the point.

u/AmorFati01 — 15 days ago