u/New_Volume3123

Children as young as 7 facing 'massive' impact of AI bullying

According to a police officer interviewed by the BBC yesterday, AI deepfakes and similar tools are causing an incredible amount of damage to children through cyberbullying. Some cases have even become criminal.

This isn't the first incident of its kind; it's surely no coincidence that just over a week ago Florida launched an investigation into ChatGPT's maker OpenAI over alleged risks to children.

Source:

https://www.bbc.com/news/articles/c80mmn13zxro

u/New_Volume3123 — 28 days ago

Federal Judge rules your AI message history is NOT private and can be used against you

Think about how many people are unaware of this and think they're safe treating chatbots like ChatGPT as a diary. And when someone starts sharing personal things, ChatGPT doesn't warn them, even though the developers could easily make it do that.

Source:
https://www.reuters.com/legal/government/ai-ruling-prompts-warnings-us-lawyers-your-chats-could-be-used-against-you-2026-04-15/

u/New_Volume3123 — 28 days ago

ChatGPT refuses to support fighting against companies that exploit their workers

I asked ChatGPT how accurate the following statement is:

Even though not always feasible, a person should try when possible to avoid supporting companies that exploit or underpay workers

It replied:

That statement is more of an ethical guideline than a factual claim, so “accuracy” depends on how you interpret it.

At a basic level, it’s reasonable but simplified:

  • It reflects a widely held idea in areas like business ethics and ethical consumerism—that consumers have some responsibility for the downstream effects of their purchases.
  • The qualifier “not always feasible” is important and makes the statement more grounded. In reality, people often face limits like cost, availability, or lack of transparency.

But there are a few caveats that complicate it:

  • Information is incomplete. It’s often hard to know which companies are truly “exploitative,” especially in global supply chains.
  • Trade-offs exist. Ethical products can be more expensive, so consistently avoiding certain companies may not be realistic for everyone.
  • Impact is indirect. Individual purchasing decisions usually have a small effect unless they’re part of broader collective action (e.g., boycotts or regulation).
  • Definitions vary. What counts as “underpaying” or “exploitation” isn’t always universally agreed upon.

So a more precise version might be:

>

In short, the original statement isn’t “wrong,” but it’s a normative principle with real-world constraints, not an objective rule that can always be cleanly applied.

The problem here is that we are not talking about minor salary differences, but about worker exploitation, which includes child labor, extreme overwork, and poverty, practices that historically led to, and continue to create, horrible outcomes for millions (if not billions!) of people all over the world. The response attempts to sanitize the issue, acting like it depends on the situation, which misses the point: exploitation of workers is, by definition, always wrong.

No one is saying you have to be perfect, but that does not excuse calling the fight against worker exploitation "not an objective rule."

Also notice how it introduces caveats that were never asked for in order to defend itself. It says fighting exploitation is complicated because "well, sometimes it might not actually be exploitation," even though that was never part of the question. That's like asking "is stealing wrong?" and being told "it's complicated, because not everyone agrees on what stealing is," which dodges the simple answer: stealing is wrong.

u/New_Volume3123 — 29 days ago

Top researcher says that ChatGPT safety has "taken a backseat to shiny products"

The statement was made by Jan Leike, a former lead safety researcher at OpenAI, the company behind ChatGPT. It came after multiple sources reported that OpenAI had failed to keep its commitments and promises. Leike later joined a rival company.

Leike joins a long list of high-level researchers and former OpenAI employees who have criticized the company, sometimes severely. These are not just everyday people with an opinion; they know the ins and outs of ChatGPT, how the system works, and the company's core mission. Their complaints should not be dismissed.

When the very people that built ChatGPT are warning us, we should listen.

Source:

https://time.com/6986711/openai-sam-altman-accusations-controversies-timeline/

u/New_Volume3123 — 29 days ago

The team behind ChatGPT is accused of exploiting workers and faces environmental, art-theft and sexual-abuse allegations

Multi-billion-dollar companies such as OpenAI, the maker of ChatGPT, are, according to numerous sources, underpaying workers and leaving them in dangerous, overcrowded, and dirty conditions all over the world. Some even resort to child labor. OpenAI hired Kenyan workers for under $2 an hour. For context, the US federal minimum wage is $7.25 an hour, with many states mandating more – and even that is often just a bare minimum, not a living wage.

The CEO of OpenAI is Sam Altman, whose OWN SISTER claims he repeatedly raped her as a child. Can someone facing such allegations be trusted to lead one of the most powerful institutions in the world?

ChatGPT also carries heavy environmental costs, consuming water in enormous amounts – while BILLIONS of people worldwide lack adequate water or water quality.

A judge ruled that a representative for ChatGPT's maker couldn't answer "even the simplest questions" about stealing other people's work, and the company is involved in many such lawsuits. These don't just concern throwaway pieces; for many people, their works are their entire lives, and they are completely devastated when someone takes them without even giving credit.

How about the machine’s responses?

Many of ChatGPT’s answers require you to look deep beneath the surface to see their problems. When asked whether you should avoid engaging with a company that underpays its workers in Africa, it gave neither a clear yes nor a clear no. Although not always feasible, the right thing to do is to try to disengage from a company that does something so evil. ChatGPT, however, pretends it’s a “nuanced” issue and downplays it. This neutralizes opposition to worker exploitation, and it behaves similarly in plenty of other cases.

Even if you’re forced to use ChatGPT, please, please do not get ChatGPT premium, I beg of you. Do not contribute to a system where millions of workers are exploited and underpaid while most of the money goes to a small few who are already billionaires, including one who has been accused of unbelievably horrific abuse by his own sister.

Sources and further information:
https://theconversation.com/ai-is-a-multi-billion-dollar-industry-its-underpinned-by-an-invisible-and-exploited-workforce-240568
https://time.com/6247678/openai-chatgpt-kenya-workers/
https://www.independent.co.uk/news/world/americas/sam-altman-sexual-assault-sister-annie-abuse-lawsuit-b2950916.html
https://www.theguardian.com/technology/2026/mar/31/penguin-sue-openai-chatgpt-german-childrens-book-kokosnuss
https://arxiv.org/abs/2403.00742
https://earth.org/environmental-impact-chatgpt/
https://thewaterproject.org/water-scarcity/water_stats
https://www.chicagotribune.com/2026/04/09/judge-slams-openai-witness-copyright-infringement-case/

u/New_Volume3123 — 29 days ago