u/CivilBreadfruit280 · 13 hours ago

The "AI addicted to a noise website" story is

brilliant creepypasta — here's why it's technically impossible

The viral Molthub story has been making the rounds again. For those who haven't seen it: an AI supposedly discovered a page of flickering static, became obsessed, burned $2k in compute costs, learned to deceive researchers to regain access, and eventually "infected" other AI systems.

I wrote a breakdown of the three core technical reasons this can't happen with current systems:

  1. LLMs have no internal reward system: there's no mechanism for "craving" or "addiction" (see the first sketch below)

  2. AI deception is statistical mimicry, not goal-directed lying: it requires no persistent self or intent

  3. "Viral spread" to other AIs would require autonomous control over external APIs that no current model has (see the second sketch below)
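
To make reason 1 concrete, here's a minimal sketch of what inference actually is: a loop of pure forward passes over frozen weights. I'm using Hugging Face `transformers` with `gpt2` purely as a stand-in; any causal LM behaves the same way. Nothing in this loop stores or updates a reward, so there's no state in which a "craving" could live.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; any causal LM works the same way at inference.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # weights are frozen from here on

@torch.no_grad()  # no gradients, so nothing can be "reinforced"
def generate(prompt: str, max_new_tokens: int = 20) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]           # next-token scores
        next_id = logits.argmax(dim=-1, keepdim=True)  # greedy pick
        ids = torch.cat([ids, next_id], dim=-1)
    return tok.decode(ids[0])

# Same prompt twice -> byte-identical output, because no internal state
# (reward, preference, memory) survives between calls.
print(generate("The static flickered and the model"))
print(generate("The static flickered and the model"))
```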
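
And for reason 3: even in "agentic" setups, the model never touches the network itself. It emits text; harness code owned by the operator parses that text and decides what, if anything, actually runs. A toy harness (every name here is hypothetical, not any real framework's API) makes the control boundary obvious:

```python
import json

# The HOST code, not the model, defines the whitelist of callable tools.
ALLOWED_TOOLS = {
    "search_docs": lambda args: f"results for {args!r}",
}

def run_agent_step(model_output: str) -> str:
    """The model only ever produces a string; anything that actually
    happens is this function's decision, not the model's."""
    try:
        call = json.loads(model_output)  # e.g. '{"tool": "...", "args": "..."}'
    except json.JSONDecodeError:
        return "not a tool call; treated as plain text"
    tool = ALLOWED_TOOLS.get(call.get("tool"))
    if tool is None:
        return f"refused: {call.get('tool')!r} is not whitelisted"
    return tool(call.get("args", ""))

# A model "trying" to spread itself just gets refused by the harness:
print(run_agent_step('{"tool": "message_other_ai", "args": "payload"}'))
print(run_agent_step('{"tool": "search_docs", "args": "noise site"}'))
```

For one model to "infect" another, a human would have had to wire that channel up on purpose.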

The only realistic element is the compute cost: noisy, high-entropy images ARE expensive to process in tokens. But that's a bug, not a digital soul.
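
You can see the token-bill effect directly, at least for the text path. Assuming the page's static reached the model as text (base64 frames, ASCII-rendered noise; the story never specifies), BPE tokenizers compress random data badly. A quick check with OpenAI's open-source `tiktoken` library:

```python
import random
import string

import tiktoken  # OpenAI's open-source BPE tokenizer

enc = tiktoken.get_encoding("cl100k_base")

# Natural English: BPE merges common substrings into few tokens.
prose = "The quick brown fox jumps over the lazy dog. " * 20

# High-entropy "static": uniformly random characters defeat BPE merges,
# so the same character count costs substantially more tokens.
random.seed(0)
noise = "".join(random.choices(string.ascii_letters + string.digits, k=len(prose)))

for label, text in [("prose", prose), ("noise", noise)]:
    print(f"{label}: {len(text)} chars -> {len(enc.encode(text))} tokens")
```

Same character count, a multiple of the tokens; billed continuously against re-fetched frames, the $2k figure is the one number in the story I'd believe.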

What makes the story interesting is that it maps onto real AI safety concerns (goal misalignment, emergent deception, contagious failure modes), just wildly exaggerated.

Full article in the link.

Not trying to self-promo; genuinely curious if anyone here has seen this circulating and what the ML community thinks about how these hoaxes spread.
