
did a deep dive (extract below) into AI images: Italian brainrot memes, the John Pork lore, AI-driven folklore, slop, ultra-realistic image generators, ‘deadbots’, our besieged imagination and why nothing feels real anymore.
my argument is that we should see brainrot content in the vein of subversive absurdist art like Victorian nonsense literature, Surrealism and the Dada movement. It’s ultra-realistic AI images - and their universal availability - that we should be far more concerned about.
__________________________________
Brainrot as a cultural form, and the set of lore practices which emerged around it, is a product of the AI revolution, a barely processed pandemic and the collapse of the post-1945 world order. It is a contradiction: lore is about shared meaning-making and assembling pieces into recognisable narrative shapes; brainrot is about revelling in nonsense. It pokes fun at our current epistemic crisis, in which we are losing our grip on reality itself. Chat, is this real? Is this Large Language Model my friend?
Italian brainrot presents the AI-generated image as something excessive, nonsensical and deranged, at the very moment Silicon Valley companies are spending billions to naturalize AI and weave it imperceptibly into our daily routines. On every app we are now accosted by AI tools that no one asked for and no one can remove. Brainrot introduces a glitch in the AI matrix, a violent jolt in the frictionless world. Remember, the half-frog, half-tyre character Boneca Ambalabu is just as real as the photorealistic AI ‘slop’ flooding social media, which ranges from impossibly curvy OF girls to ‘footage’ of racialized ragebait and sympathy-fishing images of crying children and puppies rescued from floods. You’re not meant to think it’s real, just *feel* that it could be.
In any taxonomy of AI content, the intentional absurdity of brainrot is its defining feature. It invites interpretation, then laughs at our efforts. Brainrot content may be AI-generated, Daniele Zinni argues, yet it is ‘anything but statistically probable’ and ‘strange enough to surprise rather than trigger a predictable reaction’. It’s hard to feel that Tralalero Tralala is real, hence the satirical mileage in pretending that he is. Slop is more insidious: it wants something from us. Emotionally coercive slop images leverage our attention for ulterior motives, trading in our prejudices, allegiances and desires. It is the tool of the trade, writes Gunseli Yalcinkaya, for ‘grifters trying to make a quick buck’ and ‘politicians wanting to overwhelm the system with AI cringe edits of themselves as *Star Wars* characters’. Slop is the medium of narcissistic wish fulfilment, so it makes sense that the current White House cannot resist its lure.
I would argue that it’s not the absurd material produced by AI that we should find repulsive, but anything we might take, at first glance, as real. Just a few years after we laughed at the nightmarish sketches produced by early image generator DALL-E, we now have tools on our phones, like Google’s Nano Banana Pro, that can conjure entirely convincing synthetic people with the right number of fingers. This effectively means that photography as a means of gathering evidence and documenting real events is finished. In its place is an exhausting cynicism in which every image must be interrogated with scepticism. Footage that once might have stirred public outrage – of corruption or war crimes – loses its moral authority, dismissed with the wave of a politician’s hand.
It’s not just that AI undermines the veracity of images; it’s the troubling new ways it can *simulate* them. Genealogy site MyHeritage offers (for a monthly fee) to ‘bring dead ancestors back to life’, using deepfake technology to animate their faces in photographs. The company introduced the ability for these figures to speak just a year after assuring users it wouldn’t. Grief Tech is an unsurprisingly fast-growing market, because who wouldn’t want to reach out to a lost loved one? Researchers at the University of Cambridge have called for greater guardrails, warning that these deadbots may create unhealthy emotional attachments, stifling the mourning process and leaving the bereaved prone to manipulation. Imagine hearing your dead grandparent tell you that your monthly subscription price is going up. It’s multiple *Black Mirror* episodes at once.
In a sense, AI image generation is the terminal endpoint of the centuries-long Enlightenment project of illumination: the idea that everything in darkness should be dragged into the light, the invisible made visible, scientific reason as both means and end. The principle of empirical observation and transparency underpins everything from democracy and secularism to mass media and surveillance. Now AI image generation gifts us mortals the ‘God’s eye view’, all-seeing and therefore all-knowing. We can make whole worlds in seven seconds. ‘There are eyes everywhere. No dark spots left’, cultural theorist Paul Virilio once remarked: ‘what will we dream of when everything is visible? We’ll dream of being blind’.