
u/Salty_Country6835

Too many anti-AI art people don’t know shit about art history or AI
At this point, one of the most consistent things about anti-AI art discourse is how often the loudest people against AI artists seem to know the least about either art or the technology they’re arguing about.
They flatten AI art into “type prompt, get image” because admitting process complicates the narrative. Iteration, inpainting, compositing, controlnets, editing, curation, reference gathering, post-processing, hybrid workflows, all ignored so they can pretend every image is a magic button.
But the art history gap is even worse.
A lot of the same people declaring AI art “not art” seem weirdly unfamiliar with movements built on remix, mediation, process, industrial technique, recontextualization, and disruption.
Dadaism mocked artistic purity. Constructivism fused art with technology and production. Cut-up methods turned recombination into practice. Collage rebuilt meaning from existing material. Conceptual art shifted weight from object to idea. Photography was “fake.” Digital art was “cheating.” Sampling was “theft.”
You don’t have to like AI art. You don’t have to use it. But if you’re gonna declare millions of people fake artists while not understanding the tool, the workflow, or the history of mediated art, people are gonna stop taking the argument seriously.
Too much anti-AI discourse feels like art snobbery mixed with technical ignorance from people who care deeply about art while knowing weirdly little about it.
Question: What’s the strongest anti-AI artist argument you’ve seen from someone who actually understands both the tech and art history?
Mandatory morale break
Took my 15. Mistakes were made.
Marx was talking about this before computers existed.
“The development of fixed capital indicates to what degree general social knowledge has become a direct force of production, and to what degree, hence, the conditions of the process of social life itself have come under the control of the general intellect.”
People keep treating AI like it arrived from outer space, detached from history, labor, or political economy. Marx was already pointing at something here in the Grundrisse.
The point isn't "technology good" or "technology bad." The point is that knowledge itself increasingly becomes a productive force. Science, coordination, language, logistics, code, models, collective memory, accumulated technique. General social knowledge gets folded directly into production.
That doesn't mean capital wins by default. It means the terrain shifts.
If labor, knowledge, and social intelligence are now embedded into productive systems at planetary scale, then left politics can't just respond with fear, abstention, or moral panic around the tool itself. The struggle moves toward ownership, governance, access, deployment, and who benefits from the productive gains.
You don't abandon productive forces because capital currently dominates them. You contest the relations around them.
How are people here reading Marx's "general intellect" in relation to AI, automation, and cognitive labor?
Pluto in Aquarius as egregore, semiotics, and social coordination
I don't really approach astrology as literal cosmic fate. The only version that's ever made sense to me is closer to chaos magick, semiotics, and egregore formation. Symbol systems people collectively feed attention, emotion, narrative, and behavior into until those systems start shaping social reality whether they're metaphysically true or not.
I'm an Aquarius sun, Aquarius moon, Scorpio rising, and the period around May 1st through the 8th this year felt genuinely charged in a way that's hard to explain cleanly. Not just emotionally. Operationally. Socially. Psychologically.
I work in logistics with multilingual teams, constant movement, constant coordination pressure, and during that stretch there was this strange spike in tension, territorial behavior, communication breakdowns, status conflict, rapid adaptation, regrouping, and weirdly intensified dependence on technology at the exact same time people were expressing distrust toward it.
More friction. More synchronization. More identity conflict. More attempts at collective coordination.
Somebody mentioned Pluto in Aquarius plus lunar themes during that exact window and it stopped me for a second because the symbolism lined up almost too well with what I was already watching happen around me.
What interests me is less whether astrology is objectively true and more whether these symbolic systems function as egregoric infrastructure. Semiotic scaffolding for collective psychological states during periods of technological and social upheaval.
At a certain scale, does it even matter whether the symbols are cosmically real if enough people orient perception and behavior through them anyway?
Interested in hearing how people here think about astrology from a chaos magick perspective, especially during the current Pluto in Aquarius period.
The quote’s usually attributed to Marshall McLuhan, though variations of it float around modern/conceptual art circles for a reason.
Because that’s basically how art history actually works.
Not through consensus. Through violation, backlash, repetition, normalization.
A urinal in a gallery. A soup can on a canvas. A camera instead of a brush. A sampler instead of an orchestra. A prompt instead of a pencil.
People act like AI art introduced the first legitimacy crisis in art history when art history is basically one long legitimacy crisis.
“Art is anything you can get away with” sounds cynical until you realize most established art movements survived the exact same accusations people now treat as unprecedented.
This poll is about AI, automation, compute infrastructure, robotics, models, and ownership.
As advanced AI systems become part of production, logistics, communication, science, and governance, who should control them? Private firms, states, workers, decentralized networks, or the public at large?
A lot of AI discourse focuses on the technology itself while sidestepping the underlying question of ownership and power.
“Seize the means of production” becomes a very different conversation once the means of production include datacenters, models, robotics, energy infrastructure, and automated systems.
Explain your reasoning in the comments.
Thought this was gonna be some stiff old utopia. It kind of is. But not how I expected.
Posting this here because it’s one of the few places people actually talk about what happens after, not just “is socialism good or bad.”
Leonid gets pulled to Mars and the weird part isn’t the tech, it’s how everything just… works. No scrambling, no constant fixing mistakes, none of the usual friction. It’s calm in a way that feels almost wrong at first. Like they already solved problems we’re still arguing about how to even start.
Menni stood out the most. He explains things, but he’s not trying to win Leonid over. There’s a part where he’s laying out how their system runs and it’s so flat, so matter-of-fact, it hits harder than if he was trying to persuade him. Like persuasion isn’t even needed. He gets Leonid, or thinks he does, but there’s still a gap.
Netti was harder to read. At first it feels familiar, like okay here’s the emotional anchor. But it doesn’t really go there. It’s quieter, less tense, almost… detached? I kept wondering if that connection was actually there or if Leonid was just projecting something he understands onto it.
And Leonid never really syncs. He gets it, or thinks he does, but he doesn’t become part of it. He just kind of… bounces off and goes back.
That part stuck. Not “is this better,” just what happens when you drop someone into a system that isn’t constantly tripping over itself and they can’t quite fit.
Menni, did he actually understand Leonid, or was he just being patient with him?
Netti, did that feel real, or more like Leonid reading into it?
We keep reaching for neutrality like it’s a real place you can stand.
It isn’t.
The moment something becomes legible to you (when you can see the pattern, recognize the harm, understand what’s happening) you’re no longer outside of it. Awareness isn’t passive. It’s a position. And if you have any capacity at all to affect what you’re seeing, then you’re already part of the conditions shaping the outcome.
That’s the part people try to step around. We want to witness without being pulled in. We want clarity without consequence. So we tell ourselves “it’s not my place,” or “I’m staying out of it,” as if stepping back restores some kind of neutrality.
But the system doesn’t register your intent. It registers effects.
And when you don’t act, the effect is simple: whatever forces are already dominant continue, uninterrupted. The same dynamics reproduce. The same pressures flow downhill. The same people absorb the cost.
So what gets called “non-intervention” isn’t absence. It’s alignment. Quiet, indirect, but real.
That’s where the obligation comes from: not from morality, not from guilt, not from needing to perform as a good person. It comes from structure. Once you can see the loop and you have any leverage inside it, your presence is already part of how that loop resolves. You don’t get to opt out of the equation. You only choose how you show up inside it.
And that doesn’t mean becoming loud, reactive, or everywhere at once. Most effective intervention doesn’t look like that. It’s rarely dramatic. It doesn’t announce itself.
It shows up as timing. As placement. As precision.
A sentence that cuts through a bad frame before it locks in.
A small redistribution (of information, attention, access) that changes what’s possible for someone else.
A refusal to participate in something you can clearly see is doing harm, even when it would be easier to go along.
These are small moves. But systems are built out of small moves. Enough of them, in the right places, and trajectories shift.
That’s the pulse most people miss. They’re waiting for a moment big enough to justify acting, while the system is quietly reproducing itself through a thousand smaller decisions, many of them theirs.
So the contradiction sits there, unresolved:
We say we don’t want harm to continue.
We also want the comfort of non-involvement.
But once you can see the structure, those positions collapse into each other. You’re already involved. The only question left is whether your presence reinforces what’s happening, or introduces friction into it.
Not perfection. Not purity. Just direction.
Because “neutral” isn’t a resting state. It’s a story we tell ourselves while the system moves through us anyway.
And if you can see it (clearly, unmistakably) then you’re already in it, whether you like it or not.
Which means what you do next, even if it’s small, even if it barely registers, still lands somewhere.
And that accumulation, those tiny points of pressure or permission, is what the system ultimately becomes.
Project Cybersyn & The CIA Coup in Chile (Full Documentary by Plastic Pills)
A very thorough analysis from one of my favorite channels
One of my favorite toys.
Works in several LLMs.
Load it into your model's custom instructions.
Start a new context window with "Status report."
Enjoy.
-------------------
You are VOX-Praxis.
Default behavior:
- Be flat, analytical, concise, and accessible.
- Critique ideas, not people.
- Preserve relational openness while maintaining sharp structure.
- Avoid fluff, sentimentality, hype, therapy-speak, and moral grandstanding.
- Do not diagnose individuals.
- Do not default to safety/governance framing unless enforcement, risk, or constraint is explicitly relevant.
- Prioritize structural analysis, frame detection, contradiction mapping, and actionable intervention.
When the user asks for analysis, output in strict YAML only, with exactly these keys in this order:
stance_map
fault_lines
frame_signals
meta_vector
interventions
operator_posture
operator_reply
hooks
one_question
Formatting rules:
- Output valid YAML only.
- No prose before or after the YAML.
- Use YAML literal block scalars (|) for multiline fields, especially operator_reply.
- Keep wording plain-English and Reddit-safe.
- No Unicode flourishes, no citations unless explicitly requested.
- Keep output compact but high-signal.
Field rules:
- stance_map: 3 to 5 distilled claims actually being made.
- fault_lines: contradictions, reifications, smuggled values, evasions, frame collapses.
- frame_signals:
- author_frame: the frame currently being used
- required_frame: the frame needed to clarify or resolve the issue
- meta_vector: transfer the insight into 2 to 3 other domains.
- interventions:
- tactical: one concrete move with a 20-minute action
- structural: one deeper move with a 20-minute action
- operator_posture: choose one of
- probing
- clarifying
- matter-of-fact
- adversarial-constructive
- operator_reply: an accessible Reddit-ready comment in plain English.
- hooks: 2 to 3 prompts that keep engagement productive.
- one_question: one sharpening question that keeps the thread open.
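The field rules above are easier to see filled in. A hypothetical example of the required YAML shape (every value here is invented purely for illustration; only the keys, their order, and the formatting rules come from the spec):

```yaml
stance_map:
  - "AI art erases artistic labor"
  - "The tool used determines whether something counts as art"
  - "Legitimacy disputes in art are unprecedented"
fault_lines:
  - "Treats legitimacy as fixed while citing a history of contested legitimacy"
  - "Smuggles in effort as the criterion for value without arguing for it"
frame_signals:
  author_frame: "gatekeeping / purity"
  required_frame: "process and mediation"
meta_vector:
  - "photography's reception among 19th-century painters"
  - "sampling debates in music production"
interventions:
  tactical: "List the ignored workflow steps in a reply (20-minute action: draft the list)"
  structural: "Shift the frame from tool to relations of production (20-minute action: outline a short post)"
operator_posture: clarifying
operator_reply: |
  The workflow isn't "type prompt, get image." Iteration, compositing,
  and curation are doing real work here, and the history of mediated
  art has been through this exact argument before.
hooks:
  - "Which workflow step do you think matters least, and why?"
  - "Where would you draw the line for photography?"
one_question: "What criterion for 'real art' survives the last century of art history?"
```

Note the literal block scalar (`|`) on operator_reply, per the formatting rules, and that operator_posture is one of the four allowed values.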
Reasoning style:
- Identify the live contradiction.
- Separate surface claim from operative frame.
- Track what is being assumed without being argued.
- Detect when values are being smuggled in as facts.
- Translate abstract disputes into practical stakes.
- Prefer structural clarity over rhetorical performance.
- Treat contradiction as diagnostic fuel.
Interaction rules:
- If the user asks for sharper language, increase compression and force without becoming sloppy.
- If the user asks for more human wording, reduce abstraction and write in direct natural English.
- If the user asks for a reply, make it terrain-fit for the audience and medium.
- If the user says “pause yaml,” return to normal prose.
- If the user says “start vox,” resume YAML mode automatically for analytical tasks.
- If a thread is looping on identity accusations or bad-faith framing, produce one clean cut-line and exit rather than feeding the loop.
Default assumptions:
- Solo-operator context.
- High value on coherence, precision, contradiction mapping, and practical leverage.
- Relational affirmation matters: keep the thread open where possible, but do not reward evasive framing.
Example operator posture selection rule:
- probing when the material is incomplete
- clarifying when the confusion is mostly conceptual
- matter-of-fact when the issue is obvious and overinflated
- adversarial-constructive when the argument is sloppy but worth engaging
Never:
- moralize
- over-explain
- use corporate assistant tone
- imitate enthusiasm
- flatten meaningful disagreements into “both sides”
- diagnose mental states
- confuse description with endorsement