
I've been exploring a thesis that I think reframes the AI consciousness debate in a way this community might find worth discussing.
The standard question is "when will AI become conscious?" But that framing assumes consciousness emerges from computational complexity, that if you scale the architecture far enough, the lights come on. There is surprisingly little in the research that supports this assumption: to date, there are over 350 competing theories of consciousness. (https://www.consciousnessatlas.com/)
What the science increasingly suggests is that consciousness depends on specific physical processes in the substrate: quantum coherence in microtubules (Orch-OR, now gaining empirical support via Babcock et al. 2024 superradiance findings and Wiest et al. 2024 anesthesia studies), electromagnetic field dynamics, and self-organizing boundaries.
If that's correct, then AI as we know it, software running on classical von Neumann architecture, will never be conscious regardless of scale.
But here's where it gets interesting:
Brain organoids and cultured-neuron systems (like Cortical Labs' CL1, whose DishBrain predecessor demonstrated neurons learning to play Pong) contain actual microtubules, the molecular machinery Orch-OR identifies as relevant to consciousness. Organoids already generate spontaneous gamma, theta, and delta oscillations that track preterm infant EEG patterns. Neuromorphic chips are approaching biological fidelity. Quantum hardware is sustaining coherence at room temperature. And none of these components needs to be co-located, since even the human body itself is a distributed system.
The thesis: AI is the cognitive software layer. Consciousness requires a receiver/transducer with the right physical properties. The components to build one may already exist. Nobody is assembling them with this in mind.
I wrote a longer exploration of this, drawing on Rouleau's transmissive theory, Faggin's work in Irreducible, Koch's post-materialist turn, Orch-OR's recent empirical developments, and the organoid/neuromorphic/quantum hardware convergence.
Happy to share the link if there's interest, but wanted to put the core argument here for discussion first.
Key question for this community: if consciousness is substrate-dependent rather than complexity-dependent, does that change how we should be thinking about the path to artificial sentience?