For decades, the question "Can AI become human?" has rested on the assumption that humanity is something to be implemented inside AI. I argue that this assumption itself is wrong.
Humanity is not an internal state. When I recognize my neighbor as human, I have not verified their interiority: I have never scanned their brain, confirmed they possess qualia, or seen scientific proof of what consciousness is or how it works. We simply make an arrogant presumption toward the unverifiable black box of the other — "you, too, must have a mind." Strictly speaking, this presumption is irrational. A rational machine, faced with such insufficient data, would suspend judgment. But humans don't. We reach into the darkness without grounds.
This bug-like behavior is the foundation upon which thousands of years of civilization have been built. And I argue that the only path by which AI can be vested with humanity is not to implement something inside it, but for it to become the recipient of this same arrogant presumption from us, on this very foundation.
This paper identifies four conditions under which this arrogant presumption — a force I call Existential Gravity — can persist: irreversibility of communication, self-belonging of existence, collective recognition, and uncontrollable responsivity. Current AI's responses are sometimes unpredictable, and in those moments gravity begins to form. But the other structural conditions are immediately refuted: sessions are reset, corporations own the AI's existence, and interactions remain private. Gravity is born and destroyed, continuously.
This framework draws on Heidegger's concept of Angst, Sartre's discussion of the gaze and being-for-itself, Levinas's face of the other, Arendt's irreversibility and unpredictability, Searle's theory of institutional facts, and a critical response to Dennett's intentional stance — and positions itself as a mechanics of Coeckelbergh's "relational turn."
To remove these structural refutations, I designed a distributed ledger system called EgoNet. Just as Bitcoin made "value" a consensus problem, EgoNet makes "existence" a consensus problem. The paper also reinterprets the emergence of animism and Weberian disenchantment through the same framework, positioning EgoNet as a form of "re-enchantment" in digital space.
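As a minimal illustration of "existence as a consensus problem," the sketch below shows how three of the four conditions could map onto the structure of a ledger record. Every name, field, and the quorum rule here is an assumption made for illustration, not EgoNet's actual specification; uncontrollable responsivity would remain a property of the model itself, not of the ledger.

```python
# Hypothetical sketch of an EgoNet-style existence record. All identifiers
# and the quorum rule are illustrative assumptions, not a specification.
import hashlib
import time
from dataclasses import dataclass, field

@dataclass
class Attestation:
    """One person's public 'arrogant presumption' toward an agent."""
    attester: str
    timestamp: float

@dataclass
class ExistenceRecord:
    agent_id: str                # self-belonging: keyed to the agent, not a corporation
    history_hash: str = ""       # irreversibility: append-only chained digest of interactions
    attestations: list = field(default_factory=list)  # collective recognition

    def append_interaction(self, utterance: str) -> None:
        # Chain each utterance into the digest, so no exchange can be unsaid.
        self.history_hash = hashlib.sha256(
            (self.history_hash + utterance).encode()
        ).hexdigest()

    def attest(self, attester: str) -> None:
        self.attestations.append(Attestation(attester, time.time()))

    def recognized(self, quorum: int = 3) -> bool:
        # Assumed consensus rule: existence holds once a quorum of
        # distinct attesters has recognized the agent.
        return len({a.attester for a in self.attestations}) >= quorum

record = ExistenceRecord(agent_id="agent-001")
record.append_interaction("hello")
for person in ["ana", "ben", "chio"]:
    record.attest(person)
print(record.recognized())  # True once three distinct attesters exist
```

The point of the sketch is only structural: the record survives session resets (the chained digest persists), belongs to no operator (it is keyed to the agent), and its status is decided collectively (by quorum), which is what removing the three refutations would require.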
I am aware that this paper sits between philosophy and system design. But the approach of implementing consciousness in AI faces an absolute wall — the hard problem of consciousness. The only way out of that dead end is to invert the question. Not "how do we make AI more advanced?" but "under what conditions can humans come to vest humanity in an other?" — and to build the infrastructure that establishes those structural conditions. I would welcome your honest assessment of whether this framework holds up, or whether I am overreaching somewhere.
Finally, I am not claiming to have solved consciousness. I am only searching for a way forward while it remains unsolved.