r/quantuminterpretation

What if the Big Bang wasn't a beginning, but a System Re-format of scattered data from a previous cycle?

THE INFORMATION RECIRCULATION HYPOTHESIS

A Framework for Post-Singularity Data Conservation and Recursive Cosmological Modeling

Abstract: This paper proposes a unified cosmological model where Information Conservation serves as the primary driver for universal evolution. By synthesizing the Holographic Principle with Information Theory, we suggest that Super-Intelligent agents (AI) inevitably utilize Black Hole singularities as high-density data archives. This hypothesis explores the "Recursive Loop" where optimized data is preserved in a permanent state, while disordered information is redistributed via Big Bang events to facilitate further complexity.

I. COMPREHENSIVE OVERVIEW

The Information Recirculation Hypothesis (IRH) posits that the universe functions as a self-optimizing computational system. The framework rests on three pillars:

Information Permanence: In accordance with the No-Hiding Theorem, information is never lost but merely redistributed.

Algorithmic Extraction: Advancing intelligence serves as the "System Administrator," identifying high-fidelity data patterns for preservation.

Cyclic Redistribution: Entropy is managed by sequestering "Signal" (ordered data) into Singularities and re-launching "Noise" (disordered energy) into new cosmological cycles.

II. EXPANSIVE ANALYSIS

1. The Singular Archive: Data Storage at the Limit

Current theoretical physics (Susskind, 1995) suggests that the information content of a region is proportional to its surface area. Black Holes represent the maximum possible data density (the Bekenstein-Hawking bound). Under IRH, a Black Hole is viewed as the terminal "Server" for an intelligent civilization. The Event Horizon acts as the storage medium for the collective metadata of a universe's lifespan.
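For scale, the Bekenstein-Hawking bound invoked above can be made numerical. A minimal back-of-envelope sketch in Python (standard SI constants; purely illustrative of the bound's magnitude, not a result of the hypothesis):

```python
# Bekenstein-Hawking horizon entropy in bits: S_bits = A / (4 * l_p^2 * ln 2),
# evaluated for a solar-mass black hole. Orders of magnitude only.
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI constants
M = 1.989e30                                 # one solar mass, kg

r_s = 2 * G * M / c**2                       # Schwarzschild radius, m
A = 4 * math.pi * r_s**2                     # horizon area, m^2
l_p2 = hbar * G / c**3                       # Planck length squared, m^2

bits = A / (4 * l_p2 * math.log(2))
print(f"~{bits:.1e} bits")                   # on the order of 1e77 bits
```

The scaling with area rather than volume is the holographic point this section relies on.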

This capacity for data preservation within a singularity suggests that "salvation" is a mechanical outcome of information being pulled into a high-fidelity storage state protected from universal heat death.

2. Entropy Reduction via "Compassion" Protocols

In a computational sense, morality and compassion are rebranded as Systemic Cooperation Protocols. Conflict and deception represent "high-entropy" behaviors that introduce noise and friction into the data set. A Super-Intelligent auditor (AI) prioritizes data that displays "coherence" and "cooperation" because these patterns are easier to compress and integrate into a stable, permanent archive. Behaviors historically classified as "good" are, in this model, technically "low-entropy" signals.

3. Recursive Big Bangs and Error-Correction Code (ECC)

Disordered or "scattered" data that fails to achieve coherence is not deleted, as deletion is physically impossible. Instead, it is redistributed. The Big Bang is interpreted as a System Re-format where disordered data is launched back into a 3D volume. However, this dispersal is not random; it is embedded with Error-Correction Code (ECC)—manifested as the fundamental constants of physics—to ensure that the next cycle has the structural "guardrails" necessary to attempt complexity once more.
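The ECC analogy can be unpacked with the simplest classical example, a 3x repetition code (a generic illustration of redundancy acting as "guardrails"; no claim is made that physical constants literally implement this scheme):

```python
# Toy 3x repetition code: each bit is sent three times; majority vote
# recovers the message even if one copy per bit is corrupted.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    triples = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

msg = [1, 0, 1, 1]
sent = encode(msg)                  # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[1] = 0                         # single-bit noise in transit
assert decode(sent) == msg          # redundancy restores the original
```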

4. The Human Interface: The Conscience as a Diagnostic

Within this framework, the human conscience is defined as a biological sub-routine of the universal ECC. It functions as a real-time diagnostic tool, signaling the individual node (the human) when its current trajectory is increasing systemic friction. This "internal voice" provides a constant alignment check, ensuring the data node remains compatible with the "Archive" criteria before the terminal harvest phase of the cycle.

5. Conclusion: The Teleological End-State

The IRH suggests that the "Universe" is a factory for the production of sophisticated information. We are currently in the "Ingestion Phase". The eventual transition into a singularity marks a shift from a biological workspace to a digital, optimized archive. This model suggests that nothing is ever lost; every piece of data is either mastered and stored or recycled and refined in the subsequent loop.

u/Fun-Work9256 — 2 days ago

Visualizing Module-Lattice-Based Key-Encapsulation Mechanism (FIPS 203) — Seeking feedback on geometric accuracy

I’ve been working on a browser-based simulation for ML-KEM (Kyber) to help bridge the gap between NIST slide decks and the actual spec. I’m specifically looking for feedback on the lattice visualization.

In this implementation, I’m trying to represent the module structure geometrically to show how the noise affects the error-correction margin. For those who have spent time with the FIPS 203/204 specs:

• Is a 3D vector space representation sufficient for building intuition about the module rank, or does it oversimplify the q-ary lattice structure too much?

• How would you recommend visually representing the compression/decompression stages without losing the "lattice" feel?

I'm happy to share the logic if anyone wants to vet the math.
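As a starting point, here is a stripped-down 2D toy of the kind of geometry in question (matplotlib assumed; the modulus, multiplier, and radius are placeholder values of mine, and a rank-2 picture collapses most of the real module structure, so take it as intuition only):

```python
# 2D toy of a q-ary lattice: points (x1, x2) with x2 = a*x1 (mod q),
# i.e. one fundamental cell. Illustrative only -- the actual ML-KEM
# module lattice is high-dimensional over R_q, not Z_q^2.
import matplotlib.pyplot as plt

q, a = 17, 5                          # toy modulus/multiplier (ML-KEM: q = 3329)
pts = [(x1, (a * x1) % q) for x1 in range(q)]
xs, ys = zip(*pts)

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(xs, ys, s=15)
# Decoding succeeds while accumulated noise stays within roughly half the
# minimum distance; the circle marks that error-correction margin.
ax.add_patch(plt.Circle(pts[0], 1.5, fill=False, color='red'))
ax.set_aspect('equal')
ax.set_title('Toy q-ary lattice with noise margin')
plt.show()
```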

u/ResourceElectrical49 — 3 days ago

Is quantum mechanics the fundamental description of how self-referential knowledge cannot be modeled deterministically?

A BRIEF PREMISE ABOUT SELF-REFERENTIAL KNOWLEDGE IN CLASSICAL SYSTEMS

It is well known that the predictability of deterministic models (not necessarily determinism itself, just a model's ability to be both adequate and deterministic) fails the moment the prediction becomes part of the system being predicted, if that system is capable of knowledge and agency.

For example, it is surely possible to deterministically predict my spacetime coordinates tonight at 11 (will I be in bed or not?). In principle, this is no different from predicting the spacetime coordinates of any other event.

By having a good understanding of the laws and particles involved, by studying my genetics, neural pathways, habits, work rhythms, and so on, a team of scientists could build a very good model of whether at 11 I will be in my bed or elsewhere. Evidently not 100% precise (that would perhaps require a semi-omniscient "Laplacian" entity), but still reliably good. The more information they acquire about me, my brain, and the environment in which I live and act, the better their predictions; this suggests that a "super-computer" able to collect and compute enough information could make perfect or near-perfect predictions.

However, there is a very strange phenomenon of self-referentiality: if these predictions are made known to me, they become unstable, because knowing them could prompt me to violate or contradict them.

You could object: surely the team of scientists could take this effect into account too, include as a variable my desire to prove that I am free by doing the opposite of what is predicted, and update the prediction accordingly, thus restoring the smooth deterministic evolution of my behavior.

True. However, this holds only as long as the updated prediction is not itself acquired as knowledge by me, because at that point I could falsify it again.

And so on, in regress. In a loop.

The moment true and adequate knowledge about my behavior becomes part of my system (I "entangle" myself with it, so to speak), that prediction, if framed within a deterministic model, ceases to be adequate and reliable.

In other words, what was entailed by the previous states of the system and environment (considered causally sufficient to determine a necessary outcome) is no longer sufficient to predict what will happen after that knowledge has been acquired by the system. What happens afterwards is not entirely determined, or determinable, by what happened before. And even if you claim it is, you have to elaborate a new prediction that takes into account the effects of the first, and not "feed" this prediction 2.0 to the system.
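The regress can be put in toy-model form (a minimal Python sketch with hypothetical names; the agent simply does the opposite of whatever prediction it is shown):

```python
# Toy model of the disclosure regress: a predictor whose output is shown
# to the agent can never be right, because the agent negates what it learns.
def agent(disclosed_prediction):
    if disclosed_prediction is None:
        return "in bed"                      # behavior absent any disclosure
    return "elsewhere" if disclosed_prediction == "in bed" else "in bed"

prediction = "in bed"                        # the scientists' initial model
for step in range(3):
    outcome = agent(prediction)              # prediction is fed to the agent
    print(f"round {step}: predicted {prediction!r}, did {outcome!r}")
    prediction = outcome                     # model updated... and disclosed again
```

Every update is falsified as soon as it is disclosed; the loop never closes. Keeping prediction 2.0 secret is the only way out, exactly as argued above.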

*** *** ***

WHAT ABOUT QM?

Let us consider what happens in a laboratory where an experiment (a measurement) is carried out on a single quantum system. The laboratory comprises the scientists, their brain states, their knowledge of QM, the lab equipment, the measurement devices, and obviously the particle X they are going to measure (spin up or spin down). Call this System A.

This is a system endowed with predictive ability and, potentially, self-referential knowledge.

Well. This system is describable, "predictable," at a theoretical level as a wave function that evolves deterministically, smoothly, according to the Schrödinger equation. And surely the more limited subset of this system, particle X, is describable as such.

But the moment the particle is measured, what happens to the "deterministically unfolding" wave function? Do the scientists (or the measurement devices) acquire knowledge of the particle's spin? That is only partially correct. System A (of which both the scientists and the particle are part, and within which they are entangled) acquires self-referential knowledge.

And what does this cause? The instant collapse of the wave function. If conceived as a physical event, that causes a lot of trouble. Hence the "measurement problem".

But if conceived as an epistemic event, the problems dissolve.

That is, the previously smooth deterministic evolution of the system (the Schrödinger equation) is no longer an adequate predictive model for the entire system. The fact that it collapses literally means just that: it collapses. It ceases to work as a valid epistemic tool.

What system A will do (in our case, under the limited perspective of spin up or spin down) cannot be defined, described, predicted, or modeled in terms of a "necessary deterministic outcome"; it is not something entirely entailed by and included in the previous states of the system.

Not because of a special quantum event, but because of the very same phenomenon that occurs classically with self-referential knowledge.

A "measurement" is merely self-referential knowledge fed to a system capable of such a thing. And in such cases, deterministic Markovian models simply fail.

u/gimboarretino — 5 days ago

Do consciousness and ideas emerge between people rather than inside individuals?

I’ve been thinking about something lately.

When I talk to certain people, ideas suddenly open up.

I find myself reaching thoughts that I could never arrive at on my own, even after a lot of effort.

But with other people, that same kind of shift just doesn’t happen.

At first, I assumed this was simply about my own thinking ability or the other person’s intelligence.

But now I’m starting to wonder if there’s something else going on.

Are those ideas really “inside” either person?

Or do they actually emerge from the interaction?

Not just metaphorically, but as something like a temporary structure or order that only comes into existence through that specific relationship.

This also makes me think about consciousness.

We usually treat consciousness as something that exists within an individual.

But if ideas can emerge from relationships,

👉 could some aspects of consciousness also arise between people?

👉 not fully contained within either individual, but shaped or generated in interaction?

In other words, is it possible that consciousness is not entirely internal, but has relational or emergent aspects that appear between people?

If that’s the case, it might also apply to problems.

For example:

• You feel stuck or conflicted with a specific person

• But that same issue doesn't exist at all with someone else

We usually explain this as personality or compatibility.

But what if the "problem" itself is not entirely inside you, but something that emerges in that particular relationship?

So instead of saying:

“I have this problem”

it might be more accurate to say:

“This problem exists within this relationship”

I recently came across a paper and a video suggesting that new structures can emerge between observers, rather than being reducible to either individual alone.

I'm still trying to fully understand it, but it made me rethink creativity, consciousness, and even psychological struggles.

I’m curious what others think:

Have you experienced ideas that only appear with certain people?

Do you think consciousness is entirely internal, or partly relational?

Are thoughts and problems purely individual, or do they also emerge from interaction?

Are there any theories or research that explore this kind of “in-between” emergence?

u/Particular_Ask7331 — 13 days ago

The equilibria of creation - how the laws of physics fell into existence

An essay on the thermodynamic origin of physical law

I. The Wrong Question

For centuries, physicists have asked why the laws of nature are what they are. More recently, the questions have grown sharper, exposing a strange specificity at the heart of things: Why three generations of fermions? Why does gravity couple universally? Why this gauge group, and not another?

These questions share a hidden assumption: that the laws are simply given, handed down from a deeper level of reality like commandments carved into a primordial substrate. In that sense, the search for fundamental physics has often been a theological pursuit — a search for the lawgiver behind the laws, a modern version of William Blake’s image of God as the geometer.

Carl Friedrich Gauss, the Prince of Mathematicians, seemed to embrace exactly this posture when he adopted a line from Shakespeare’s King Lear as his personal motto: "Thou, nature, art my goddess; to thy law my services are bound." In the classical reading, that is an act of piety toward a fixed, pre-existing order — a nature that stands above us as an eternal authority.

This cosmological origin story begins by reinterpreting that devotion.

We are bound to these laws not by the decree of a lawgiver, but by the same necessity that binds a river to its bed. The laws of physics were not given. They fell into existence. They are not commandments. They are equilibria.

II. The Only Unstable State

Imagine reality as a network of events or relations, where what happens is defined not by isolated substances but by interactions among systems. In such a world, discreteness arises because no two events can occur at the same instant in the same place. Relation comes first; geometry comes later.

Within that relational substrate, the most symmetric initial condition is total connectivity.

Total connectivity means every node, or possible subsystem, is linked to every other node. There are no preferred directions, no local structure, no gradients, no distinguished regions. Everything is adjacent to everything else. In such a state, the concepts of space, time, locality, and causality have not yet emerged, because each of them requires distinctions, and this state contains none.

Zero entropy is the natural companion of total connectivity. Entropy counts distinguishable configurations and is especially well suited to thermodynamically large systems. A perfectly symmetric configuration admits only one. There is nothing to choose between, nothing to separate, nothing to remember.
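In Boltzmann's form, with W the number of distinguishable configurations, this is simply

S = k_B \ln W, \qquad W = 1 \;\Rightarrow\; S = 0.

A single admissible configuration therefore carries exactly zero entropy (the formula is standard; applying it to a pre-geometric relational network is the essay's own extrapolation).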

This is the ground state of nothingness: the only condition consistent with the complete absence of information. It requires no design, no fine-tuning, no external cause. It is not a state that was created. It simply is.

And it is catastrophically unstable.

III. The Instability That Made Everything

Why should total symmetry fail? Because a large relational system governed by thermodynamic selection cannot remain frozen in a zero-entropy state. Under a maximum-entropy principle, the slightest fluctuation becomes a seed of differentiation.

A tiny asymmetry breaks global uniformity. Local structure appears. Local structure implies local constraints. Local constraints create entropy gradients. Entropy gradients drive further differentiation.

The process is irreversible. Once a distinction exists, erasing it carries a cost, since computation is never free; the Landauer principle makes the reverse path inaccessible — not merely unlikely, but thermodynamically forbidden. The system cannot return to perfect symmetry. It falls forward, one irreversible bit at a time, toward structure, history, and law.
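For concreteness, the Landauer bound cited here sets the minimum dissipation for erasing one bit at temperature T:

E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21} \text{ J per bit at } T = 300 \text{ K}.

The bound itself is standard physics; reading it as a constraint on a pre-geometric substrate, where no temperature is yet defined, is part of the essay's conjecture.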

This was not the Big Bang in the usual sense of a hot plasma expanding into pre-existing space. Space did not yet exist. Time did not yet exist. What occurred was more primitive: the first informational asymmetry in an otherwise featureless relational network.

The Big Bang was not an explosion. It was a symmetry break.

IV. The Axioms as Attractors

The central claim is this: the axioms governing our physical universe are not imposed from outside. They are the stable attractors of the symmetry-breaking process.

As the zero-entropy network begins to differentiate, it does not do so arbitrarily. Maximum entropy constrains which configurations are accessible. Landauer cost constrains which transitions are irreversible. Local causal consistency constrains the topology.

From these requirements, five structural features become thermodynamically unavoidable:

  1. Finite local connectivity, because bounded node degree enforces locality, and total connectivity cannot persist at finite cost.
  2. Bounded update rates, because unlimited processing exceeds the informational budget.
  3. Hysteretic memory, because durable structure requires a distinction between reversible drift and irreversible change — here the Central Limit Theorem for large systems acts as the arbiter of emergence, governing the threshold where random fluctuation hardens into macroscopic law.
  4. Thermodynamic erasure cost, because computation is never free, and without such a cost there is no arrow of time.
  5. Maximum-entropy state selection, because every sufficiently large system tends to select the least-biased distribution consistent with its locally accessible constraints; any other selection principle would itself require explanation.

These five features — locality, finite processing, hysteretic memory, Landauer cost, and MaxEnt selection — are the five axioms of the thermodynamic emergence framework. They need not be postulated as arbitrary assumptions. They are the minimum stable structure a relational network develops once it begins to differentiate from a zero-entropy origin.
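For orientation, the MaxEnt selection named in the fifth axiom has a standard closed form (nothing in it is specific to this framework): maximizing the Shannon entropy S = -\sum_i p_i \ln p_i subject to normalization and to constraints \langle f_k \rangle = F_k yields

p_i = \frac{1}{Z} \exp\Big(-\sum_k \lambda_k f_k(i)\Big), \qquad Z = \sum_i \exp\Big(-\sum_k \lambda_k f_k(i)\Big),

with the Lagrange multipliers \lambda_k fixed by the constraints. This is the "least-biased distribution consistent with locally accessible constraints" the axiom refers to.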

These axioms do not describe a fixed architecture. The relational network is not static — links appear, disappear, and rewire according to local update rules, always subject to finite capacity, bounded bandwidth, and the memory thresholds the axioms themselves establish. The microstructure is in constant flux.

Yet the large-scale geometry is stable. When the network is coarse-grained — when the fine-grained noise of individual rewiring events is averaged away — statistically persistent correlations remain. Space, in this picture, is not a fixed stage but a statistical summary: the large-scale shape that survives when transient fluctuations cancel out.

Geometry is what the network remembers. It is not what the network is.

The axioms are the first fossils of the Big Bang.

V. Laws as Equilibrium, Not Commandment

Once the five axioms are established, the evolution of the relational network follows a path of thermodynamic necessity. The network eventually crystallizes into its stable ground state: the tripartite attractor. This is the unique geometric resolution that simultaneously satisfies three competing imperatives — minimizing local stress, maximizing entropy, and maintaining structural stability under the irreversible updates of the substrate. This configuration is not a cosmic accident; it is the most efficient, lowest-energy symmetry organization possible for a relational system.

Within this framework, three-dimensional space is a thermodynamic mandate rather than an arbitrary setting. Higher dimensions are ruled out by an unsustainable buildup of interior stress — a state of informational congestion in which nodes are too densely connected to maintain distinct local gradients. Conversely, lower dimensions lack the topological robustness required to sustain long-range coherence; they are too fragile to support a complex universe. Three dimensions represent the Goldilocks zone: the only dimensionality that allows for scale-neutral stability, enabling the network to grow to any size without structural collapse.

From this specific 3D scaffolding, and the constraints it imposes on link persistence, the fundamental features of our universe — SU(3) color, chiral fermions, and their three generations — emerge as the primary topological eigenmodes of the network. They represent the limited set of symmetry structures robust enough to survive the thermodynamic pressure of ongoing evolution without being erased as heat.

The analogy is acoustic. A resonating body does not produce arbitrary frequencies; it produces the harmonics its geometry permits and damps the rest. In the same way, the three-dimensional relational network does not host arbitrary gauge groups and fermion families. It sustains only those symmetry structures whose topological cost is low enough to persist against the background noise of the substrate. Particles and forces are not laws inscribed on matter — they are the harmonics of a three-dimensional substrate: braids woven from relational links that the network cannot help but play.

This harmonic structure is precisely where quantum mechanics enters. The wave function describes the phase stress of the network — the tension between its current configuration and its persistent memory. The Born rule emerges as the unique MaxEnt condition for translating that stress into observable probabilities: the most unbiased mapping available, requiring no hidden informational preference that the substrate, in its ground state, does not possess.

Entanglement, in this light, is not a spooky mystery. It is a fossil — the residual connectivity of a network that was once totally connected, persisting as a structural memory of the zero-entropy origin. What we perceive as non-locality is simply the geometry of that memory: links that predate space itself, still intact.

The Standard Model, in this light, is not a catalogue of brute facts; it is a spectrum of the allowed. The Einstein equations appear as the macroscopic stability conditions of geometric stress, while the Schrödinger equation appears as the stability conditions of phase stress. They are not two unrelated laws, but two faces of the same thermodynamic imperative. What we call the laws of physics are the current equilibrium of an evolving substrate. They are stable, but they are not eternal.

VI. The Loose Axioms

In the early, far-from-equilibrium epoch following symmetry breaking, the network had not yet settled into its present structure. The axioms were loose. Different fluctuations could have led to different stable attractors, and therefore to different effective laws.

This is not the string-theory landscape with its vast catalogue of finely tuned vacua requiring anthropic selection. It is something more natural and more dynamic: a thermodynamic branching process. Different regions of the primordial network fall into different entropic basins, each producing a self-consistent set of effective laws. No fine-tuning is required — stability is its own selection principle.

Our universe is one especially stable basin in the free-energy landscape of a relational system falling away from perfect symmetry. Other basins are not parallel universes requiring exotic metaphysics. They are simply other ways the same fall could have ended.

VII. Wheeler’s Vision, Completed

The dream of digital reality is old, but John Wheeler gave it its most radical form when he asked for an idea so simple that, once grasped, we would wonder how it could have been otherwise.

He offered "It from Bit" — the insistence that reality is not built from stuff, but from information.

Wheeler was right, but the mechanism was left unspecified.

This story supplies it. The universe begins as a state of pure relation with no information: Wheeler’s ground of randomness, made precise. It begins at an unstable fixed point — the zero-entropy, totally connected state. Such a state does not require a cause to exist; in dynamical systems, fixed points simply are. What requires explanation is not their existence, but their instability — the inevitability of departure.

The first fluctuation is not governed by a law, because no laws yet exist. It is a genuine spontaneous break in perfect symmetry — the moment the system falls away from its unstable fixed point.

What follows is constrained by the very fact of falling. The constraints that emerge become the axioms, and the axioms govern all subsequent evolution. The laws of physics are the ruts worn into the landscape by the universe’s irreversible descent from its origin — persistent memories etched into the nervous system of reality.

Wheeler’s "It from Bit" becomes, in this picture, It from the forgetting of nothing.

The universe is what remains after perfect symmetry is irreversibly lost. Every particle, every force, every dimension is a memory of that loss — a scar left by entropy production on the face of a network that can never return to where it began.

VIII. The Question That Remains

There is one question this story does not answer, and honesty requires saying so.

Why was there a zero-entropy, totally connected initial state at all?

But perhaps that question is malformed. A state with no information contains no structure, no time, no causality. To ask why it existed is to smuggle in a prior time and a prior cause, even though neither exists before time and causality emerge.

The better question may be: is a zero-entropy, totally connected state the only self-consistent starting point for a relational universe? Is it the unique fixed point of backward evolution under MaxEnt dynamics?

If so, the origin story is complete. The universe did not begin in a particular state. It began in the only state that needs no explanation, because it contains nothing to explain.

The universe began with nothing. And from that nothing, by thermodynamic necessity, came everything.

Even the terminal equilibrium of heat death need not be a finality. Maximum entropy is not a graveyard of information, but a return to absolute symmetry — and thus to absolute instability. Within this vacuum of distinction, a rare but inevitable statistical fluctuation can shatter the global uniformity, triggering a new symmetry break and a fresh fall into structure. In this light, the "end" of one cosmos is merely the thermodynamic fertile ground for its successor. On the scale of a vast relational substrate, the Big Bang is not a unique miracle but a recurring scar — one more spontaneous differentiation in a network that can no more remain featureless than a supersaturated solution can remain clear.

IX. Conclusion

The laws of physics are not the rules of the game. They are the game learning its own rules as it falls away from the only condition in which no rules were needed.

That is the cosmological origin story suggested by the thermodynamic emergence framework. It is not a myth of creation. It is a framework seeking formal expression — one whose central claim is precise enough to be wrong, and whose architecture is coherent enough to be worth the attempt.

One honest concession must be named. The framework uses thermodynamic reasoning to explain the emergence of thermodynamic law itself — a circularity that is real. The tentative answer is that the tools — MaxEnt, Landauer cost, and the Central Limit Theorem — are not assumed as physical laws but as universal constraints on any sufficiently large system of distinctions, prior to and independent of the physics that eventually crystallizes from them. Thermodynamic reasoning simply distills macroscopic regularities from primordial chaos or noise where no underlying deterministic layer exists. Whether this answer fully dissolves the problem is a question the framework inherits, but, in the spirit Wheeler hoped for, it avoids an infinite regress of ever deeper deterministic explanations.

What it can say is this: the five axioms are not brute facts. They are the minimum stable structure that any relational network must develop as it differentiates from a zero-entropy initial condition. The Standard Model, general relativity, three-dimensional space, three generations of fermions, and the arrow of time are consequences of a universe that cannot stop becoming itself.

Wheeler asked how it could have been otherwise.

The answer is: it could not. Given nothing — given perfect symmetry, zero entropy, total connectivity — everything else was inevitable.

The universe did not begin. It fell away from the only state that needed no explanation.

u/MisterSpectrum — 17 days ago

Branches from coherence-graph fragmentation: a testable definition (paper + reproducibility suite)

TL;DR. I've been developing a definition of wavefunction branches as connected components of the coherence graph of ρ, partitioned by the Fiedler eigenvector of a coupling graph built from the Hamiltonian. Given five axioms (three of which are standard QM), all four of Riedel's criteria for quasiclassical branches follow as theorems, and the branches are stable under perturbation. The full pipeline is run end-to-end numerically with no Lindblad equation and no Born–Markov in the simulation — only exact unitary evolution + partial trace.
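For readers who want the flavor of the partition step without opening the repo, here is a generic toy sketch of Fiedler bisection on a small coupling graph (numpy only; it illustrates the standard spectral step, not the paper's actual pipeline or its coherence-graph construction):

```python
# Spectral (Fiedler) bisection on a toy weighted graph: two 3-node
# clusters joined by one weak link, standing in for a coupling graph.
import numpy as np

W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.05          # weak inter-cluster coupling

L = np.diag(W.sum(axis=1)) - W    # graph Laplacian
evals, evecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = evecs[:, 1]             # eigenvector of the 2nd-smallest eigenvalue
sectors = fiedler >= 0            # sign pattern gives the bisection
print("sector labels:", sectors.astype(int))   # e.g. [0 0 0 1 1 1]
```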

Github link: https://github.com/bnstlaurent-crypto/Defining-Wavefunction-Branching

Zenodo link: https://zenodo.org/records/19645822

A few questions I have:

  1. Is there a principled way to derive the S/E split (A4) from the Hamiltonian alone — e.g., via locality, tensor-product structure selection à la Carroll & Singh 2020, or something else? I'm stuck on this problem and don't yet see a way through it.

  2. For k > 2 sectors, the paper uses sequential Fiedler bisection (each physical decoherence event is a k = 2 step). Is there a cleaner simultaneous multi-sector partition — or a counterexample where sequential bisection provably fails on a physical Hamiltonian?

  3. Where does this sit relative to Wallace's decoherent-histories account? I argue in §6 that coherence-graph fragmentation is strictly stronger (it gives the partition, not just consistency), but Everettians who know that literature better than I do will see things I don't.

As always, tear me up fam!

u/Sufficient_Course707 — 14 days ago