u/gimboarretino

Is quantum mechanics the fundamental description of how self-referential knowledge cannot be modeled deterministically?

A BRIEF PREMISE ABOUT SELF-REFERENTIAL KNOWLEDGE IN CLASSICAL SYSTEMS

It is well known that the predictability of deterministic models (not necessarily determinism itself, just a model's ability to be both adequate and deterministic) fails the moment the prediction becomes part of the system being predicted, if that system is capable of knowledge and agency.

For example, it is surely possible to deterministically predict my spacetime coordinates tonight at 11 (whether I will be in bed or not). In principle, this is no different from predicting the spacetime coordinates of any other event.

By having a good understanding of the laws and particles involved, and by studying my genetics, neural pathways, habits, work rhythms, etc., a team of scientists could build a very good model of whether I will be in my bed at 11 or elsewhere. Evidently not 100% precise (that would perhaps require a semi-omniscient "Laplacian" entity), but still reliably good. The more information they acquire about me, my brain, and the environment in which I live and act, the better their predictions become; this suggests that a "super-computer" able to collect and compute enough information could make perfect or almost perfect predictions.

However, there is a very strange phenomenon of self-referentiality: if these predictions are made known to me, they become unstable, because my knowledge of them can induce me to violate or contradict them.

You could tell me: but the team of scientists could surely account for this effect too, include this variable (my desire to prove that I am free by doing the opposite of what is predicted) in the model, and update the prediction accordingly, thus restoring the smooth deterministic evolution of my behavior.

True. However, this holds only as long as the updated prediction is not itself acquired as knowledge by me, because at that point I could falsify it again.

And so on, in regress. In a loop.

The moment a true and adequate piece of knowledge about my behavior becomes part of my system (I "entangle" myself with it, so to speak), that prediction, if framed according to a deterministic model, ceases to be adequate and reliable.

In other words, what was entailed to happen based on the previous states of the system/environment considered causally relevant to determining a necessary "determinate" outcome is no longer sufficient to predict what will happen after that knowledge has been acquired by the system. What happens afterwards is causally "not entirely determined or determinable" by what happened before. And even if you claim it is, you have to elaborate a new prediction that takes the effects of the first into account, and refrain from "feeding" this prediction 2.0 to the system.
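The regress described above can be sketched in a few lines of Python. This is a toy model, under a strong assumption I am adding for illustration: the agent is maximally "rebellious" and always does the opposite of any prediction disclosed to it (the names `contrarian_agent`, `prediction`, and `history` are mine, not from any real library):

```python
def contrarian_agent(disclosed: bool) -> bool:
    """Agent that, once told the prediction, does the opposite.
    True = 'in bed at 11', False = 'elsewhere'."""
    return not disclosed

# The scientists keep revising the prediction to account for the agent's
# reaction, then (crucially) disclose each revised prediction again.
prediction = True
history = [prediction]
for _ in range(5):
    prediction = contrarian_agent(prediction)  # what the agent actually does...
    history.append(prediction)                 # ...becomes the next disclosed prediction

print(history)  # alternates True/False forever: no disclosed prediction stabilizes
```

Each disclosure flips the outcome, so the revision loop never converges; the instability only disappears if some revision is kept secret, which is exactly the point made above.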

*** *** ***

WHAT ABOUT QM?

Let us consider what happens in a laboratory in which an experiment (a measurement) on a single quantum system is carried out. The laboratory is composed of the scientists, their brain states, their knowledge of QM, the lab equipment, the measurement devices, and obviously the particle X whose spin (up or down) they are going to measure. Call it system A.

This is a system endowed with predictive ability and, potentially, self-referential knowledge.

Well. This system is describable, "predictable", at a theoretical level, as a wave function that evolves deterministically and smoothly according to the Schrödinger equation. And surely the more limited subset of this system, particle X, is describable as such.

But the moment the particle is measured, what happens to the "deterministically unfolding" wave function? Do the scientists (or the measurement devices) acquire knowledge of the spin of the particle? Not exactly. System A (of which the scientists and the particle are both part, being entangled) acquires self-referential knowledge.

And what does this cause? The instant collapse of the wave function. Conceived as a physical event, that causes a lot of trouble; hence the "measurement problem".

But conceived as an epistemic event, all the problems are solved.

That is, the previously smooth deterministic evolution of the system (the Schrödinger equation) is no longer an adequate predictive model for describing the entire system completely. The fact that it collapses literally means... it collapses. It ceases to work as a valid epistemic tool.

What system A will do (under the limited perspective of spin up/spin down, in our case) cannot be defined, described, predicted, or modeled in terms of a "necessary deterministic outcome"; it is not something entirely entailed and included in the previous states of the system.

Not because of a special quantum event, but because of the very same phenomenon that happens classically with self-referential knowledge.

A "measurement" is merely self-referential knowledge fed to a system capable of such a thing. And in such cases, deterministic Markovian models simply fail.

reddit.com
u/gimboarretino — 5 days ago

Is creating reliable, adequate knowledge about itself what life/evolution/intelligence/knowledge ultimately is and aims at?

Let's take an event with specific space-time coordinates. In a deterministic universe, the space-time coordinates of that event are determined by the past states of the universe and are therefore, given sufficient information, predictable. This is the reason why we manage to accurately predict the motion of planets, calculate the trajectories of objects, and—if we were precise enough—even determine which side a coin would land on.

Fine.

With human beings, it is much more complex, but in principle, it shouldn't be any different. For example, whether I will be at space-time coordinates x tonight (let’s say in my bed at 11 PM) or not (coordinates non-x) is something already "entailed"—necessitated—by the past states of the universe. Granted, predicting it with certainty is difficult; however, with a solid knowledge of my habits, my genetics, my job, my hobbies, and my cognitive states, it is possible to make decent predictions even about this. One might not get a 100% score, but a good one.

Moreover, since my presence at coordinates x or non-x is something already predetermined (exactly like a spinning coin in mid-air is actually, given the circumstances, already predetermined to land on heads or tails), one could achieve a non-zero rate of correct predictions simply by guessing.

Now, there is a strange phenomenon. If I become aware of this adequate and valid prediction about myself (meaning the scientists' prediction about my future time-space coordinates becomes part of my self-referential knowledge), and if by chance I am in a "I could also do otherwise, I’m feeling rebellious" mode... the prediction becomes extremely unstable. It is easily subverted.

And you might tell me: but the scientists can certainly become aware of this rebellious attitude by analyzing my neural activity and update the prediction accordingly. Very true, and in that case my behavior becomes highly predictable again (we know you'll be tempted to do the opposite of what we predict as x, therefore we will predict non-x).

BUT. In that case, my system did not relate to a true/adequate prediction, but to a false one. It wasn't adequate knowledge; it was a false prediction, even deceptive.

If I "entangle" myself with the true Prediction 2.0 (meaning I know that you know that I am rebellious thus you have predicted this rather than that), even this prediction 2.0 suddenly becomes unstable.

Self-referential loops, I believe they are called.

Now, this strikes me as a VERY strange thing to happen in a universe where events, determined by past states, simply unfold according to rules of previous necessity.

Why should systems equipped with self-referential knowledge (obtaining adequate knowledge of their own future states) turn the prediction from stable to unstable? If the event "I will be in bed at 11 PM or not" is already written in the past history of the universe at those space-time coordinates, why is there this systematic, huge alteration of probabilities and of the "smooth deterministic unfolding" depending on whether or not I acquire knowledge of how the event will materialize?

It sounds as if, from adequate self-referential knowledge about the future, "novel" segments of causality, not completely entailed and necessitated (predetermined) in the previous states of the universe, could emerge.

reddit.com
u/gimboarretino — 6 days ago

Why does self-referential knowledge appear to have "weird effects" on the deterministic evolution of a system?

According to the classical view of determinism, if I had enough information about the past states of the universe (including my brain states etc.), I could predict everything. For example, whether tonight I will go to bed at 9 or after 9.

Suppose I build a "Laplace machine" that analyzes and computes all of the above and each day produces the above prediction.

Suppose I read this prediction every evening at 8. Let's say that today's prediction is "before 9 pm."

It seems strange to say that I am compelled to do what the prediction says, rather than that, once I have taken note of the prediction, I could act differently — and go to bed after 9. Perhaps that is the case, but "experientially" it runs against every intuition and experience we have.

One might reply: but the machine might have in fact predicted that you would go to bed after 9, since it also knew that you would have the impulse to defy the prediction in order to prove that you are free; therefore, the causal chain that resulted in your going to bed later also included the fact of you reading the prediction that you would go earlier.

That is perfectly fine; I agree.

BUT my point is: in this case the machine has not provided me with the correct prediction, but with a false one. In other words, I have not acquired true/adequate KNOWLEDGE of my future states.

So the issue remains: if I did have access to the correct and complete prediction, I could, arguably, change it again.

And you could certainly say once more: in that case the machine would also have predicted this, making a further sub-prediction. Sure, but again, it has not given me that further correct and complete prediction, or I could defy that as well. And so on, in an infinite regress.

Nor can the machine resort to semantic tricks like the liar's paradox, or use a multiple-outputs model, like: "if I say you will go before nine, you will go after; if I say after, you will go before," or "I predict that you will do the opposite of whatever I predict," since these are not deterministic models. A deterministic model requires a unique, necessary final output deducible from the past states. After 9, or before 9. Not: if X then at 9, if Y then before 9. There is no real ontological "if" in a deterministic universe. I've asked you to locate a certain future event at given space-time coordinates, and you simply have to tell me where and when the event is compelled by its cone of causality to take place. Why can't you?
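The impossibility claimed here can be checked exhaustively for the two-outcome case. A toy sketch, where the contrarian `agent` function is my illustrative assumption, standing in for the "rebellious" reader of the disclosed prediction:

```python
OUTCOMES = ("before 9", "after 9")

def agent(disclosed: str) -> str:
    # Rebellious reader: whatever single outcome the machine
    # announces, the agent does the other one.
    return OUTCOMES[1] if disclosed == OUTCOMES[0] else OUTCOMES[0]

# A deterministic model must commit to exactly one output. Check every
# possible committed output for a fixed point, i.e. a disclosed
# prediction that comes true: agent(output) == output.
fixed_points = [o for o in OUTCOMES if agent(o) == o]
print(fixed_points)  # [] - no disclosed prediction can be correct
```

For this agent the empty result shows that no single committed output is self-fulfilling, which is exactly why the machine would need the conditional "if I say X, you do Y" form that the text rules out as non-deterministic.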

*** *** ***

Now, the machine in question of course does not exist and probably cannot exist, but what is argued above also applies when — let's say — a group of scientists is trying to make adequate predictions about my behavior. If I do not acquire/get entangled with those predictions, they will probably be very accurate; but if I do acquire those predictions, they will suddenly become very unstable.

Why??

Of course, here too one can include in the prediction the fact that I am in "rebel mood"; but if this factor/variable is also included in the prediction I acquire (so that I can, so to speak, rebel against my own rebel attitude), once again the prediction will turn out to be much less solid and accurate.

It is as if, the moment a system capable of having knowledge becomes part of — gets entangled, so to speak, with — a true/adequate prediction about itself (it acquires knowledge about its own future states), the smooth deterministic evolution that that system had before (and would have had if it had not become entangled) "collapses."

If the past states of the universe predetermine that tonight I will go to bed at 9, the fact that I acquire knowledge of this should not have these disruptive and "looping" effects.

What is the mechanism by which a deterministic, adequate, and complete prediction about a system "capable of knowledge", when that prediction becomes part of the system itself (that is, the system acquires self-referential knowledge about itself), causes the system to cease to be deterministic? Or better, seems to create an emergent new causal chain that was not entailed and contained in, and cannot be "extracted" from, past physical states?

That's quite testable. But how is it explained?

reddit.com
u/gimboarretino — 8 days ago