PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.


  In 't Hooft beable models, do measurements keep states classical?

+ 18 like - 0 dislike

This is a question about 't Hooft's beable models (see here: Discreteness and Determinism in Superstrings?) for quantum mechanics, and the goal is to understand to what extent these succeed in reproducing quantum mechanics. To be precise, I will say an "'t Hooft beable model" consists of the following:

  • A very large classical cellular automaton, whose states form a basis of a Hilbert space.
  • A state which is imagined to be one of these basis elements.
  • A unitary quantum time evolution operator which, for a series of discrete times, reproduces the cellular automaton evolution rules.
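As a miniature of this setup (my own illustration; the 4-state rule below is arbitrary, not one of 't Hooft's models), a deterministic CA update rule lifts to a unitary operator on the Hilbert space whose basis is the set of CA states:

```python
import numpy as np

# Hypothetical toy automaton: 4 states, with a deterministic update
# rule given as a permutation of the state labels (a bijection, so it
# lifts to a unitary operator on the spanned Hilbert space).
N_STATES = 4
step = [1, 2, 3, 0]           # state i evolves to state step[i]

# Lift the CA rule to a unitary: U|i> = |step[i]>
U = np.zeros((N_STATES, N_STATES))
for i, j in enumerate(step):
    U[j, i] = 1.0

# U is a permutation matrix, hence unitary, and it maps basis states
# to basis states: no superpositions are ever created.
assert np.allclose(U.T @ U, np.eye(N_STATES))
psi = np.zeros(N_STATES)
psi[0] = 1.0                  # the "ontic" basis state |0>
psi = U @ psi                 # one discrete time step: now exactly |1>
```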

't Hooft's main argument (which is interesting and true) is that it is possible to reexpress many quantum systems in this form. The question is whether this rewriting automatically then allows you to consider the quantum system as classical.

The classical probabilistic theory of a cellular automaton necessarily consists of a probability distribution $\rho$ on CA states evolving according to two separate rules:

  • Time evolution: $\rho'(B') = \rho(B)$, where prime means "next time step" and $B$ is the automaton state. You can extend this to a probabilistic diffusion process without difficulty.
  • Probabilistic reduction: if a bit of information becomes available to an observer through an experiment, the CA states are reduced to those compatible with the observation.

I should define probabilistic reduction: it is just Bayes' rule. Given an observation which produces a result $x$ whose exact value we do not know, but for which we know the probability $p(x)$ that the result is $x$, the probabilistic reduction is

$$ \rho'(B) = C \rho(B) p(x(B)), $$

where $x(B)$ is the value of $x$ which would be produced if the automaton state is $B$, and $C$ is a normalization constant. This process is the reason that classical probability theory is distinguished over and above any other system—one can always interpret the Bayes' reduction process as reducing ignorance of hidden variables.
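A minimal numerical sketch of this reduction rule (the distribution, the outcome map $x(B)$, and the likelihood here are all made up for illustration):

```python
import numpy as np

# Probability distribution rho over four hypothetical CA states B.
rho = np.array([0.4, 0.3, 0.2, 0.1])

# x(B): the measurement result each CA state would produce.
x_of_B = np.array([0, 0, 1, 1])

# p(x): the probability that the (not exactly known) result is x.
p = np.array([0.9, 0.1])

# Bayes reduction: rho'(B) = C * rho(B) * p(x(B)),
# with C fixed by normalisation.
rho_post = rho * p[x_of_B]
rho_post /= rho_post.sum()
# States compatible with the likely outcome x=0 gain weight; nothing
# but ignorance about the hidden variables has been reduced.
```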

The bits of information that become available to a macroscopic observer internal to the CA through experiment are not microscopic CA values, but horrendously nonlocal and horrendously complex functions of gigantic chunks of the CA. Under certain circumstances, the probabilistic reduction plus the measurement process could conceivably approximately mimic quantum mechanics, I don't see a proof otherwise. But the devil is in the details.

In 't Hooft models, you also have two processes:

  • Time evolution: $\psi \rightarrow U \psi$.
  • Measurement reduction: the measurement of an observable corresponding to some subsystem at intermediate times, which, as in standard quantum mechanics, reduces the wavefunction by a projection.

The first process, time evolution, is guaranteed to keep you unsuperposed in the global variables, since in 't Hooft's formulation it is just a permutation; that's the whole point. But I have seen no convincing argument that the second process, learning a bit of information through quantum measurement, corresponds to learning something about the classical state and reducing the CA probabilistic state according to Bayes' rule.

Since 't Hooft's models are completely precise and calculable (this is the great virtue of his formulation), this can be asked precisely: is the reduction of the wavefunction in response to learning a bit of information about the CA state through an internal observation always mathematically equivalent to a Bayes reduction of the global wavefunction?

I will point out that if the answer is no, the 't Hooft models are not doing classical automata; they are doing quantum mechanics in a different basis. If the answer is yes, then the 't Hooft models could be completely rewritten as operations on the probability distribution $\rho$, rather than on quantum superposition states.
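The distinction can be made concrete in a two-state toy model (my own illustration, not taken from 't Hooft's papers): a projective measurement reproduces a Bayes reduction of $\rho$ exactly when the projector is diagonal in the CA basis, while an off-diagonal projector produces posteriors that no fixed likelihood $p(x(B))$ can generate:

```python
import numpy as np

def born_after(P, rho):
    """Born weights in the CA basis after projecting sqrt(rho) with P."""
    psi = P @ np.sqrt(rho)
    w = np.abs(psi) ** 2
    return w / w.sum()

def bayes(rho, likelihood):
    """Classical Bayes reduction: rho'(B) = C * rho(B) * p(x(B))."""
    post = rho * likelihood
    return post / post.sum()

# Projector diagonal in the CA basis ("we learned that B = 0"):
# wavefunction reduction and Bayes reduction agree exactly.
rho = np.array([0.8, 0.2])
P0 = np.diag([1.0, 0.0])
assert np.allclose(born_after(P0, rho), bayes(rho, np.array([1.0, 0.0])))

# Projector onto |+> = (|0> + |1>)/sqrt(2), off-diagonal in the CA
# basis: it sends *every* prior to [0.5, 0.5], which no single
# likelihood can do, since a Bayes posterior always depends on the
# prior ratio.
plus = np.ones(2) / np.sqrt(2)
P_plus = np.outer(plus, plus)
assert np.allclose(born_after(P_plus, np.array([0.8, 0.2])), [0.5, 0.5])
assert np.allclose(born_after(P_plus, np.array([0.2, 0.8])), [0.5, 0.5])
```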

asked Aug 14, 2012 in Theoretical Physics by Ron Maimon (7,730 points) [ revision history ]
edited Feb 3, 2015 by Ron Maimon
Most voted comments
@annav: my argument is valid for ordinary QM - no idea if there's a better one for 't Hooft's model...

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user Christoph
@Christoph: That's a very nice argument for compatibility--- but it seems to be ultimately semiclassical, since you are relying on the classical equations of motion being exact. Assuming quantum mechanics gives you orthogonal structure, the symplectic structure is automatic in one of 't Hooft's formulations, in which he takes the formal classical system path integral (the Martin-Siggia-Rose formalism path integral for a Hamiltonian system) and phase rotates it to a different basis (as he always does in these things). The classical Hamiltonian gives you an additional symplectic structure.
Let me add a question, for comparison: My favorite "classical" theory is the planetary system, assuming that planets move as point particles under Newton's laws. You can actually introduce non-commuting operators there as well. The "Earth-Mars exchange operator" puts Mars where Earth is and Earth where Mars is (and some simple rules about their velocities and moons). The eigenvalues of this operator are $\pm 1$. We can calculate how it evolves. Is this an observable?

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft
@Ron Is it possible that the question is "wrong"? Couldn't one take the opposite point of view? Couldn't the question be: is Bayesian reduction of the CA state probabilities able to reproduce the projection to an eigenstate of an internal subsystem to some very good approximation? Isn't this the same thing as the decoherence interpretation of the "collapse of the wave-function", where the density matrix of the subsystem becomes diagonal because it gets entangled with a macroscopic apparatus that measures that observable? (just a thought, I didn't follow the details, I might be missing something)

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user Curious George

@CuriousGeorge: It is logically possible, and this is why I wasn't sure if 't Hooft's stuff was right or not for a long time. I tried to rearrange what he was doing into a normal probabilistic form, and failed. Then when he started getting the impossible results--- local failure of Bell's theorem, reproducing exact QM with no decoherence--- these things that just can't happen, I understood that the only way you can get these things is if the projections aren't equivalent. It is very hard to prove, because the measurement operators are complicated and macroscopic.

Most recent comments
@annav complex numbers are ordered pairs of real numbers $(a,b)$ that satisfy the axioms of that algebraic structure.

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user Physiks lover
@Physikslover yes, but does the algebraic structure emerge from this CA model or is it imposed by hand?

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user anna v

1 Answer

+ 7 like - 0 dislike

I think the correct answer is that such models are both quantum mechanical and classical, although this could be considered as a question of semantics.

It is a fact that, as soon as you have found a basis in your quantum system where the evolution is just a permutation, the "quantum probabilities" for the states in this basis (as defined by Born's rule) become identical to the classical probabilities (indeed obeying Bayes' logic). Therefore it will be difficult to avoid interpreting them as such: the "universe" is in one of these states, we don't know which, but we know the probabilities.
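This fact is easy to check numerically in a toy example (the 3-state permutation rule below is hypothetical, chosen only for illustration): in a basis where $U$ is a permutation, the Born probabilities evolve exactly as a classical probability distribution would.

```python
import numpy as np

# Hypothetical 3-state CA whose update rule is the permutation below.
step = [2, 0, 1]                     # state i evolves to state step[i]
U = np.zeros((3, 3))
for i, j in enumerate(step):
    U[j, i] = 1.0

rho = np.array([0.5, 0.3, 0.2])      # classical distribution over states
psi = np.sqrt(rho).astype(complex)   # amplitudes with Born weights rho

psi_next = U @ psi                   # quantum evolution of the state
rho_next = np.zeros(3)               # classical evolution of rho
for i, j in enumerate(step):
    rho_next[j] = rho[i]

# Born's rule applied after the unitary step gives exactly the
# classically permuted probabilities: the two descriptions agree.
assert np.allclose(np.abs(psi_next) ** 2, rho_next)
```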

The question is well posed: will it still be meaningful to consider superimposed states in this basis, and ask whether these can be measured, and how these evolve?

My answer depends on whether the quantum system in question is sufficiently structured to allow for considering "macroscopic features" in the "classical limit", and whether this classical limit allows for non-trivial interactions, causing phenomena as complex as "decoherence".

Then take a world described by this model and consider macroscopic objects in this world. The question is then whether, inside these macroscopic objects (planets, people, indicators in measurement devices, ...), our CA behaves differently from what it does in the vacuum. This may be reasonable to assume, and I do assume this to be true in the real world, but it is far from obvious. If it is so, then the macroscopic events are described by the CA alone.

This then would be my idea of a hidden variable theory. Macroscopic features are defined to be features that can be recognised by looking at collective modes of the CA. They are classical. Note that, if described by wave functions, these wave functions will have collapsed automatically. Physicists in this world may have been unable to identify the CA states, but they did reach the physical scale where CA states no longer behave collectively but where, instead, single bits of information matter. These physicists will have been able to derive the Schroedinger equation for the states they need to understand their world, but they work in the wrong basis, so that they land in heated discussions about how to interpret these states...

Note added: I thought my answer was clear, but let me summarise by answering the last 2 paragraphs of the question:

YES, my models are always equivalent to a "Bayes reduction of the global wave function"; if you can calculate how the probability distribution $\rho$ evolves, you are done.

But alas, you would need a classical computer with Planckian proportions to do that, because the CA is a universal computer. So if you want to know how the distributions behave at space and time scales much larger than the Planck scales, the only thing you can do is do the mapping onto a quantum Hilbert space. QM is a substitute, a trick. But it works, and at macroscopic scales, it's all you got.

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft
answered Aug 18, 2012 by gthooft (919 points) [ no revision ]
Most voted comments
@Ron: The same holds for the planets. I can try to figure out how the Earth-Mars exchange operator evolves, using Schroedinger equations. But in the very end, these Schroedinger equations simply permute probability states of the planets, so here it is easy to understand what the "quantum calculation" does: for the planets, it does nothing that you couldn't do the conventional way. For atoms, the conventional way (which now would amount to calculating the CA evolution) would be too difficult.

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft
From your previous comments, I only now realize what's bothering you. You seem to think that, also in the CA basis, one should look at inner products $\langle\psi_1|\psi_2\rangle$, and the absolute values of these inner products cannot directly be interpreted as probabilities in the CA. True, but those inner products will never occur in the final expressions. I insist on the postulate that macroscopic measurements also refer only to the CA states. The pointer on a measuring device is described by a CA state, since it is ontic. And then, Ron, apply step 3 as I said above.

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft
I know that this is your intuition about what happens, that the measurement in the interior is just doing a projection to a new nice ontic state, but I think this is not right, that this isn't what happens, and it can't be made to happen by any fix. In your actual models, the negative inner-products are going to appear as actual amplitudes, not interpretable as probabilities, after you acquire knowledge and project the state.
It's not a question of "thinking that ..." or "intuition", I am talking of simple mathematical facts. As explained in step 3 above, all that needs to be computed in a CA with states $|Q_i(t)\rangle$ are the amplitudes $\langle Q_i(t_1)|Q_j(t_2)\rangle$. They are 0 or 1 (mostly 0). I can compute this in any basis I like. These are unitary transformations. There's nothing more to it than that. But I am done. I terminate this discussion.

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft
You can terminate, but you stay wrong. I am not confused on this. Also, -1, for not addressing the question. This is where your models are busted.
Most recent comments
2) That means that classical observations all commute, which of course we already knew. The fact that they commute with the CA positions, as a bonus, explains why we perceive what some think is a "collapse of the wave function". Probabilities, Born's rule, they now all make sense.

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft
3) And now look at a quantum experiment. Atoms, photons, molecules are too small to affect collective behavior of a CA. My math tells me that one can map the CA states onto wave functions describing these atoms. The theory says: do that math. Solve those Schroedinger equs. You will end up with a wave function describing the final state. THEN go back and project that on the CA states in your detector. Voila, you did quantum physics to explain what your detector did. IT'S THE ONLY WAY TO EXPLAIN THAT!

This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft

user contributions licensed under cc by-sa 3.0 with attribution required
