
  Discreteness and Determinism in Superstrings?

+ 9 like - 0 dislike
30853 views

So Gerard 't Hooft has a brand new paper (thanks to Mitchell Porter for making me aware of it), so this is somewhat of an expansion of the question I posed on this site a month or so ago regarding 't Hooft's work.

Now he has taken it quite a big step further: http://arxiv.org/abs/1207.3612

Does anyone here consider the ideas put forth in this paper plausible? And if not, could you explain exactly why not?

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user QuestionAnswers
asked Jul 17, 2012 in Theoretical Physics by QuestionAnswers (45 points) [ no revision ]
Most voted comments
@QuestionAnswers: Please, could you rephrase it one last time to ask for answers regardless of whether or not someone has any "expertise"? The physics is objective, and it will get sorted out without experts to rule on right and wrong. It is sorted out by everyone checking it themselves, not by asking a guru.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon
Yeah, fixed it. I just hope some people come around and comment on it after having digested the paper

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user QuestionAnswers
Related: physics.stackexchange.com/q/18586/2451

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Qmechanic
Can someone please quickly note what "CA" stands for? Wherever it was defined, I missed it.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user AlanSE
CA is Cellular Automaton. I agree that people shouldn't use acronyms that don't belong to standard acronyms in physics, and CA surely isn't a standard acronym in physics.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Luboš Motl
Most recent comments
I don't think it is fair to say his ideas were characterized as crackpottery; they were just characterized as wrong. In my opinion, they are still just as wrong. The obvious thing about this paper is that it is dealing with the string world-sheet theory, giving a 't Hooft deterministic model for it, but this is not space-time physics, but world-sheet physics, and there are insane constraints on making a world-sheet theory that essentially pick out strings as the unique theory. Since 't Hooft isn't addressing this, it is almost certain that this doesn't work, but in my opinion it's a good question.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon
@QuestionAnswers That makes it a bit better - perhaps you could edit the question to ask whether the ideas proposed in this paper are considered plausible (for example), rather than just saying "Opinions?"

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user David Z

10 Answers

+ 12 like - 0 dislike

I only see these writings now, since usually I ignore blogs. For good reason, because here also, the commentaries are written in haste, long before their authors really took the time to think.

My claim is simple, as explained umpteen times in my papers: I construct REAL quantum mechanics out of CA like models. I DO have problems of a mathematical nature, but these are infinitely more subtle than what you people are complaining about. These mathematical problems are the reason why I try to phrase things with care, trying not to overstate my case. The claim is that the difficulties that are still there have nothing to do with Bell's inequalities, or the psychological problems people have with entangled states.

Even in any REAL QM theory, once you have a basis of states in which the evolution law is a permutator, the complex phases of the states in this basis cease to have any physical significance. If you limit your measurements to measuring which of these basis states you are in, the amplitudes are all you need, so we can choose the phases at will. Assuming that such CA models might describe the real world amounts to assuming that measurements of the CA are all you need to find out what happens in the macro world. Indeed, the models I look at have so much internal structure that it is highly unlikely that you would need to measure anything more. I don't think one has to worry that the needle of some measuring device would not be big enough to affect any of the CA modes. If it does, then that's all I need.

So, in the CA, the phases don't matter. However, you CAN define operators, as many as you like. This, I found, one has to do. Think of the evolution operator. It is a permutator. A most useful thing to do mathematically, is to investigate how eigenstates behave. Indeed, in the real world we only look at states where the energy (of particles, atoms and the like) is much below the Planck energy, so indeed, in practice, we select out states that are close to the eigenstates of the evolution operator, or equivalently, the Hamiltonian.

All I suggest is, well, let's look at such states. How do they evolve? Well, because they are eigenstates, yes, they now do contain phases. Manmade ones, but that's alright. As soon as you consider SUCH states, relative phases, superposition, and everything else quantum, suddenly becomes relevant. Just like in the real world. In fact, operators are extremely useful to construct large scale solutions of cellular automata, as I demonstrated (for instance using BCH). The proper thing to do mathematically, is to arrange the solutions in the form of templates, whose superpositions form the complete set of solutions of the system you are investigating. My theory is that electrons, photons, everything we are used to in quantum theory, are nothing but templates.

Now if these automata are too chaotic at too tiny Planckian scales, then working with them becomes awkward, and this is why I began to look at systems where the small scale structure, to some extent, is integrable. That works in 1+1 dimensions because you have right movers and left movers. And now it so happens that this works fantastically well in string theory, which has 1+1 dimensional underlying math.

Maybe die-hard string theorists are not interested, amused or surprised, but I am. If you just take the world sheet of the string, you can make all of QM disappear; if you arrange the target space variables carefully, you find that it all matches if this target space takes the form of a lattice with lattice mesh length equal to $2\pi\sqrt{\alpha'}$.

Yes, you may attack me with Bell's inequalities. They are puzzling, aren't they? But please remember that, as in all no-go theorems that we have seen in physics, their weakest part is on page one, line one, the assumptions. As became clear in my CA work, there is a large redundancy in the definition of the phases of wave functions. When people describe a physical experiment they usually assume they know the phases. So, in handling an experiment concerning Bell's inequalities, it is taken for granted (sorry: assumed) that if you have measured one operator, say the z component of a spin, then another operator, say the x component, would have had some value if that had been measured instead. That's totally wrong. In terms of the underlying CA variables, there are no measurable non-commuting operators. There are only the templates, whose phases are arbitrary. If you aren't able to measure the x component (of a spin) because you did measure the z component, then there is no x component, because the phases were ill-defined.
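For reference, the standard CHSH arithmetic under dispute here (textbook values, nothing specific to the CA construction): the singlet correlation $E(a,b) = -\cos(a-b)$ yields $|S| = 2\sqrt{2}$, above the local-hidden-variable bound of 2.

```python
import math

# Singlet correlation for spin measurements along angles a and b: E(a,b) = -cos(a-b).
# Local hidden-variable models obey the CHSH bound |S| <= 2.
def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2               # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))                           # 2*sqrt(2) ~ 2.828 > 2
```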

Still, you can ask what actually happens when an Aspect-like experiment is done. In arguments about this, I sometimes invoke "superdeterminism", which states that, if you want to change your mind about what to measure, because you have "free will", then this change of mind always has its roots in the past, all the way to time -> minus infinity, whether you like it or not. The cellular automaton states cannot be the same as in the other case where you did not change your mind. Some of the templates you use have to be chosen differently, and so the arbitrary phases cannot be ignored.

But if you don't buy anything of the above, the simple straight argument is that I construct real honest-to-god quantum mechanics. Since that ignores Bell's inequalities, that should put the argument to an end. They are violated.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user G. 't Hooft
answered Aug 9, 2012 by gthooft (919 points) [ no revision ]
Thanks for taking the time to contribute to clarify the details of the paper. Please don't take the haste as something personal; psychologically it's easier to dismiss something new than to devote time to the subtleties. As one of my professors used to say, people hardly understand something new without a fight. In any case, welcome to physics.stackexchange!

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user lurscher
There's an opportunity for someone here: try to explicitly construct EPR states, GHZ states, etc, in the field theory described in this paper, and thereby show that it can or can't be done. But act quickly, or Gerard 't Hooft himself might get there first, once again.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
-1: This is rubbish--- you are not getting quantum mechanics out, you are putting it in! If you were really doing a classical automaton, you could show how the phases and superpositions arise from pure probability. You are doing nonsense, it is not right, as I explained in the answer. I am not misunderstanding it, I understand it perfectly. It took a long time, because it was so wrong.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon
I have to say that I do not see how the time evolution of a locally deterministic CA can violate a Bell inequality. If you had the equivalent of an EPR experiment, where the filter orientations were controlled by pseudorandom number generators, there just shouldn't be a way to get all the right counterfactuals without cheating (by finetuning the CA initial conditions, case by case).

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
That these papers contain CAs which are equivalent to QFTs (in the limited sense that CA evolution maps to eigenvalue evolution for certain states and observables in the QFT) is no guarantee that it will even be possible to construct a "Bell scenario" in the QFT. The May paper describes a free field theory - good luck building a "detector" with that physics! - and the July paper a type of "interacting" string theory whose potentials are still unknown. The conservative prediction is that there will be no-go theorems explaining why these models, though quantum, aren't counterexamples to Bell.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
So basically, I consider this work an important contribution to the cause of realism in QM, it breaks new ground there. But I don't think you'll get quantum-like nonlocality from a local CA unless there is some nonlocality in the transformation from CA grid coordinates to space-time coordinates.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
I've gone through Prof 't Hooft's answers on my blog as well as on this forum and I've read his latest paper. There isn't any logic in any of these writings. In quantum mechanics, superpositions are allowed and phases always matter. It's a totally elementary fact that quantum mechanics cannot work without complex numbers, see e.g. physics.stackexchange.com/questions/32422/… and many other threads in this very forum. Someone trying to make it real or ban superpositions etc. etc. is fighting against totally basic insights of QM from the 1920s.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Luboš Motl
@Luboš Motl: I am afraid your arguments against real numbers in quantum theory are not quite sufficient. As I mentioned elsewhere, Schrödinger (Nature (London) 169, 538 (1952)) noted that one can make a scalar wavefunction real by a gauge transform. Furthermore, the Dirac equation is generally equivalent to an equation for just one real component (akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf (an article published in the Journal of Mathematical Physics) or arxiv.org/abs/1008.4828).

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user akhmeteli
The reasons why wave functions have to be complex, not real, are trivial and numerous, see e.g. this new essay of mine, motls.blogspot.ca/2012/08/…

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Luboš Motl
@LubošMotl: Your criticism of t'Hooft's paper comes from a lack of understanding (I had the same issues for a long time). He isn't saying phases don't matter, or complex numbers aren't used. What he is saying is that the exp(-itH) for a discrete timestep t is a permutation on a special basis, so that whatever phases you choose for the global wavefunction is irrelevant, because they never interfere. But the global wavefunction isn't useful for anything, you need to use local observables to define your local states, and these can be superposed. There is logic here, although I disagree with it.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon
@G.'tHooft Arrogance? "if you want to change your mind about what to measure, because you have "free will", then this change of mind always has its roots in the past, all the way to time -> minus infinity, whether you like it or not" ... then there is only one possible world; things could not "have gone any differently than they in fact have gone".

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user user12103
+ 9 like - 0 dislike

I'll briefly respond to these critics in the order I read them.

To Mitchell:

Since I reconstruct ordinary QM, I can make any state I like, including EPR states, GHZ states or whatever. At the level of the CA, most of these states will be blurred to some extent, and they contain complex phases that may seem to be physically meaningless. But, well, that's what you asked for. But read my answer to Ron as well. Now of course, your point is well taken, it would be nice if one could follow in great detail what actually happens when an EPR experiment is done. This is hard. It is easier to illustrate interference experiments at a more basic level. In principle, I see no obstacles at all.

To Ron:

If you really understood me 'perfectly', you would not have said that I am "putting [quantum mechanics] in". What I am putting in is the quantum states, see previous answer (as well as later explanations). You may continue calling me names for that. But what I get out is that these states obey Schroedinger equations. You could at best object that the Schroedinger equations will give the correct solutions of the CA equations only at integer time steps, but if these steps are as small as the Planck time, then that's good enough for any experiment.

So the CA obeys QM equations that would agree with conventional QM at integer time steps, or equivalently, as long as you limit the separations of your energy levels to amounts much less than the Planck energy. Good enough for any of today's experiments. Well, if you still think you understood me perfectly, please go back to sleep.

To Mitchell's next statements:

Those random number generators are deterministic, like everything in my theory. So their outcome does depend on the past, whether you like it or not. The mapping from the CA states to the template states used in QM implies that any modification in your random number program will end up describing the scenery in terms of totally different CA states.

Then, as I have already stated, the phases of the template states (IN THE CA BASIS, OF COURSE!) are unphysical, and this means that, as in real qm, you can't specify the outcomes of measurements of two non-commuting operators at the same time. Yes, if you forget to look carefully at the CA states, it looks like cheating, like finetuning the initial conditions, but it is not; since you can't move from one universe into another, the initial conditions must be that the CA is in one precisely defined mode at all times. This means that, at all times, the universe is in one exactly defined "quantum" state. It is the state in which the automaton's observables are diagonalised and are in one eigenstate only. Any "superposition" of two or more of such states is not ontological anymore. But, what seems to confuse most of you, is that nevertheless the Universe's wave function obeys a linear Schroedinger equation. Superpositions are allowed in the 'template' states ...

"... these models, though quantum, aren't counterexamples to Bell" ? So, I got through to you halfway. My models indeed are quantum. All I have to convince you of next, is that any quantum state is admissible as a probabilistic description of the CA, so there is no obstacle against creating the initial conditions of a Bell experiment, and remember that the template states are complicated superpositions of CA states again, so these indeed lead to apparent interference phenomena.

Ron makes the remark that my "assumption is completely unjustified" that one can take superimposed states of the CA. Please Ron, think again. The "superposition" is nothing but a probabilistically smeared state, and because the CA merely permutes its ontological states, this probabilistically smeared state evolves in line with both qm and probability theory. So there is no objection at all! And this allows me later to go to another basis, that of the templates, any way I please!
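The consistency claimed here is easy to verify in a toy model (a made-up 3-state shift with phases chosen at random, my own illustration): permuting a probability distribution classically gives exactly the Born probabilities obtained by evolving amplitudes $\sqrt{p_i}\,e^{i\phi_i}$, whatever the phases.

```python
import cmath
import random

N = 3
perm = [1, 2, 0]                   # toy automaton rule: state i -> perm[i]
p = [0.5, 0.3, 0.2]                # classical probability distribution over states

# Classical evolution: the distribution is simply permuted along with the states.
p_next = [0.0] * N
for i in range(N):
    p_next[perm[i]] = p[i]

# "Quantum" description: amplitudes sqrt(p_i) e^{i phi_i} with arbitrary phases.
random.seed(0)
psi = [cmath.sqrt(p[i]) * cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
       for i in range(N)]
psi_next = [0j] * N
for i in range(N):
    psi_next[perm[i]] = psi[i]

# Born probabilities reproduce the classical evolution, whatever the phases.
q_next = [abs(a) ** 2 for a in psi_next]
print(p_next, q_next)
```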

What confuses him is that, at the level of the CA, all of this is so trivial. What makes my theory non-trivial is the subsequent transformations in Hilbert space. It's like shouting that the emperor doesn't have any clothes on; please wake up.

"... If you don't know which basis you are in, you describe this lack of knowledge by a probability distribution on the initial state, not by probability amplitudes", he says. Wait a minute: why not? I admit that the phases of the amplitudes there don't seem to do much, but that's deceptive; the phases allow me to make my mathematical transformations. It's a trick, yes, but a very handy one! And AFTER these transformations are done, one DOES get quantum superpositions out.

I have explained why I want free theories to start from. The CA models I had used previously had interactions that are so strong that doing math with them gets to become too complicated. So, I start with non-interacting systems. Leave the (deterministic) interactions for later.

The world sheet lattice is not conformally invariant. You have a point there, and indeed, I now think that one has to replace that lattice with a continuum at some stage; I do not think this changes things very much, the lattice can be taken as tiny as one wishes.

This is also my answer to Chris: "Where is the mathematics behind all of this?" He seems not to like the lattice cut-off on the string world sheet. Well, we commit such crimes in all our quantum field theories: give them a lattice cut-off and then send the lattice to the continuum. I admit that I haven't explored yet how this goes in practice in string theory. The commutation rules and constraints have to be taken into account carefully. What I observed is that the size of the world sheet lattice doesn't matter; the target space lattice keeps a fixed lattice mesh size.

Chris thinks I am doing metaphysics. Well, I always thought that much of string theory is metaphysics, where one jumps from one conjecture to the next. I found it to be a great relief to discover that string theory generates a well-defined lattice in target space. Let me add that in the paper I put the lattice in Minkowski space, but this might not be right, or at least not useful. It may well be better to keep Minkowski time continuous.

Ron also remarked that "the worldsheet is totally nonlocal in space time". What makes him say that? If you have a bunch of closed strings, and if these behave classically when considered on a space(time) lattice, then that's local in the classical sense. Of course, strings spread a bit, but only at Planckian scales.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user G. 't Hooft
answered Aug 11, 2012 by gthooft (919 points) [ no revision ]
Most voted comments
I just noticed the bit at the end: the reason that strings are totally nonlocal in spacetime is that there are no local interactions of the worldsheet in space-time, the interactions are by topology, and they only turn into a process of splitting and joining in Mandelstam light-cone picture. The strings are defined nonlocally as transformations of in states to out states, and the string sum over the worldsheet is not causal in any normal sense--- the worldsheet goes back and forth in time. It's only a Hamiltonian theory in light cone.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
Come on, learn some more about string theory. The perturbative system (that is, the string loop expansion) is perfectly causal.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
No it isn't. This is just false--- you need a string field Hamiltonian picture, and this is only available in light cone. It's acausal in the same way Feynman diagrams are, so it's not something one is unused to. This is a central paradigmatic difference between string theory and field theory, and it was swept under the rug in the 1980s, although it was known to practitioners (who considered it an embarrassment that they didn't have a Hamiltonian). In hindsight, it is the right thing, since the structure of the theory needs holography. Ordinary field theory Feynman diagrams are acausal too.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
@Ron, OK, maybe this is again just a question of semantics. I would say that Feynman diagrams represent a causal QFT (if done correctly), no matter which gauge choice or coordinate choice you use. Same for string theory.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
This is true for Feynman diagrams in field theory, they represent a causal QFT, because you have a basis of local field operators. It is not true that the string diagrams represent a local field theory within string theory because you don't have local field operators. It is still 'causal' in the Mandelstam sense, of analyticity of S-matrix, and in the AdS/CFT sense that the boundary CFT is causal in the usual field theory way.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
Most recent comments
And the automaton state is also classical, so when you observe the measuring device, and you gain information, and you partially probabilistically collapse the CA state, how the heck can you get phase superpositions of CA variables describing the thing? The way you do it is to just assume that measurement and projection are as in QM, because the state space and evolution equation are as in QM (because you chose to describe it this way), but there is no argument that says how to determine the phases from the probability distribution (these are arbitrary). Since all you have on the CA is

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
probabilities. I will try to give a precise example: suppose you tell me that the state where an electron has spin up has probability distribution $\rho_1$ on the CA variables, and the state where it has spin down has probability distribution $\rho_2$. What is the probability state corresponding to all the superposition states? To construct this, one has to have the freedom to make an SU(2) transformation on $\rho$'s, and these $\rho$'s are not in a very symmetric space--- they have pointy corners (the probability states where you know the CA state for certain). How do you get SU(2)?

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
+ 8 like - 0 dislike

(I apologize if this comment pops up twice, I don't quite understand how it works here)

To Ron:

Don't worry about the authority issue; it's fine with me if you don't take my authority for granted. But it helps if you look at my papers more carefully.

Back to the issue: remember that the true, "ontological" state of the universe is assumed to be one single mode of the CA. No superpositions, ever. But then, we make a basis transformation. We've learned this when doing quantum mechanics, so we do it all the time. All I ask is: consider the CA as a system in a special basis, call that the "ontological" basis. Now consider some transformation to a different basis. The simplest such transformation is a (discrete or continuous) Fourier transformation, but in the real world, probably, the transformations needed will be much more complicated. After you've done that, you'll find that the time evolution in that basis, like in any basis, is described by a Schroedinger equation. But, because of these transformations, all states you will encounter from now on will be quantum superpositions of CA states. This does NOT mean that now the universe is in a superposition, it simply means that the states we use, I call them templates, are superpositions. Well, this means that if you transform back, the CA states of the universe are superpositions of our template states.
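The discrete Fourier transformation mentioned here can be made explicit in a toy model (a 4-state cyclic automaton of my own choosing): the Fourier modes are eigenstates of the shift, and it is these "template" states that carry the complex phases.

```python
import cmath

N = 4
w = cmath.exp(2j * cmath.pi / N)   # primitive N-th root of unity

def shift(psi):
    """One automaton step: basis state k -> k+1 (mod N)."""
    return [psi[(k - 1) % N] for k in range(N)]

def template(m):
    """Fourier 'template' state: amplitudes w^{mk} / sqrt(N)."""
    return [w ** (m * k) / cmath.sqrt(N) for k in range(N)]

# Each template is an eigenstate of the shift, with eigenvalue w^{-m}:
for m in range(N):
    t = template(m)
    s = shift(t)
    print(m, s[0] / t[0])          # the ratio s[k]/t[k] equals w**(-m) for every k
```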

This is how superpositions come about in my theory.

You argue: "Suppose you tell me ...", no, I didn't tell you that. You are exactly repeating the basic error people commit when arguing away hidden variable theories. This is what I mean when I claim that what's wrong with Bell like arguments is on page one, line one, the assumptions. The difference between an electron with spin up and an electron with spin down is only one bit of information, also for the CA. Ontologically, the CA is never in a superposition. Our description of it is, because of our lack of knowledge.

Only after you measured the spin, up, down, sideways, whatever, the macroscopic measuring device is in a CA state that is pronouncedly different depending on the outcome. But also when you rotated the measuring device to observe spin in a different direction, you made a colossal change in the CA configurations.

Maybe the best way of phrasing the answer to your question is: the rho_1 and the rho_2 differ by many bits of information due to the fact that your measuring device is different in these two worlds, but by only one bit of information that corresponds to the outcome of the measurement. Actually, rather than rho_1 and rho_2, I would be inclined to give you set_1 and set_2, where these sets contain many ontological values of the CA. If you decide to switch the orientation of your measuring device, set_1 and set_2 have no element in common. There is one bit of information in set_1 that gives the outcome of the experiment, and one bit in set_2 giving the outcome of the experiment there. There is no overlap, but, by ignoring the environment, our 'template states' which are referring to the electron only, are superimposed. The phases of these superpositions are meaningless, because set_1 and set_2 do not overlap.

Too few CA states to factor big numbers ... bravo, this is the one point where my theory makes a prediction, and I mentioned this in some of my papers: my prediction is that there will be difficulties to fabricate the 'perfect' quantum computer. You know that the quantum computer is based on two conflicting requirements of its physical system: you need the absence of interactions in order not to disturb the quantum coherence of states, while interactions will be needed to read off what the states are. My prediction is that the CA underlying our physical world will generate interactions that cannot be tuned any sort of way, so the space between Scylla and Charybdis is finite, and will generate failures in the quantum computer.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
answered Aug 11, 2012 by gthooft (919 points) [ no revision ]
"The difference between an electron with spin up and an electron with spin down is only one bit of information"... What about the difference between one orientation of a polarization filter, and another orientation? The number of bits there is bounded only by the angular resolution you can achieve. It's a lot of information to hide in the vacuum or in the choice of phases.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Mitchell Porter
The problem is that your theory doesn't predict that quantum computers fail! You only say that it does. There is no barrier to doing a full quantum computation in your systems using your discrete permutation exp-Hamiltonians. This is one reason why I am certain that they are not equivalent to CA's. The claim that you are "merely" doing a basis rotation is false--- it is true mathematically, but it is false that this rotated basis describes any of the ontological CA states. What it does describe are hypothetical superpositions of the CA states, that should not appear in a true CA theory.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
Also, you can edit your 3 answers into one answer, this is how it is done.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
Apologies, I still don't know when a comment becomes visible to other readers and when not, and how to put a comment where. I regret it if my answers are not in a causal order, but I don't want to edit 3 answers into one.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
@G.'tHooft: It's ok, you can figure out the order from the dates. But these little comments are temporary, and they might get erased at some point entirely, unfortunately. The multiple answers are good in one way, in that it allows people to give you reputation which allows you to do more things.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
+ 7 like - 0 dislike

To Ron:

The difference between the automaton states representing a filter in one direction, and a filter that is slightly rotated, is huge, because these systems are macroscopic.

Now you might wonder however, whether, in principle, we could be dealing with a device that rotates the filter in response to the outcome of the measurement of some quantum object, like the spin of some other electron. In that case, a difference in what might have been a single cell in the automaton, has grown into a macroscopic deviation (compare a classical mechanical system with a positive Lyapunov exponent). But by the time we are able to measure the electron with a rotated filter, the difference has become macroscopic.

A lot of information to hide in the vacuum? Not at all, if the vacuum can be imagined as a CA in a chaotic mode, and if indeed the meshes of the space-time lattice are of the order of the Planck scale. You can put huge amounts of information there.

Please remember that, if you superimpose two states of, say, an electron, you are not really superimposing two states of the automaton, but you are superimposing two states of the templates you are using, in order to get the best template to describe the new situation, which in reality is an automaton in a state that differs from both others; it isn't a superposition. This is what I tried to explain in my paper on the "Collapse of the wave function and Born's rule". I found that we have to work with sets that represent allowed states of the automaton. Since we do not know exactly the initial state, we can use the rule that the probabilities are proportional to the sizes of the sets. This is what I concluded when using density matrices to see how states get smeared due to decoherence effects in the environment. A system interacts weakly with its environment, and smears its states a bit. When we do a measurement, we ignore the states of the environment.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
answered Aug 12, 2012 by gthooft (919 points) [ no revision ]
@LubošMotl rotations don't commute in classical physics either, but infinitesimal ones do. Isn't it therefore the error tending to zero that matters when commuting observables in classical physics?

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Physiks lover
@Physikslover: Lubos has a stronger point--- any quantity you can imagine about the universe is definite in a CA state, meaning if you know the CA state, you know the quantity. So consider a one-bit two-dimensional Ising model CA living on top of a black-white checkerboard pattern. The observable bit "the total number of 1's is even" and the observable "the total number of 1's on black sites is even" both necessarily commute--- learning the value of one doesn't depend on the other. Similarly for learning the value of any function which is definite on CA.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
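Ron's checkerboard point can be illustrated with a toy sketch (the grid, seeds, and function names here are mine, not from the discussion): any two definite functions of a known CA state trivially commute, because reading them off does not alter the state.

```python
import random

random.seed(0)
N = 8  # an N x N grid of one-bit cells, laid over a checkerboard
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def total_parity(g):
    """Observable 1: is the total number of 1's even?"""
    return sum(sum(row) for row in g) % 2 == 0

def black_parity(g):
    """Observable 2: is the number of 1's on 'black' checkerboard sites even?"""
    return sum(g[i][j] for i in range(N) for j in range(N)
               if (i + j) % 2 == 0) % 2 == 0

# Reading the observables in either order gives identical results:
# they are definite functions of the CA state, so they commute.
a1, b1 = total_parity(grid), black_parity(grid)
b2, a2 = black_parity(grid), total_parity(grid)
assert (a1, b1) == (a2, b2)
```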
@LubošMotl: I never claimed these musings work, all I say is that I can't rule it out. It is difficult essentially exactly for the reason you state: how the heck do you get randomization of one observable in response to learning another? This is not a no-go, essentially for the reason 't Hooft states, there is conservation of entropy on CA, so learning a certain observable's value in an entropy-conserving way randomizes others. But you shouldn't do it by formally making an enormous Hilbert space, and rotating basis, because there is no reason to suppose rotated states are realized.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
@LubošMotl: Here is an example of two classical probability "observables" on a 4-bit classical system, b1,b2,b3,b4. I require there to be an equal number of 1's and 0's. Observable A tells you if b1==b2 and swaps bits 3 and 4 in the process (projection plus stochastic transformation), while observable B tells you if b1==b3 and swaps bits 2 and 4. These observables don't commute for a while, and then settle down once you do enough measurements to learn the whole state. If you have a larger and larger number of bits, it takes forever to settle down, and they are always noncommuting.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
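Ron's 4-bit example can be made concrete (a sketch; encoding his "measurement = projection plus stochastic transformation" as a read-plus-swap is my reading of the description):

```python
def measure_A(b):
    """Report whether b1 == b2, and swap bits 3 and 4 as a side effect."""
    outcome = (b[0] == b[1])
    b[2], b[3] = b[3], b[2]
    return outcome

def measure_B(b):
    """Report whether b1 == b3, and swap bits 2 and 4 as a side effect."""
    outcome = (b[0] == b[2])
    b[1], b[3] = b[3], b[1]
    return outcome

# Same initial state (two 1's, two 0's), two measurement orders:
s1 = [1, 0, 1, 0]
out1 = (measure_A(s1), measure_B(s1))

s2 = [1, 0, 1, 0]
out2 = (measure_B(s2), measure_A(s2))

# The order of measurement matters, both for the outcomes and for the
# final state -- the two classical "observables" do not commute.
assert out1 != out2 and s1 != s2
```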
(@LubošMotl) "...your musings about QM's 'emergence' from classical CAs are as nonsensical as those by G. 't Hooft. The logical frameworks of quantum physics and classical physics are entirely different..." Who knows? See "Reconstruction of Gaussian quantum mechanics from Liouville mechanics with an epistemic restriction", arxiv.org/abs/1111.5057

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user user12103
This gives you a certain number of bits of information, and this means that the information in the perturbation is not small (but small compared to the total number of bits in the CA, of course). Under these circumstances, I wasn't able to show that the quantum-like aspects are preserved (complex eigenvalues and the like). The disheartening thing is that you are doing basis transformations willy-nilly that I see as only justified for states which are perturbations of the steady state, in arbitrary cases where the CA is doing anything it wants at all, and then it is already quantum, it isn't

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
quantum emerging from probability. But when I read your intuitions about vacuum entanglement, and how the measurement selects the right outcome, it is the same ideas as the ones that come from the perturbation scheme. So this is what confused me--- the perturbation idea and your "just use a Hilbert-space anyway, even though it doesn't mean anything" idea, are close in intuition, but different in details, in that your method makes no assumption about the variation in the CA probability distribution being slow or smooth.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
+ 7 like - 0 dislike

To Ron:

Maybe we are getting somewhere. You say:

"Taking a formal Hilbert space, asserting that one has an unknown ontic state, and then formally defining operators is not justified..."

Wait, isn't that what we always do in science in general and in QM in particular? We concoct a model, conjecture an evolution operator, and ask how any initial state evolves. My model just happens to be a CA, my evolution operator just happens to have only ones and zeros, and that only in a very specially chosen basis, and, well, who knows what nature's ontic state is?

I find that if the universe starts out in just any CA state, it continues to be in exactly one CA state. This is all I do. There's a superselection rule: you can't hop from one CA mode into another.

There is some freedom in choosing the eigenstates of H. If a system has a discrete time variable, you can keep the eigenvalues within an interval. The only constraint delivered by the CA theory is that the levels form sets of levels such that within each set they are equally spaced (these are the discrete harmonic oscillators, or more precisely: periodic subsystems).
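The equal spacing within each set can be checked directly in a tiny example (a numerical sketch of mine, not from the papers): a deterministic subsystem cycling through N states has an evolution operator whose eigenvalues are the N-th roots of unity, so the corresponding phases are spaced by exactly 2*pi/N.

```python
import numpy as np

N = 5
# One-step evolution of a deterministic system cycling through N states:
# a cyclic permutation matrix U, with U e_k = e_{(k+1) mod N}.
U = np.roll(np.eye(N), 1, axis=0)

# Its eigenvalues are the N-th roots of unity; the corresponding
# "energy levels" (eigenvalue phases) are equally spaced by 2*pi/N.
phases = np.sort(np.angle(np.linalg.eigvals(U)))
spacings = np.diff(phases)
assert np.allclose(spacings, 2 * np.pi / N)
```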

My earlier CA models indeed had perturbation expansions where convergence was an issue. In attempting to get models that I can use to answer your (and my own!) questions, I was demanding too much. But I don't think I understand exactly what you try to say in your last paragraph. There is the vacuum state, a formal superposition of many CA states whose coefficients are stationary, and there are perturbations around that. Earlier models had the problem that the excitations above the vacuum state hardly look like particles, as we have not only no Lorentz invariance, but not even Galilei invariance, which was a nuisance, although it has nothing to do with the real quantum issues addressed here. My latest ideas about superstring theory are much better in this respect. My work on that is not finished, but rotation invariance and Lorentz invariance seem to be quite possible here.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
answered Aug 13, 2012 by gthooft (919 points) [ no revision ]
I know what you are doing, you don't need to say it a thousand times. I have no problem with CA, or law of evolution, that's fine. The main problem is "formally defining operators", which you then use as in ordinary QM. The problem is that you have only a CA evolving. This means you can define probability distributions on the CA, and you can do operations on the $\rho$, but if you go on to define a Hilbert space on the CA, you need to tread with care, because it will be easy to do operations on the Hilbert space that take you out of the space of probability distributions on the CA.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
I agree that time evolution won't do that, but preparation of superpositions of intermediate variables looks like it does. Your claim is probably "but if the total H is just a permutation in one basis, how can a state preparation inside the system, so to speak, knock you into a nontrivial global superposition?" The reason it isn't clear is that the projection operators corresponding to doing a measurement in the interior aren't necessarily respecting the probability space structure.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
This is exactly the same as saying you can surely prepare states that violate Bell's inequality and do quantum computation in your system, even though you say you can't, because the global exp-H won't let you. The global exp-H is irrelevant if the intermediate information you get from measurements projects you into a state which is no longer diagonal on the global variables. Such a state can never arise starting from a probability distribution (or equivalently from an unknown ontic state). So there is a restriction on the quantumlike dynamics you can get, a real restriction, it's not QM.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
When I said "perturbation" I didn't mean perturbation theory, but this: start with a probability distribution $\rho$ on automaton states, and consider $\rho + \Delta \rho$ perturbations to an initial stationary distribution. The evolution of long-wavelength perturbations has a lot of features in common with QM, and is intuitively similar to your ideas. But it doesn't allow you to go on and define a Hilbert space in any obvious way. I mean, you formally can do it, the way you do in what you do, but the natural operations on the probability distribution corresponding to learning a bit

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
of information about the probability distribution doesn't ever produce a global superposition, it's known from probability theory what this does--- it projects you according to probability law. So there is a breakdown of QM in this type of model, and yet I don't know if it even reproduces QM, because you need to embed the Hilbert space into the classical probability space, which might or might not be possible. I can't prove it one way or the other, but having a formal embedding is really annoying, since it distracts from this, which I think is the main question.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
+ 6 like - 0 dislike

EDIT: Explanation in light of 't Hooft's answers

I have been getting downvotes, possibly because people perceive a disconnect between the comments I made in response to 't Hooft's answers, and the content of this answer. The two sets of statements are not incompatible.

I would like to say where I agree with 't Hooft:

  • I don't think hidden variables are impossible.
  • I do think that it might be possible to reproduce something approximately like QM from something which is exactly a classical automaton. (I give it a 50% chance of working, I can't do it yet, but it looks possible, and if it is possible, I give it an 80% chance of being true, therefore overall, I give a 40% chance to this scenario.)
  • I don't think other people's criticism of his program is valid, because people tend to believe hidden variables are just plain impossible, and I don't see a proof. The proofs are for local hidden variables or for naive hidden variables.

My criticism is not of the general program, it is of the precise implementation, as detailed in this paper and previous ones. The disagreements come from the mismatch between the Hilbert space that 't Hooft introduces without comment, as a formal trick, and classical probability space:

  • 't Hooft considers the space of all possible superpositions of states of a classical automaton, plus an exponentiated Hamiltonian that reproduces the automaton behavior on a discrete time. This Hilbert space is formal, not emergent, it is a trick for rewriting probability distributions.
  • 't Hooft says that so long as the basis states evolve according to permutation, there are never any superpositions in the global states. But he then goes on to discuss operators whose eigenvectors correspond to definite states of interior subsystems, and he claims that it is possible to prepare superpositions of these subsystems using these operators. The process of measuring these operators does not, as I see it, necessarily have a clear meaning in terms of the no-superposition global states, and it does not correspond to a classically allowed operation on the CA involved.
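The "exponentiated Hamiltonian that reproduces the automaton behavior" in the first bullet can be written down explicitly for a tiny case (my sketch of the standard construction, not code from the papers): take the one-step permutation matrix U and set H = i log U, so that exp(-iH) is exactly the automaton's update, even though H itself acts on the full space of superpositions.

```python
import numpy as np

def taylor_expm(A, terms=60):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# One time step of a 3-state automaton: the cyclic permutation 0 -> 1 -> 2 -> 0.
U = np.roll(np.eye(3), 1, axis=0).astype(complex)

# Formal Hamiltonian H = i log U, built on U's eigenbasis.
# U is unitary (a permutation matrix), so H comes out Hermitian.
w, V = np.linalg.eig(U)
H = V @ np.diag(1j * np.log(w)) @ np.linalg.inv(V)
assert np.allclose(H, H.conj().T)

# exp(-iH) reproduces exactly the automaton's one-step map, even though
# H is defined on the whole Hilbert space of superpositions of CA states.
assert np.allclose(taylor_expm(-1j * H), U)
```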

If it is possible to get quantum mechanics from CA, then I agree with nearly every intuitive statement 't Hooft makes about how it is supposed to happen—including the "template" business, and the reduction to Born's rule from counting automata states (these intuitions are horrendously vague, but I don't think there is anything wrong with them). I only disagree with the precise stuff, not the vague stuff (although if QM does not ever emerge from CA, the vague stuff is wrong too; in that case, I would just be sharing 't Hooft's wrong intuition). There is a slight difference in intuition in that I think that the violation of Bell's theorem comes from nonlocality not from superdeterminism, but this is related to the precise implementation difference in the two approaches. I will focus on the disagreements from now on.

Probability distributions on CA

Consider a CA where we know the rules, we know the correspondence between the CA and the stuff we see, but we don't know the "ontic state" (meaning we don't know the bits in the CA). We make a probability distribution based on our ignorance, and as we learn more information from observation, we make a better and better probability distribution on the CA. This is the procedure in classical systems, it can't be fiddled with, and the question is whether this can ever look like quantum mechanics at long distances.

Luboš Motl asks the fair question—what is a noncommuting observable? To describe this, consider a system consisting of $2N$ bits with an equal number of zeros and ones. The measurement $A$ returns the parity of the number of $1$'s in the first $N$ bits, and performs a cyclic permutation one space to the right on the remaining $N$ bits. The measurement $B$ returns the parity of the number of $1$'s in the bits at even-numbered positions (it's a staggered version of $A$) and permutes the odd bits cyclically. These two measurements are noncommutative for a long, long time; when $N$ is large, you need order $N$ measurements to figure out the full automaton state.
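The $2N$-bit example can be simulated directly (a sketch of mine; the shift conventions are one concrete choice consistent with the description):

```python
def measure_A(b, N):
    """Parity of the first N bits; cyclically shift the last N bits right."""
    outcome = sum(b[:N]) % 2
    b[N:] = [b[-1]] + b[N:-1]
    return outcome

def measure_B(b, N):
    """Parity of the even-position bits; cyclically shift the odd-position bits."""
    outcome = sum(b[0::2]) % 2
    odd = b[1::2]
    b[1::2] = [odd[-1]] + odd[:-1]
    return outcome

N = 4
start = [1, 0, 1, 0, 1, 0, 1, 0]  # equal numbers of ones and zeros

s1 = list(start)
out1 = (measure_A(s1, N), measure_B(s1, N))
s2 = list(start)
out2 = (measure_B(s2, N), measure_A(s2, N))

# The two measurement orders leave the automaton in different states:
# operationally, the observables do not commute.
assert s1 != s2
```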

Given a full probability distribution on automaton states $\rho$, you can write it as a sum of the steady state (say uniform) distribution and a perturbation. The perturbation behaves according to the eigenvalues of the linear operator that tells you how probabilities work, and in cases where you have long-wavelength measurements only (like the operators of the previous example), you can produce things that look like they are evolving linearly with noncommutative measurements that vaguely look like quantum mechanics.
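The steady-state-plus-perturbation decomposition can be sketched numerically (a toy doubly stochastic map of my own construction, standing in for the CA's probability dynamics):

```python
import numpy as np

# A toy "probability evolution" for a 4-state automaton: an average of
# the identity and two permutations.  The matrix is doubly stochastic,
# so the uniform distribution is a steady state.
P_shift = np.roll(np.eye(4), 1, axis=0)   # cyclic shift 0 -> 1 -> 2 -> 3 -> 0
P_swap = np.eye(4)[[1, 0, 3, 2]]          # swap states (0,1) and (2,3)
M = (np.eye(4) + P_shift + P_swap) / 3

uniform = np.full(4, 0.25)
assert np.allclose(M @ uniform, uniform)

# Any distribution rho splits into uniform + perturbation; the
# perturbation evolves linearly under M and decays at rates set by the
# subdominant eigenvalues (all of modulus < 1 for this aperiodic chain).
rho = np.array([0.4, 0.3, 0.2, 0.1])
delta = rho - uniform
for _ in range(100):
    delta = M @ delta
assert np.linalg.norm(delta) < 1e-6
```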

But I can't find a precise limit in which this picture reduces to QM, and further, I can't use 't Hooft's constructions to do this either, because I can't see the embedding of Hilbert space precisely in the construction. It can't be a formal Hilbert space as large as the Hilbert space of all superpositions of all automaton states, because this is too big. It must be a reduction of some sort of the probability space, and I don't know how it works.

Since 't Hooft's construction fails to have an obvious reinterpretation as an evolution equation for a classical probability density (not the Hamiltonian, which has an obvious interpretation, but the projections corresponding to measurements at intermediate times), I can't see that what he is doing is anything more profound than a formal trick, rewriting QM in a beable basis. This is possible, but it is not the difficult part in making QM emerge from a classical deterministic theory.

If you do it right, the QM you get will at best only be approximate, and will show that it is classical at large enough entangled systems, so that quantum computation will fail for large quantum computers. This is the generic prediction of this point of view, as 't Hooft has said many times.

So while I can't rule out something like what 't Hooft is doing, I can't accept what 't Hooft is doing, because it is sidestepping the only difficult problem—finding the correspondence between probability and QM, if it even exists, because I haven't found it, and I tried several times (although I didn't give up, maybe it'll work tomorrow).

Previous answer

There is an improvement here in one respect over previous papers—the discrete proposals are now on a world-sheet, where the locality arguments using Bell's inequality are impossible to make, because the worldsheet is totally nonlocal in space time. If you want to argue using Bell's inequality, you would have to argue on the worldsheet.

't Hooft's models in general have no problems with Bell's inequality. The reason is the main problem with this approach. All of 't Hooft's models make the completely unjustified assumption that if you can rotate a quantum system into a $0$-$1$ basis where the discrete time evolution is a permutation on the basis elements, then superpositions of these $0$-$1$ basis elements describe states of imperfect knowledge about which $0$-$1$ basis state is actually there in the world.

I don't see how he could possibly come to this conclusion, it is completely false. If you don't know which basis you are in, you describe this lack of knowledge by a probability distribution on the initial state, not by probability amplitudes. If you give a probability distribution on a classical variable, you can rotate basis until you are blue in the face, you don't get quantum superpositions out. If you start with all quantum superpositions of a permutation basis, you get quantum mechanics, not because you are reproducing quantum mechanics, but because you are still doing quantum mechanics! The states of "uncertain knowledge" are represented by amplitudes, not by classical probabilities.

The fact that there is a basis where the Hamiltonian is a permutation is completely irrelevant, 't Hooft is putting quantum mechanics in by hand, and saying he is getting it out. It isn't true. This type of thing should be called an "'t Hooft quantum automaton", not a classical automaton.
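The point that rotating a classical probability distribution never yields quantum superpositions can be made concrete (a toy illustration of mine):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# A classical probability distribution over two ontic states:
p = np.array([0.7, 0.3])
rotated = R @ p
# Rotation does not preserve the probability simplex: the entries of the
# rotated vector no longer sum to 1 (and in general can go negative), so
# the result is not a state of imperfect classical knowledge at all.
assert not np.isclose(rotated.sum(), 1.0)

# Quantum amplitudes, by contrast, live on the unit sphere, which every
# unitary (here, the same rotation) preserves:
psi = np.sqrt(p)
assert np.isclose(np.linalg.norm(R @ psi), 1.0)
```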

The main difficulty in reproducing quantum mechanics is that starting with probability, there is no naive change of variables where the diffusion law of probability ever looks like amplitudes. This is not a proof, there might be such effective variables as far as I know, but knowing that there is a basis where the Hamiltonian is simply a permutation doesn't help in constructing such a map, and it doesn't constitute such a map.

These comments are of a general nature. I will try to address the specific issues with the paper.

In this model, 't Hooft is discussing a discrete version of the free-field string equations of motion on the worldsheet, when the worldsheet is in flat space-time. These are simple $1+1$ dimensional free field theories, so they are easy enough to recast in the form 't Hooft likes in his other papers (the evolution equation is for independent right and left movers; the example of 4D fermions 't Hooft did many years ago is more nontrivial).

The first issue is that the world sheet theory requires a conformal symmetry to get rid of the ghosts, a superconformal symmetry when you have fermions. This gives you a redundancy in the formulation. But this redundancy is only for continuous world-sheets, it doesn't work on lattices, since these are not conformally invariant. So you have to check that the 't Hooft beables are giving a ghost-free spectrum, and this is not going to happen unless 't Hooft takes the continuum limit on the world-sheet at least.

Once you take the continuum limit on the worldsheet, even if the space-time is discrete, the universality of continuum limits of 2D theories tells you that it doesn't make much difference—a free scalar which takes discrete values is fluctuating so wildly at short distances that whether the target space values are discrete or continuous is irrelevant, they are effectively continuous anyway. So I don't see much point in saying that leaving the target space discrete is different from usual string theory in continuous space, the string propagation is effectively continuous anyway.

The particular transformation he uses is neither particularly respectful of the world-sheet SUSY nor of the space-time SUSY, and given the general problems in the interpretation of this whole program, I think this is all one needs to say.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon
answered Jul 27, 2012 by Ron Maimon (7,730 points) [ no revision ]
+1 for the remarks about ghosts, scalars, and SUSY. But you must be wrong to say that 't Hooft wants superpositions of his ontological basis states to play any role (such as representing states of imperfect knowledge). He explicitly says, e.g. in 1112.1811 page 1, that such superpositions do not occur...

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
See prac.us.edu.pl/~ztpce/QM/Bell_beables.pdf page 8 for what I think is 't Hooft's real philosophy. In this paper Bell constructs a hidden variables theory in which there is a "basic local beable" (he uses fermion number density) out of which all space-time objects are constructed. Bell remarks that any observable capable of specifying the positions of objects at mesoscopic resolution could play the role of basic local beable. Your experiment might be measuring spin, but even if your ontology only contains position beables, it will still describe the experiment correctly...

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
... because all the dynamics is in the wavefunction part of the ontology. Similarly, the QFT observables that 't Hooft defines in terms of his ultimate CA beables (in part 6 of 1205.4107 and part 5 of 1207.3612) are the second-order beables out of which the macroscopic world is constructed. The mysterious part is that the CA dynamics is also supposed to produce the right dynamics for measurements of all observables.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
@MitchellPorter: 't Hooft is saying wrong things, unlike the Bell paper you link. Bell is doing Bohm--- he has a wavefunction plus other variables which are wandering around in response to the wavefunction. The "beables" are those commuting variables which can be Bohmified by giving them definite values at one time, and then letting those values evolve stochastically to reproduce the quantum probability (but you need to know the wavefunction evolution). 't Hooft is finding beables, but he doesn't do Bohm. You can't do that. Bohm has a wavefunction; 't Hooft thinks it's not there in his theory.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon
't Hooft's theory doesn't have a wavefunction at a fundamental level, just the CA. But he's saying that the dynamics of his CA is the same as the dynamics of the beables in a particular Bell-Bohm theory. This Bell-Bohm theory is the one which can be described as a basis permutation in Hilbert space, but 't Hooft is saying that if you look at the eigenvalue dynamics induced by this permutation, it equals the CA.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Mitchell Porter
@MitchellPorter: I know what he is saying, it is wrong, wrong, wrong. You can't reproduce observed QM from CA the way 't Hooft says. You just get nothing at all--- a probability distribution on CA states, reduced by learning information. You don't get anything quantum in the least. You need to do Bohm theory on the beables 't Hooft finds, which means you need a wavefunction, like in any Bohmian model. 't Hooft is not using a Bohm wavefunction, so he isn't doing Bohmian mechanics, he is doing nonsense.

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon
+ 6 like - 0 dislike

Some thoughts on this topic.

1) For the newcomer to this topic: There are three 2012 papers by Gerard 't Hooft that you need to read. 1204.4926 maps a quantum oscillator onto a discrete deterministic system that cycles through a finite number of states. 1205.4107 maps an integer-valued cellular automaton (CA) onto a free field theory made out of coupled quantum oscillators. Finally, 1207.3612 adds boolean variables to the CA in order to obtain fermionic fields.

2) I find what Gerard 't Hooft says, about how local CAs might get around Bell's theorem, to be quite unconvincing. The theorem says it's impossible. The "superdeterminism" loophole should require completely unrealistic fine-tuning of the probability distributions over CA states (the distributions that correspond e.g. to distinct settings of measurement apparatus in an EPR experiment). It's not even clear to me that such fine-tuning is possible in his setup. The novelty of his constructs, and his particular language of "templates", etc., means that it's not immediately obvious how to bring what he does and says into correspondence with the established theorems. But at the current rate of engagement with his ideas, I expect that by the end of the month, we should have this aspect sorted out.

3) "The Gravity Dual of the Ising Model" would clearly be important for any attempt to get quantum gravity out of quantum cellular automata. The gravity dual here lives in AdS3, and AdS3 appears to have an unusual universality as far as string theory is concerned. It might be a factor of the near-string geometry in any geometrical background, for example. (I would try to be more precise but I find the literature hard to get into. But here is a short review.) There may be a reformulation of string theory in terms of a quantum CA where the cells are the "string bits". (Lubos Motl's early work ought to be relevant here!)

4) "Clifford quantum cellular automata" are a type of quantum CAs which map onto a classical CA in a way very similar to 't Hooft's mapping - see section II.B.1 of that paper. They ought to be relevant for attempts to understand and generalize the mapping in his 2012 papers, e.g. to the case of interacting fields.

5) 3&4 together offer an alternative to 2. That is, one might hope to get a quantum bulk theory from a classical CA on the boundary, that is equivalent to a quantum CA holographically dual to the bulk theory. Because of the nonlocality of the boundary-to-bulk mapping, it's much less obvious than before (to me, anyway) that you can't get Bell violations in the bulk.

6) Another place where contact might be made with existing research, is via consistent histories. Suppose you defined the histories in terms of the quantum observables whose eigenstates are employed in 't Hooft's oscillator mapping, while also using the same timestep. The CA is then a coarse-graining of the quantum evolution.

7) Finally, I'll put in a plug for my favorite way to get realism from QM, and that would be to treat tensor factors as the "cells". If we denote a two-dimensional Hilbert space as "H", then the state space of a cell (the set of possible states) might be H + H^2 + H^3 + ... If you consider the dynamics available to a CA like this, it's a lot more powerful, and my guess is that the simplest deterministic model of realistic QM would look more like that sort of CA, rather than like a CA with boolean or scalar cell-values.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Mitchell Porter
answered Aug 14, 2012 by Mitchell Porter (1,950 points) [ no revision ]
+ 6 like - 0 dislike

In another blog I posted the explanation given below; I edited it slightly more. Apologies for the repetitions. Please react.

The idea of my latest paper is simple. I have noticed in several blogs now that most people refuse to go with me all the way. I'll give my argument step by step and you may choose where you want to step out. I should add that some of the steps are still conjectural, not all the math has been worked out as clearly as I would like. Most importantly, as was mentioned in the paper, these results are independent of arguments such as Bell's inequalities. Of course I am worried about them, but below I just sketch a train of arguments where I don't see any basic mistake.

But this is the picture I get.

  1. Consider superstring theory, in its original, completely quantized version. Many people believe it might have something to do with the world we live in. It has interesting low energy modes that show some resemblance with what happens in the Standard Model: fundamental fields for particles with spin 0, 1/2 and 1, as well as gravitons for the gravitational field, and gravitinos. The theory is not universally accepted, but it is an interesting model with many features that look like our world. Certainly not obviously wrong, and certainly very much quantum. There is a Hilbert space of states. I only use it as a model to illustrate my ideas. But step out here if you want.

  2. Temporarily, I now have to put the world sheet on a (lightcone) lattice. This is a nuisance, and I quickly want to send this lattice to the continuum limit, but not all math has been straightened out. Step out if you want.

  3. The transverse coordinates of the string form a simple integrable quantum field theory on the string world sheet. This integrable system has left-movers and right-movers, forming quantum states, the string excitations. Now I discovered a unitary transformation that transforms the basis of this Hilbert space into another basis. In QM, we do this all the time, but what is special in the new basis is that it is spanned completely by a set of left-movers and right-movers that are integer-valued, in units whose fundamental length is \(2\pi\sqrt{\alpha'}\). Thus, we have operators taking integer values, and they are all commuting. What's more, they commute at all times. The evolution operator here translates the left movers to the left and the right movers to the right. Intuitively, you might find that the result is not so crazy: these integers are of course related to particle occupation numbers in quantum theory. I still have Hilbert space, but it is controlled by integers. If you don't like this result, please step out.

  4. Do something similar to the fermions in the superstring theory. They can be transformed into Boolean variables using a Jordan-Wigner transformation. The superstring theory of course has supersymmetry on the world sheet. That does not disappear, but it does become less conspicuous. Also the fermions are transversal. The Boolean variables also commute at all times. Next stop.

  5. Realize that, if Nature starts in an eigenstate of these discrete operators, it will continue to be in such an eigenstate. There is a superselection rule: our world can't hop to another sector of eigenstates, let alone go into a superposition of different sectors. Thus, if at the beginning of the universe we were in an eigenstate, we are still in such an eigenstate now. Step out if you want.

  6. I can add string interactions. My favorite one is that strings exchange their legs if they have a target point in common. This is deterministic, so the above still applies. In all fairness, I should add that I have not worked out the math here completely, there are still unclear things here. This is a stop where you may get out.

  7. Rotations and Lorentz transformations. To understand these, we need to know the longitudinal coordinates. The original, completely quantized superstring tells you what to do: the longitudinal coordinates are fixed by solving the gauge constraints (both for the coordinates and the fermions). The superstring has only real-number operators, of course non-commuting. This step tells us that only 10 dimensions work, and fixes the intercept $a$. Don't like it? Please step out.

  8. What I have here is a Lorentz-invariant theory equivalent to the model generated by the original superstring theory, but acting like a cellular automaton. It IS a cellular automaton. Any passengers left?
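Steps 3, 5 and 8 above can be caricatured in a few lines of code. This is only a toy sketch of my own (the lattice size, integer ranges and all names are illustrative assumptions, not 't Hooft's actual construction): integer-valued left- and right-movers on a periodic world-sheet lattice, evolved by pure shifts. The evolution is a permutation of the discrete basis states, so a state that starts out as a single basis element never leaves that basis, which is the superselection point of step 5.

```python
# Toy sketch (illustrative only): integer-valued left/right-movers on a
# periodic 1D lattice, evolved by pure shifts.  The evolution permutes the
# discrete basis states, so a single basis element stays a basis element.
import numpy as np

N = 8  # lattice sites on the (periodic) world sheet

rng = np.random.default_rng(0)
left = rng.integers(-3, 4, size=N)    # integer-valued left-movers
right = rng.integers(-3, 4, size=N)   # integer-valued right-movers

def step(left, right):
    # One time step: left-movers shift one site to the left,
    # right-movers shift one site to the right.
    return np.roll(left, -1), np.roll(right, 1)

l, r = left.copy(), right.copy()
for _ in range(N):                    # after N steps the shifts close
    l, r = step(l, r)

# Deterministic, reversible evolution: back to the initial configuration.
assert np.array_equal(l, left) and np.array_equal(r, right)
```

On a periodic lattice of N sites the shift evolution has period N, which makes the deterministic, reversible, automaton-like character of the dynamics explicit.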
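Step 4 invokes the Jordan-Wigner transformation. As a small numerical check of my own (a generic lattice-fermion illustration, not the world-sheet construction itself; helper names are mine), one can verify that Pauli strings with a Z-tail reproduce the fermionic anticommutation relations:

```python
# Verify numerically that the Jordan-Wigner map
#   c_j = Z ⊗ ... ⊗ Z ⊗ (X + iY)/2 ⊗ I ⊗ ... ⊗ I   (Z-string on sites < j)
# yields operators satisfying {c_i, c_j†} = δ_ij.
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def jw_annihilator(j, n):
    # Z-string on the first j sites, a Pauli raising/lowering factor at
    # site j, identities afterwards.
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return reduce(np.kron, ops)

n = 3
c = [jw_annihilator(j, n) for j in range(n)]
for i in range(n):
    for j in range(n):
        anti = c[i] @ c[j].conj().T + c[j].conj().T @ c[i]
        assert np.allclose(anti, np.eye(2 ** n) * (i == j))
```

The Z-strings are what make operators on different sites anticommute rather than commute; stripped of the strings, the remaining on-site variables are the commuting Boolean (occupation-number) data 't Hooft refers to.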

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
answered Aug 14, 2012 by gthooft (919 points) [ no revision ]
I have explained which stop I get out on: the stop is the place where you assume that a beable basis means full-on QM is equivalent to a classical CA without further work; see physics.stackexchange.com/questions/34165/… . Regarding point 6, this is not the proper way to add string interactions outside of light cone, but now you are saying light cone, so it might or might not be right, I don't know. This can be checked against Mandelstam's and Kaku and Kikkawa's papers.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
@Ron: But you might consider staying on. Once you agree that the beable basis exists (or might exist), just continue doing QM there. Observe however, that you can do the same with any totally classical system such as the planets obeying Newton's laws. Their evolution law (at integer time steps) is also a permutator. You may pause at the question how the "Earth-Mars exchange operator" evolves with time, and conclude that you can understand the physics of the system without solving the problem, but you might also add that operator to your set of observables. It's the same planets you talk about.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
I agree that the Earth Mars permutation is non-commuting, and that it acts on probability states, but I disagree that you can prepare eigenstates of this operator within the probability space, as they have both positive and negative values. This is why I consider small perturbations to a steady state, so that positive and negative values are allowed equally. You do not do this, so your formal Hilbert space is not properly embedded in a probability space. Once you do this, I am on board completely, except without any certainty that it is physically correct.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user Ron Maimon
@Ron: No, according to my rules, which are exactly as in QM, perturbations don't have to be small, not $\delta\rho$ but $\psi$ is the wave function. Its sign can be positive or negative, and its absolute square is the probability. Earth-Mars interchange acts on $\psi$, not $\rho$.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user G. 't Hooft
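The exchange above turns on how a classical permutation operator sits inside a Hilbert space. A small numerical illustration of my own (not from either poster): the "exchange" of two classical states is a unitary permutation matrix acting on $\psi$; its eigenvalue $-1$ eigenstate has amplitudes of both signs, which is exactly the feature Ron Maimon flags when one insists on staying inside a probability space.

```python
# A classical swap of two basis states, viewed as a unitary operator on psi.
import numpy as np

P = np.array([[0, 1], [1, 0]], dtype=float)   # swap the two basis states

# P is unitary: it preserves the norm of psi.
assert np.allclose(P @ P.T, np.eye(2))

# Real symmetric matrix: eigh returns eigenvalues in ascending order.
vals, vecs = np.linalg.eigh(P)                # eigenvalues -1 and +1

# The eigenvalue -1 eigenstate has amplitudes of opposite sign, so it is
# not a probability distribution over the two classical states.
psi_minus = vecs[:, 0]
assert np.isclose(vals[0], -1.0)
assert psi_minus[0] * psi_minus[1] < 0
```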
+ 1 like - 0 dislike

In response to the downvotes: 1) Please note that all of the conversation above has been using no mathematics... this is turning into a philosophical argument and should be closed as such. 2) Explain the reasoning for these downvotes. This thread seems to be becoming very biased.


It has as much plausibility as the other papers, because it depends on the other papers.
There are two big problems I see:

1) There are left-movers and right-movers, and there is a lattice cut-off. The cut-off does not affect the particle dispersion law: all modes with momentum below the Brillouin zone move exactly with the (worldsheet) speed of light. There is no direct interaction yet. We did not (yet) consider boundary conditions, so the string has infinite length. Thus, apart from the lattice cut-off in the world sheet, this is a quantum string. After the transformation described in Ref. [9], the space-time lattice disappears and now seems to look like a continuum.
-- This was an excerpt from the paper. It talks about "strings with infinite length" and ignores the lattice cut-off to describe the string. Where is the mathematics behind all of this?

2) The philosophy used here is often attacked by using Bell’s inequalities[1]—[3] applied to some Gedanken experiment, or some similar “quantum” arguments. In this paper we will not attempt to counter those...
-- The paper does not try to answer the problem with Bell's inequalities AT ALL. The point of the paper is to use the math to say something (i.e. an interpretation) about String Theory, but such an interpretation seemingly goes against Bell's inequalities.

Anyway, the paper tries to make String Theory resemble a discrete system of "bits of data", the resemblance being made by mathematics, and then studies the discrete system to try to say something about the classical-versus-quantum interpretation of String Theory. It is metaphysics at this point.

(I clarify that this is just my impression after reading the paper, which is all the question is asking for... even though I could be misunderstanding everything, and this work could turn out to deserve a Nobel Prize.)

This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Chris Gerig
answered Jul 24, 2012 by Chris Gerig (590 points) [ no revision ]
+ 0 like - 0 dislike

I think that physicists will generally ignore Prof. 't Hooft's research on CA superstring determinism until at least one dramatic, new testable prediction arises. I have suggested that the -1/2 in the standard form of Einstein's field equations should be replaced by -1/2 + dark-matter-compensation-constant. My guess is that CA superstring determinism is highly compatible with this new dark matter approach. If not, then CA research needs to do something else dramatic, such as give a testable explanation of the space roar or the GZK paradox.

This post imported from StackExchange Physics at 2014-04-01 13:15 (UCT), posted by SE-user David Brown
answered Dec 27, 2012 by David Brown (0 points) [ no revision ]

user contributions licensed under cc by-sa 3.0 with attribution required
