An idea I have long had, and might eventually start working on (actually, I just did), is based on the bad taste that «quantisation» à la Kostant, Vergne, and Souriau left in my mouth long ago. Now it used to be said that first quantisation is not even defined (so let's toss out Kostant and Souriau), but second quantisation is a functor. Yet there are foundational reasons for thinking that QFT is fundamentally wrong: it is just another useful asymptotic approximation, like the Law of Large Numbers.
The first reason is that quantum measurement suggests that the axioms about observables are mere approximations. (So we can toss out Irving Segal too.) Furthermore, no satisfactory relativistic theory of measurement has ever been accepted. QFT avoids the whole issue, but then, if observables are not fundamentally physical, why should algebras of observables be any better? Why should operator-valued fields be any better? So I no longer worry about renormalisation or QFT: a useful approximation can have divergences when one tries to apply it to some situation outside the range of validity of the approximation, without that amounting to a foundational crisis. (This is what people in Stat Mech learned long ago; in fact, it is practically a quote from Sir James Jeans's pooh-poohing of the whole H-theorem controversy.) That rather undermines the main motivation for string theory, too...
The second reason is that a quantum field, like a classical field, assumes there can be an infinite number of harmonic oscillators. But we have weighed the Universe, so there is a top energy level. And the effective Universe is finite in size, so Planck's Law suggests there is also a minimum energy level. So there are only a finite number of harmonic oscillators in the Universe (a bounded number, for a given time-slice), and so every Hilbert space is finite dimensional and every spectrum is discrete, just as my physics teacher told us all long ago. («Now remember, every particle is a particle in a box.») So we can toss out Reed and Simon too. (There might be something wrong about my using Planck's Law and a finite effective size of the Universe...)
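To make the counting explicit, here is a rough back-of-the-envelope sketch (non-relativistic, one dimension, a box of length $L$ and an energy cut-off $E_{\max}$; the set-up is purely illustrative, not a claim about the actual cosmological cut-offs). The levels of a particle in a box are
$$E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \dots,$$
so a top energy level $E_{\max}$ keeps only those $n$ with
$$n \le \frac{L}{\pi \hbar}\sqrt{2 m E_{\max}},$$
a finite number; the span of the surviving eigenstates is then a finite-dimensional Hilbert space with a discrete spectrum.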
You will say: but there are no finite-dimensional irreducible unitary representations of the Lorentz group of dimension bigger than one. But Gen Rel makes that objection less important, does it not?
Therefore the argument that goes back to Bohr and Rosenfeld, about using non-quantised gravity to probe quantum systems, is not so decisive: it is a proof by contradiction, but if their use of observables and measurement axioms can only be considered approximate, there is no longer anything decisive about the contradiction.
Non-commutative geometry rests on that whole Dixmier--Souriau thing, so toss out Alain Connes, too.
Fifty years from now, all this Quantum Gravity business will look the way the luminiferous ether looks to us today.
The real obstacles to reconciling Quantum Theory with Gen Rel are bad enough without imagining phony obstacles. Bell sensed this and worried about it: Quantum Mechanics lives on phase space, but Relativity of any kind (special or general) lives on configuration space, i.e. space--time (four dimensions, not 2^256...). (That was one advantage of QFT: it returned to actual space--time...) I feel that, even so, the most promising approach is still to take Quantum Theory and make it generally covariant (and this might not involve anything much worse than Yang-Mills theory), rather than to start with Gen Rel and «quantise» it. But even if one could overcome these difficulties, there does not seem to be any practical way to confirm such a theory experimentally without going into cosmology, where the observed facts are hardly as precisely established as the advance of the perihelion of Mercury was...
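Schematically, the mismatch I mean is just this (a cartoon, not a claim about any particular formulation): Quantum Mechanics is built on a phase space, e.g. on canonical pairs obeying
$$[\hat q_j, \hat p_k] = i\hbar\,\delta_{jk}, \qquad (q,p) \in \mathbb{R}^{2n},$$
while General Relativity is a theory of a field
$$g_{\mu\nu}(x), \qquad x \in M^4,$$
living on the four-dimensional space--time manifold itself.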
I would be curious to know what Alan Guth would have to say about this.
Assume that the Universe was one spherically symmetric particle in its ground state...