  Why does the divergence of perturbation theory in interacting QFT imply its Hilbert space to be non-Fock?

+ 5 like - 0 dislike
10890 views

Arnold remarked in a comment under his answer to Ron's question that

The fact that naive perturbation theory produces infinite corrections, no matter which finite counterterms are used, proves that no Fock space works.

I would like to understand how to infer the conclusion "no Fock space works" from "The fact that naive perturbation theory produces infinite corrections, no matter which finite counterterms are used".

asked Sep 17, 2014 in Theoretical Physics by Jia Yiyang (2,640 points) [ no revision ]

1 Answer

+ 3 like - 0 dislike

[In view of the discussion let me mention that the context is that of Poincare-covariant quantum field theories. It is clear that giving up covariance makes many things possible that are not possible otherwise, and allows one to make rigorous sense of renormalization in simpler situations, such as for free covariant fields interacting with classical external fields, or for the Lee model.]

The fact that naive perturbation theory produces infinite corrections, no matter which finite counterterms are used, proves that no Fock space supports an interacting quantum field theory. Hence there is no Hilbert space that features physical particles at every time.

Here is my non-rigorous proof:

If there were a Fock space defining the interacting theory at every coupling parameter $g$, it would represent the particles by annihilation fields $a_g(x)$ corresponding to some mass $m(g)$. Taking the limit $g\to 0$ (assuming it exists), we see that the Fock spaces at $g$ have the same structure as the limiting Fock space. By continuity, only continuous labels of Poincare representations can change; the others will be fixed for small enough $g$. The only continuous label is the mass, so the Fock spaces differ only by the mass $m(g)$. All other structure is rigid and hence preserved. In particular, if we assume the existence of a power series in $g$, the fields are given by

\[\Phi_g(x)=\frac12(a_g^*(x)+a_g(x))+O(g).\]

Now consider the operator field equations at coupling $g$. For simplicity take $\Phi^4$ theory, where they take the form (the limit term guarantees a correct definition of the cubic product)

\[ \nabla^2 \Phi_g(x)+ m(g)^2 \Phi_g(x)  + g \lim_{\epsilon\to 0} \Phi_g(x+\epsilon u)\Phi_g(x)\Phi_g(x-\epsilon u) = 0.\]

(This is called a Yang-Feldman equation.) Multiplying with the negative propagator $(\nabla^2 +m(g)^2)^{-1}$, we find a fixed-point equation for $\Phi_g(x)$, which can be expanded into powers of $g$, and all coefficients will be finite because the $\Phi_g(x)$, and hence their Taylor coefficients, are (after smearing) well-defined operators on the corresponding Fock space. Going to the Fourier domain and taking vacuum expectation values, one finds a perturbative expansion with finite coefficients, which is essentially the textbook expansion of vacuum expectation values corresponding to perturbation around the solution with the correct mass.
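
To make the step from the field equation to the expansion explicit, here is a compact (and equally non-rigorous) way to write it; the notation follows the answer above, and the abbreviation $\Phi_g^{\rm free}$ for the solution of the free equation with mass $m(g)$ is mine:

\[\Phi_g(x) \;=\; \Phi_g^{\rm free}(x)\;-\;g\,(\nabla^2+m(g)^2)^{-1}\lim_{\epsilon\to 0}\Phi_g(x+\epsilon u)\,\Phi_g(x)\,\Phi_g(x-\epsilon u).\]

Iterating this relation with the ansatz $\Phi_g=\Phi_g^{(0)}+g\,\Phi_g^{(1)}+\dots$ produces, order by order, the finite coefficients referred to above.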

This proof is not rigorous, but suggestive, surely at the same level as Ron's arguments. I don't know whether it can be made rigorous by more careful arguments. But it only makes the sort of assumptions physicists always make when solving their problems. If these assumptions are not satisfied but an interacting theory is still Fock, it at least means that there is no mathematically natural way of constructing the operators, even perturbatively. Thus for practical purposes, i.e., to use it for physical computations, it amounts to the same thing as if the Fock representation didn't exist.

answered Sep 17, 2014 by Arnold Neumaier (15,787 points) [ revision history ]
edited Sep 30, 2014 by Arnold Neumaier

Thanks and +1, a few questions:

1) So the proof isn't quite related to Haag's theorem, is it?

2) You define the interaction term as

$g \lim_{\epsilon\to 0} \Phi_g(x+\epsilon u)\Phi_g(x)\Phi_g(x-\epsilon u)$,

is there any condition needed to guarantee that such a limit exists? And could the divergence we encounter result from the fact that we normally use the more careless version $\Phi^3_g(x)$ (in which case your proof may be flawed even at physicists' level of rigor)?

3) If what you said is true, in what possible sense/way can an asymptotic Fock space emerge?

It is a version of Haag's theorem, as it has the same conclusion. I don't remember how Haag's theorem was originally proved and which alternative proofs exist. (Note that Haag's theorem is very old, one of the oldest results in algebraic QFT. It is just a precise formulation that explains why naive perturbation theory has to fail.)

The correct regularization of the field term (I forgot to mention the condition $u^2<0$) follows the actual construction in dimension $3$ rather than $4$, where the limit exists; I expect it to remain valid in 4D, where, lacking a construction, it cannot be known whether the limit exists.

The divergence indeed results from using in its place $\Phi(x)^3$, which is an ill-defined product of distributions. It is also this limit that moves you out of the Fock space; working with a fixed $u$ and a compact position space there are no divergences.
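
For a free field the point can be illustrated in one line (my own illustration, not part of the original comment): with $u^2<0$ as above and $\epsilon>0$ fixed, the point-split two-point function is finite,

\[\langle 0|\,\Phi(x+\epsilon u)\,\Phi(x-\epsilon u)\,|0\rangle \;=\; \Delta_+(2\epsilon u)\;\longrightarrow\;\infty \quad (\epsilon\to 0),\]

where $\Delta_+$ denotes the free two-point function. So the unsubtracted coincidence limit, and with it the naive cube $\Phi(x)^3$, is an ill-defined product of distributions, while the point-split expression at fixed $\epsilon$ is a perfectly good operator after smearing.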

At physicist's level of rigor, this regularization is equivalent to regularization by any of the more common methods. (In simple cases almost any regularization method works and gives identical results.) So my proof is a little more rigorous than one that already begins with a flawed starting point.

Asymptotic Fock spaces emerge for a different reason - in the presence of a mass gap one can construct asymptotically conserved currents for each bound state. This is the content of Haag-Ruelle theory. When there is no mass gap (such as in QED and the standard model), even the asymptotic spaces are no longer Fock spaces, since all asymptotic particles become so-called infraparticles. This terminology derives from the associated infrared divergences that appear when one ignores this fact. The correct asymptotic theory must then be formulated in terms of coherent states.
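
Schematically (my paraphrase of standard Haag-Ruelle theory, not part of the original comment), with a mass gap one builds asymptotic states by applying quasi-local operators $B_i(f_i,t)$, smeared with positive-energy solutions $f_i$ of the free field equation, to the vacuum $\Omega$:

\[\Psi^{\rm in/out}(f_1,\dots,f_n)\;=\;\lim_{t\to\mp\infty} B_1(f_1,t)\cdots B_n(f_n,t)\,\Omega,\]

and the closed span of such vectors carries a free Fock-space structure; these are the asymptotic Fock spaces referred to above.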

Actually, there are mathematically sound interacting quantum field theories, even in 3+1 dimensions. They may not be the most fundamental theories, but they are good starting points for understanding renormalization theory on a rigorous level. Not to mention all the perfectly well-defined interacting theories with cutoffs! Haag's theorem simply states that there are inequivalent representations of the CCR for operators that satisfy the axioms of QFT, e.g. the Wightman axioms. This means one has to choose one specific representation of the CCR. Nevertheless, within a representation it is possible to define both free and interacting theories. As an example, see the following rigorously renormalized QFTs:

http://projecteuclid.org/euclid.cmp/1103842445

http://scitation.aip.org/content/aip/journal/jmp/5/9/10.1063/1.1704225

@yuggib: Both examples are no good. The first paper deals with quadratic interactions, i.e., free field theories, and we understand those already without the extra gobbledygook introduced in this paper (each rigorous paper has its own different gobbledygook for writing down quadratic path integrals; quadratic path integrals require no gobbledygook to make sense). The second is a nonrelativistic 3+1 theory, where there are no backward-in-time diagrams, so no vacuum renormalization (no change in the vacuum under the interaction, no Haag theorem), and the authors use a cutoff anyway to regulate the theory.

Arnold's answer concerns real interacting theories with no cutoff, and the simplest examples here are nowhere near the stuff the rigorous people you are citing deal with. The rigorous literature here is, as usual, obscuring very simple ideas by burying them in author-specific useless nonsense which only gives a superficial impression of extra rigor. The only recent exception to this rule is Hairer.

@RonMaimon Before making such pretentious statements you should understand what you read. For example, in the second article the theory is renormalized and the cutoff is removed. It also describes the interaction between particles, which are not relativistic, and a relativistic field. This is actually a very simple example of a realistic QFT, and it has a lot of applications (e.g. crystal phonons, quantum optics, polarons...). It may not be as fundamental a theory as you would like, but it is nevertheless very relevant. The first source deals with quadratic interactions that may be easy, but they require a change of Hilbert space to be understood, in dimension 4 or higher.

Saying "no Fock space supports an interacting quantum field theory" is, at least, imprecise. And if you give a non-rigorous proof to a non-rigorous statement that is probably false, you are not doing anything interesting at all. Treating an interacting quantum field theory, even if simplified, with rigour and constructing a self-adjoint hamiltonian and thus a well-defined dynamics is surely more interesting. At least by my point of view.

Maybe you do not like rigour because you do not understand it? Well, I hope at least you understand all the mathematical subtleties that are hidden by your precious path integral, which is so straightforward in your opinion...

I understand rigorous stuff and I have no problem with good rigor. The first paper is an example of bad rigor, meaning a rigorous formulation that avoids the path integral in the case of a quadratic path integral. There is no problem formulating any well-defined quadratic path integral rigorously; there are no real mathematical subtleties. The subtleties come for nonquadratic path integrals.

The second paper is an example of something else, which is Lee-model renormalization, where the renormalization can be done perturbatively exactly in closed form. Both are misleading toy problems for the problem one is interested in, in relativistic theories, and the focus on such things, which avoid the difficult problem, makes the rigorous literature a complete waste of time, with Hairer being the exception that proves the rule.

I know the theory in the second paper is renormalized, it is a version of the Lee model. I am saying it does not have vacuum polarization because the nonrelativistic field doesn't go back in time, so there is no problem with divergent vacuum bubbles. The relativistic field, aside from its interactions with the nonrelativistic field, is free. Lee models are useful, because their renormalization can be done in closed form (that's why Lee studied them), but the renormalization suffers from a problem in relaxing the regulator, which is why the second paper keeps the regulator large and finite.

Nobody said "No Fock space supports an interacting field theory", this is a false statement. All interacting field theories with a mass gap have a Fock in and out space. The statement that Arnold is showing is that the Fock spaces for the free and interacting theories are not unitarily related. He is doing it in a way that does not discuss vacuum bubbles, but rather the interaction term.

What you say is true, but not relevant for this discussion, which is about theories where the particles are interacting and all the particles can go back in time. The problem with pretension is entirely on the side of the rigorous mathematical work. The lack of understanding is of the path integral, and avoiding the path integral for these problems is like the ancient practice of avoiding Archimedes-style infinitesimal summations when computing areas. While it is tricky to make the infinitesimal calculus rigorous, it was done by Weierstrass and Cauchy and others in the 19th century, and this is what finally resolved the ancient issues, not sidestepping the problem using rigorous methods that work for special cases. Although using special techniques is necessary at times, because rigor is important, in this particular case there is nothing gained. I should admit that since the second article is paywalled, I am reconstructing what they did from the abstract.

What you say about path integrals is not completely true. The path integral is far from a well-understood mathematical tool. There is nothing rigorous in the definition of the path integral in real time. In Euclidean time there are rigorous formulations of the path integral, both for particles and fields, but they require heavy tools of probability and stochastic integration. Then, rotating back to real time is a rigorous practice only in very few situations. So there are indeed a lot of problems in the definition of path integrals, even quadratic ones. Nevertheless they are very interesting, also mathematically. The definitions adopted by theoretical physicists are usually quite sloppy, and often unacceptable from a mathematical standpoint.

I know that theoretical physicists do not care about rigour, almost at all. And that does not diminish the value of their work, which often provides very precious intuitions. But this makes rigour neither useless nor bad practice. It allows one to understand things a lot more deeply, in my opinion. The problem is that rigorous results are very difficult to obtain. I understand these results may not be so interesting to you, but your opinion on them is too biased, in my opinion.

Perturbative renormalization is a lot less satisfactory than building a complete renormalized dynamics for the system, even from a physical standpoint. But to build a complete theory, you need to use the proper precision, and mathematical rigour. And "no Fock space supports an interacting field theory" is just a quote of Arnold's answer.

My opinion may be biased against this, but I insist that it is accurate.

I agree that constructing general quantum field theories rigorously is the central mathematical problem of our time. But the mathematicians most often have no idea what they are constructing, and construct nonsense trivialities instead, because this is what they understand.

The path integral is always defined via an algorithm to do stochastic sampling, and then you do Monte Carlo to find the correlation functions. I will explain below why this is difficult to embed into the rigorous literature. There are two difficulties, a technical difficulty and a formalism difficulty.

First, informally, quadratic path integrals are trivial to define (in Euclidean space): pick a Gaussian random variable for each Fourier mode, and Fourier transform the random numbers into a distribution. That's it, that's the definition of Gaussian path integration.

This is the complete algorithm to pick from the measure; I run it on my computer, it always defines a distribution in the limit of infinite volume, and there is no problem with the estimates showing that this always produces a convergent thing.
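
A minimal sketch of that algorithm on a finite periodic lattice (my own illustration in Python; the lattice size, mass, and normalization conventions are illustrative assumptions, not anything from the comment):

```python
import numpy as np

def sample_free_field(n=64, mass=1.0, spacing=1.0, rng=None):
    """One sample of a 2D Euclidean Gaussian (free) field on an n x n periodic lattice."""
    rng = np.random.default_rng() if rng is None else rng
    # Lattice momenta and the eigenvalues of the lattice Laplacian.
    k = 2 * np.pi * np.fft.fftfreq(n, d=spacing)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = (2 - 2 * np.cos(kx * spacing)) / spacing**2 \
       + (2 - 2 * np.cos(ky * spacing)) / spacing**2
    # A Gaussian random variable for each Fourier mode, weighted by sqrt of the
    # propagator 1/(k^2 + m^2); the FFT of real white noise supplies the Hermitian
    # symmetry needed for a real field.  (Overall normalization constants omitted.)
    noise_k = np.fft.fft2(rng.normal(size=(n, n)))
    phi = np.real(np.fft.ifft2(noise_k / np.sqrt(k2 + mass**2)))
    return phi

# Crude Monte Carlo estimate of a two-point function from independent samples.
rng = np.random.default_rng(0)
samples = [sample_free_field(rng=rng) for _ in range(200)]
print(np.mean([s * np.roll(s, 5, axis=0) for s in samples]))
```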

Unfortunately, this trivial statement of an algorithm and its convergence is disallowed as formal mathematics, at least without adding nonsense. Why? Because you aren't allowed to speak about a randomly picked thing, like a randomly picked real number between 0 and 1, within standard set theory, because the set-membership properties are paradoxical (in usual formulations of set theory there are non-measurable sets, so that you can't assign a probability to the random real landing in certain sets).

That's entirely the fault of mathematicians. They chose a stupid set-theory convention back in the 1920s, and stuck to it, despite nearly 100 years of criticism. In 1972, Solovay showed that you can alternatively choose a model of set theory where all sets are measurable, i.e. where you can say "pick a real number r uniformly at random between 0 and 1" and there is no contradiction.

The path from "choose a uniform real number" to "choose a distribution from a gaussian path-integral" is trivial.

Since even this trivial probabilistic algorithm is taboo, what do you do instead? You talk about something else! Like Hilbert spaces, Fock spaces, etc. Anything except what physicists are actually computing, which is a Monte Carlo integral defined on an infinite domain. The algorithms for picking stuff don't fail on an infinite domain, and the limiting estimates for infinite volume are not difficult. It's just that there is no standardized formalism for speaking about probability on such infinite domains.

But fortunately, there is a clever way around these formal limitations, exploited by Hairer. In stochastic PDEs, the concept of Wiener white noise (d-dimensional white noise is called "Wiener chaos" in this literature) is accepted and declared "rigorous" already, even when the white noise is defined on an infinite volume. Further, there is a well-developed collection of results for deterministic transformations of Wiener chaos in the SPDE literature.

But that's already enough to define the algorithm for picking from the general stochastic path integral. Why? Because you can reconstruct the algorithm that picks a distribution from a path integral as a stochastic differential equation whose only stochastic input is white noise. Then you can take the renormalization limit by mollifying the noise, and take the long stochastic-time limit to produce a pick from the path-integral distribution, and nobody can complain about lack of rigor anymore! That's exactly what Hairer did in his "regularity structures" paper. It is a complete and rigorous construction of a large class of path integrals (superrenormalizable theories), for the first time within a unified framework.

The inability to speak about probability is the formal difficulty. The technical difficulty (which is the only real mathematical difficulty in the problem) is, in Hairer's stochastic quantization framework, to define the renormalization procedure that bounds the regularized solutions to the SPDE, and show that there is a limit in distribution for both short-times and long-times. This is also difficult, but Hairer did it in a straightforward way for superrenormalizable theories, and we already know that it should be possible for all theories which make sense nonperturbatively.
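
A toy version of that picture (my own sketch, not Hairer's construction; the lattice size, step size, and couplings are made-up illustrative values): a Langevin equation whose drift is minus the gradient of the lattice $\phi^4$ action and whose only stochastic input is white noise, run to large stochastic time.

```python
import numpy as np

def langevin_step(phi, mass2, coupling, dt, spacing, rng):
    """One Euler step of d(phi)/dtau = -dS/dphi + white noise on a 2D periodic lattice."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
         + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / spacing**2
    drift = lap - mass2 * phi - coupling * phi**3          # -dS/dphi for phi^4 theory
    noise = rng.normal(size=phi.shape) * np.sqrt(2 * dt / spacing**2)
    return phi + dt * drift + noise

rng = np.random.default_rng(0)
phi = np.zeros((32, 32))
for _ in range(20000):                                      # long stochastic-time limit
    phi = langevin_step(phi, mass2=1.0, coupling=0.5, dt=1e-3, spacing=1.0, rng=rng)
print(float(np.mean(phi**2)))                               # crude estimate of <phi^2>
```

On a fixed lattice this needs no counterterms; the renormalization limit discussed above is about what has to be subtracted when the noise is mollified and the spacing is sent to zero.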

All this indirection about path-integration is simply due to the lack of formalism for speaking about probability on infinite spaces, because measure theory is buried in annoying conventions from the bad old days when people were worried that probability would not be universal, so that the concept of "random real number" might fail to make sense. Now that we know it makes sense, it is possible to radically simplify measure theory by throwing away the parts that restrict the set-operations you are allowed to perform on measurable sets.

Infinite-dimensional probability is exactly what the path integral is talking about. It is a well-defined concept because I can do it on a computer, the large-volume limits produce no problems except as far as sigma-algebras are concerned, and the convergence to the continuum is an entirely equivalent question, as Wilson showed (the continuum limit is the second-order transition in the infinite-volume lattice theory, or regulated theory).

You can see the convergence with your own eyes, and you can demonstrate the convergence using probabilistic estimates, and anytime such a construction is nonrigorous, it is mostly because mathematicians made their lives so difficult regarding probability so many years ago.

I should add that the analytic continuation used to define the field theory has been understood completely rigorously since the 1960s, and is covered well in Streater and Wightman via the reconstruction theorem.

Regarding "no Fock space supports an interacting theory", Arnold understands in and out Fock spaces, he meant "no free-field Fock space supports an interacting theory" and this is true for interactions in relativistic theories, the nontrivial kind, the kind that shift operator dimensions.

The work of Hairer is interesting, but the results concerning QFT were already obtained in the seventies by Glimm and Jaffe and others, as he says in his paper. His point of view may be a good direction, but it is just one possible direction. You propose your path-integral view as an absolute truth, stubbornly opposed by stupid mathematicians (except Hairer). I don't like supposed absolute truths, sorry.

Also, the measurability of all sets is possible only if you do not have the axiom of choice, and is equivalent to the existence of a very large cardinal, if I remember correctly. Obviously, if computability is the foundation of your beliefs, there is no way of arguing with you in a rigorous way about the problems of not having the axiom of choice for a lot of basic mathematical structures commonly used also by physicists.

Anyway, it is always interesting to see different points of view, even absolutist ones like yours ;-)

I am an absolutist and you should be too. If someone gives a rigorous construction of a field theory without simultaneously defining the corresponding path integral, it is negative progress; better they had done nothing. The path integral is what you are simulating, it's how you think about it, and it's how you prove things about it. Everything else is second-rate hooey. It's like all the stuff people did on areas and volumes before Cavalieri and Leibniz; it creates the illusion of understanding in order to bury a method that is clearer, but superficially harder to make precise. Of course, in the case of calculus we know it's much easier to do integrals than to prove exhaustion bounds on approximations.

Not all of Hairer's results are reproductions of Glimm and Jaffe. In particular, he can deal with theories with no vacuum state, like inverted $\phi^4$ without local difficulty, and he can deal with nonlocal theories, and his construction is generalizable without regard to theory or regulator. The construction also looks generalizable to the critical case of interest.

Regarding "axiom of choice", and "large cardinal", and all this nonsense, the way in which the measurable universe is proved consistent is by showing that you don't get a contradiction when you add randomly chosen reals relative to the old model you started with (this is Solovay's "random forcing"). The entire set-theoretic model is countable, as usual in logic, and there is no contradiction with intuition, or with the choice theorems you like. The axiom of choice for the continuum is not used in physics, it isn't even used in mathematics, except for producing paradoxes. The only reason you see a "large cardinal" is because it is a relative consistency proof, which requires going up the Godel-chain of theories a little bit to establish. The large cardinal involved asserts that there exists a set which models ZFC, and it can't be controversial at all. But I prefer to be more absolutist here too, and say that uncountable sets like the reals should not even be considered sets, but proper classes with a measurability axiom. That way people won't get the urge to well-order them and start doing nonsense like this.

The only point of measurable universes is to be able to speak of naive probabilistic constructions using random values without worrying about whether they make sense, since when all sets are measurable, all set theoretic constructions which can be applied to real values with assured convergence can be applied without modification to random values, as long as the convergence is sure with probability one.

This includes taking a functional limit of approximations to a distribution. If you can show that with probability 1 you converge to a distribution, you are finished with the construction. Contrast this with the ridiculous nonsense in rigorous field theory texts for constructing the sigma-algebra for the measure in this case.

You know, physics and mathematics are all about proving things and making effective predictions. 

If you have understood everything perfectly, and that is absolute truth, opposed to ridiculous nonsense, I expect you are able to prove a lot of interesting results using your point of view...

Good luck with that! I am eager to see new groundbreaking developments.

Ah, and also: "the axiom of choice for the continuum" may not be used in physics, but rest assured that the Hahn-Banach theorem is quite central. Good luck in defining e.g. distributions (with interesting properties) without it. In ZF+Hahn-Banach there are non-measurable sets. This again leads to the previous point: what are you able to prove and predict in the Solovay model, or when the uncountable collections are proper classes?

It is easy to judge the work done by others and classify it as junk or unimportant or nonsense. It is also not worth it. What I suggest to you is to provide new interesting contributions, and to prove the others wrong or go beyond what they can do. Otherwise, everything you say is just a matter of your personal taste, and stubborn opinions.

Generalize the theory of Hairer to the critical case, and to a real time field theory. Then I will be the first to advertise your results!

In my personal opinion there is absolutely nothing wrong with people attacking, for example, different QFTs from a more mathematical/formal/rigorous point of view; it is NOT all nonsense or junk. Even investigating, from a physics point of view, not quite realistic simplified or toy models is often useful; physicists do this too, for example when studying SYM theories with a high level of SUSY ...

Also, we should not forget that mathematicians interested in theoretical physics are welcome on PO too, as said for example here, and their work is on topic ...

@Dilaton: What's this about? Of course rigorous work is welcome.

I am describing the central limitations of approaches that don't do path integration. The best rigorous work defines path integration, but there is nice rigorous work in other directions too.

But here you have demonstrably wrong claims--- namely that two specific rigorous papers are refuting the claim that interacting field theories can't be unitarily related to free ones. These papers do nothing of the sort, although the rigorous language can make a barrier to seeing this.

Further, there is a second claim that path integration is ill-defined, due to the discomfort with limiting measure, i.e. randomness with infinitely many random choices. It is important to explain that this is completely fine, that there is no paradox at all in the concept of infinitely many random picks, as was established by Solovay and others.

These discussions are not meant to exclude any point of view.

The Hahn-Banach theorem for the cases of interest to physics does not require continuum choice, and neither does distribution theory. This is stupid propaganda from mathematicians. The Hahn-Banach theorem that is used in all the physics cases (and in all the useful mathematics cases) is for separable Hilbert spaces, and this version only uses countable or dependent choice, not choice on the continuum. Choice on the continuum is a monstrosity; it is never used.

People have been saying the same wrong things for nearly 100 years; they need to be knocked out of the delirium. The theorems will come, but people have to get used to the fact that probability makes sense first. Then the problems will disappear the same way they did in calculus, by people getting comfortable that there are no paradoxes.

The Hahn-Banach theorem does not require a scalar product structure at all, and it serves a lot more purposes than separable Hilbert spaces in functional analysis. It is evident you are not well versed in this stuff. And it does not require the axiom of choice, as I said, but nevertheless if you consider ZF+Hahn-Banach you have non-measurable sets. But you are extremely blind to what you do not want to see.

I am amazed we have conspiracy theorists even on mathematics/physics subjects. Well, not so amazed actually... good luck with the production of new fantastic results that apply your point of view.

And please, do not say there are "demonstrably wrong claims" if you are not capable of giving a (mathematical) proof. And if you are capable of doing so, provide it (but a precise one, not opinionated blah-blahs). If you think you cannot, or "do not have time" to do so, respect the position of others and do not make such claims. You know, scientific credibility is not earned by shouting louder on a physics blog. That's why I will not continue this argument further.

And to be completely clear: my first claim, related to the assertions in Arnold's answer, was that there are well-defined interacting theories in Fock spaces, at least in some (simple) models; this is opposed to the statement (I'm citing) "no Fock space supports an interacting quantum field theory". Maybe that was not what he intended, but if you make a (scientific) statement, it will be read for what it is, not for what is supposedly omitted.

The second claim I made was that there are subtleties in defining path integrals in a rigorous way, which involve knowledge of stochastic integration and so on. Not that it is not possible.

@yuggib: On the contrary, it is clear that it is you who are not well versed in AC considerations, or rather, that you understood them through the social process of listening to other people. The Hahn-Banach theorem, in the cases where it is applied, never uses anything more than countable or dependent choice (the easiest way to see this is that it is possible to translate any reasonable effective construction to a countable model of set theory without losing anything).

I don't give a hoot about "scientific credibility". My target audience is myself twenty years ago, to make sure the nonsense ends. It is important to shout loudly on internet forums, because you are repeating the same tired dogma that people repeat a million times; it is passed as obnoxious hearsay from mathematical physicist to mathematical physicist, and I only tell you stuff that is produced independently through thinking. That's why it sounds all wrong to your ears. You just need to clean out your ears.

Here is an extremely simple rigorous result (clunkily proved, and not at all difficult) which uses this point of view: http://mathoverflow.net/questions/49351/does-the-fact-that-this-vector-space-is-not-isomorphic-to-its-double-dual-requir (my answer, it uses this point of view throughout)

Because it is using the measurable universe, it is proving what a typical mathematician would call a "relative consistency result". But of course from my point of view, where it is simply axiomatically true that every subset of R is measurable, it is proving that the double dual of the space of finite sequences of real numbers is itself. Similar constructions can be used to show other similar results.

I could redo a standard measure theory book from the point of view of a Solovay universe. The sticking point for me was filtrations, but I learned how to translate those properly to the "everything is measurable" perspective a few days ago. There's a nice generalization there for field theory if you are interested.

Look, I am not repeating a tired dogma; I am just stating that if you change the axioms, you have to prove everything from scratch (and I expect it is not possible to prove everything, since the theories are inequivalent). And I would like to see it done before adopting the existence of an inaccessible cardinal as the new tired dogma. I am not saying I don't believe it is possible; I am saying that I want proofs that everything mathematically important (say, for the description of the physical world) is still there in the different axiomatic system.

Just a clarification:

In the proof you cited, it needs to be clarified whether you use the ultrafilter lemma:

"Therefore the nonzeroing sets form a nonprincipal filter extending the finite-complement filter.Using dependent choice, either you have an infinite sequence of disjoint restrictions of v S1, S2, S3, ... which are nonzeroing, or the restriction onto one of the sets makes an ultrafilter."

It is not completely clear to me: how do you pass from the non-principal filter to the ultrafilter? Are you using the fact that every filter is a subset of an ultrafilter? You cannot use the ultrafilter lemma because it implies the existence of non-measurable sets. However, there is another proof that does not use ultrafilters below yours, so I assume the result holds without the ultrafilter lemma.

A final remark/question:

The space of real numbers has a very rich structure. But how do you characterize the (topological) dual of spaces with a lot less structure, without Hahn-Banach? No one assures you that the dual is not trivial. Take the space of rapidly decreasing functions. This is a Fréchet space, with a family of seminorms. Its topological dual is very important in physical applications, because it is the space of distributions $\mathscr{S}'$. Now, are you sure that it is possible to prove that $\mathscr{S}'$ separates points on $\mathscr{S}$ without Hahn-Banach (or the ultrafilter lemma, i.e. in your theory where every set is measurable)?

I don't know, maybe it is possible. Maybe you do not have sufficient structure to do that in a constructive way, and without ultrafilters. But you see, there is a lot of stuff that has to be proved from scratch, and the more it lies on the boundary of where Hahn-Banach is really necessary, the more difficult it will be to prove the result by other means; and at a certain point it becomes impossible (because the theories are inequivalent). It seems to me that you have a very long and difficult road ahead to complete the program, and even if you are (over)confident it is possible, it cannot be taken for granted. So I'm not a believer in old mathematics; I am a skeptic, as any scientist is. But I agree with the proofs (the ones I know) given using ZFC; I await the reproduction of (the majority of) the known results using ZF+dependent choice+inaccessible cardinal to see if I agree with them also.

Regarding the ultrafilter question, I am not using the ultrafilter lemma, I am using its explicit negation in this particular case. I gave a procedure to pass from one nonzeroing set to a nonzeroing subset which is a sequential procedure, so it only uses dependent choice, and which has to continue forever because the only way it can terminate is if one of the subsets defines an ultrafilter, and that can't happen, because there is no ultrafilter. This part I hacked together to finish the proof after the earlier, simpler methods didn't completely work to exhaust the possibilities, and this is why Blass's proof is ultimately cleaner. Someone pointed out to me in the comments that it is easy to prove "no ultrafilter" from "everything measurable", so you can certainly inline this proof into my proof and make the argument cleaner, but Blass's argument is probably cleaner still, so I lost the motivation to do it.

Contrary to one's immediate intuition, every old result is preserved in the new way of thinking; it just gets a slight reinterpretation. The way you do this is by embedding an L-submodel in the Solovay universe, where L is the Godel constructible universe. L is the simplest logical model of ZF, starting from the ordinals required for ZF and the empty set, and it is also a model of ZFC. Every ZF set-theory universe has an L-model sitting inside it which obeys the usual AC. In this L-submodel every ordinary theorem holds, there are non-measurable sets, the Hahn-Banach theorem is true, etc. So all the old theorems become L-theorems, and all the objects you construct in this way you label with a subscript "L". When you embed the submodel in the Solovay universe, there are things that are absolute, meaning they don't change at all, and these include the integers and all countable constructions. There are other things that lift to sensible things in the full universe; for example, all the measurable sets lift to sensible measurable sets in the new universe with the same measure, but they get extended with new points. By contrast, all the nonmeasurable sets in L lift to measure-zero dust sets in the full universe.

The intuition is just that the L-universe is countable and when you make R measurable, R is getting forced to be bigger than the whole previous universe, i.e. a randomly chosen real number is always outside of L, and an infinite random sequence of numbers can be used to model L. This is also the intuition for why inaccessibles are required for the relative consistency--- when you extend the model to make every subset of R measurable, the new measurable R is strong enough to model the whole old universe as a set, and then do set theoretic operations on this.

Yes, I agree that this approach does require rethinking, and it is a considerable amount of work, but just because people have avoided this work for 50 years doesn't make it less necessary. Why would you force a physicist (like me) to sit down and learn obscure and jargonny logic literature just in order to make sense of the simplest path integral constructions? Even though I like it now, it's not something I enjoyed at first.

There is a great simplification gained when every subset is measurable--- any operation you can do on real numbers you can automatically do on random variables, simply because every set operation produces a still-measurable set. This means that the random variables are no longer second-class citizens in the real-number system, and any deterministic construction can be lifted to a probabilistic construction without a separate analysis of whether the operations involved are measurable.

This also means that you can straightforwardly define limiting procedures using random variables, the same as you do for ordinary real numbers, because you never reach any contradiction from any set operation. These operations produce objects which are in general not L-objects, so if you want to translate these operations to the ordinary ZFC universe, it is a pain in the neck, and this is why the construction of even the free field theory, say in the rigorous work of Sheffield, requires a long measure-theoretic detour (which was not done by Sheffield, but by a previous author), even though the construction involved is ultimately very simple in terms of random variables (it's "pick infinitely many Gaussian random variables and Fourier transform").

Hairer just sidestepped every problem of measure, really by cheating. He simply defined the approximating SPDE solutions and took the limit, working in a well-developed "rigorous" field where people simply stopped worrying about measure paradoxes a long time ago. As far as I can see, the statisticians never resolved the measure paradoxes; they never explicitly work in a measurable universe; they simply made a social compact among themselves to ignore them, and eventually their results became important enough that nobody notices that the results really only hold in a measurable universe. The way they ignore measure paradoxes is by speaking about random variables as if they were taking actual values (this is something intuitive you do also, everyone does it), and then taking limits using estimates on random variables, and mixing in theorems proved about the convergence of ordinary real numbers. In ordinary mathematics, every theorem about real-number convergence with given estimates needs to be explicitly reproven for random variables to show that no nonmeasurable sets are produced in the proof.

It is possible that I am wrong, and the statisticians simply reproved all the theorems about real numbers for the random variable case, but I don't see where this happened. It is much more likely that they used their intuition to transfer all the theorems to random variables without worrying about paradoxes, and this simply means that they are working in a universe where every set is measurable.

Regarding your specific question about separating fast-falloff test functions using distributions in a measurable universe, I don't know the answer in detail, and I agree that it is a very good question (I think now, after you bring it up, that it is the central question of such approaches). What I know automatically is that the L-version of the theorem holds, so that L-test-functions are separated using L-distributions, and that's how I would state the result provisionally, before determining if it is true in the full universe. What I don't know is whether this extends to the measurable case, where one can also consider additional random fast-falloff functions (and random distributions). I believe the theorem should hold here, just because there is an explicit countable basis for the test-functions and an explicit continuity, which guarantees that once you know the action of L-distributions on L-functions, you can extend to the action on all (measurable) test-functions by continuity, but it is important to be sure, however, and have a general theorem for peace of mind. The reason this example is different from the linked double-dual question (where certain extensions stop being well defined in the measurable universe) is that in the double-dual case there is no continuity, so that the double dual needs to be defined even on very wild sequences of the dual space, these are the Gaussian random variable sequences with fast-growing norm.

Sure enough, the Solovay model is intriguing. As I see/understand it, the (rigorous) treatment of stochastic processes becomes easier and in a sense more satisfactory; the price you pay is that functional analysis (and to some degree also algebra and geometry, because of the "geometric" Hahn-Banach and Boolean prime ideal theorems) becomes (at least) messier.

This is a matter of what you need. To formulate path integrals, the Solovay model may be better; then giving meaning to the results as functional-analytic or geometric objects may be painful, because ultimately you will have to see what still holds in L. I don't know exactly, I am just speculating, but I'm afraid (a geometric version of) Hahn-Banach is also used in the theory of Lie groups. If so, then also in analyzing symmetries (e.g. of the action) you may have to be careful about the whole universe.

As I said, it is an intriguing point of view for sure, but you have to pay attention to a lot of annoying (but maybe very important) details. Obviously, this is true in almost any theory! ;-)

The context of the discussion (taken over from the other thread mentioned in the OP) was more restrictive and excludes your examples. I added a corresponding statement at the top of my answer.
