  Is there a Lagrangian formulation of statistical mechanics?

+ 10 like - 0 dislike
6228 views

In statistical mechanics, we usually think in terms of the Hamiltonian formalism. At a particular time $t$, the system is in a particular state, where "state" means the generalised coordinates and momenta for a potentially very large number of particles. (I'm interested primarily in classical systems for the sake of this question.) Since this state cannot be known precisely, we consider an ensemble of systems. By integrating each point in this ensemble forward in time (or, more often, by considering what would happen if we were able to perform such an integral), we deduce results about the ensemble's macroscopic behaviour. Using the Hamiltonian formalism is useful in particular because it gives us the concept of phase space volume, which is conserved under time evolution for an isolated system.
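
To make the phase-space-volume statement concrete, here is a minimal numerical sketch (assuming a pendulum Hamiltonian $H = p^2/2 - \cos q$, a symplectic leapfrog integrator, and purely illustrative parameter values): it transports the corners of a small square in phase space and checks that its area is unchanged.

```python
# Minimal sketch: Liouville's theorem for a pendulum, H = p^2/2 - cos(q).
# A small phase-space square is evolved with leapfrog (a symplectic integrator)
# and its area is compared before and after. All parameters are illustrative.
import numpy as np

def leapfrog(q, p, dt, steps):
    """Leapfrog integration of dq/dt = p, dp/dt = -sin(q)."""
    for _ in range(steps):
        p = p - 0.5 * dt * np.sin(q)
        q = q + dt * p
        p = p - 0.5 * dt * np.sin(q)
    return q, p

def area(poly):
    """Shoelace formula for the area of a polygon given by its vertices."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Corners of a small square around (q, p) = (1.0, 0.5).
eps = 1e-4
corners = np.array([[1.0, 0.5], [1.0 + eps, 0.5],
                    [1.0 + eps, 0.5 + eps], [1.0, 0.5 + eps]])
evolved = np.array([leapfrog(q, p, dt=0.01, steps=1000) for q, p in corners])

print(area(corners), area(evolved))   # the two areas agree closely
```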

It seems to me that we could also consider ensembles within the Lagrangian formalism. In this case we would have a probability distribution over initial values of the coordinates (but not their velocities), and another distribution over the final values of the coordinates (but not their velocities). (Actually I guess these would need to be two jointly distributed random variables, since there could easily be correlations between the two.) This would then lead to a probability distribution over the paths the system takes to get from one to the other. I have never seen this Lagrangian approach mentioned in statistical mechanics. I'm curious about whether the idea has been pursued, and whether it leads to any useful results. In particular, I'm interested in whether the idea of phase space volume has any direct meaning in terms of such a Lagrangian ensemble.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
asked Jan 18, 2013 in Theoretical Physics by Nathaniel (495 points) [ no revision ]
Most voted comments
One obvious problem with Lagrangians, though, would be that one cannot introduce a Lagrangian for massless particles, whereas a Hamiltonian would still exist. The same problem appears, perhaps, when it comes to counting internal degrees of freedom and doing quantum statistics.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Alexey Bobrick
@AlexeyBobrick I think that the distinction lies in describing the state in terms of the positions of the system at two different times, instead of position and momentum at the same time. The two descriptions are valid even for massless particles.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user becko
@becko The description is surely valid, at least formally. However, the dynamics in this case is to be described by an action (Lagrangian), which you cannot write for massless particles. Or can you?

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Alexey Bobrick
@VijayMurthy For Wilson RG, all spin models get cast into a continuous form, which means you get a Landau-Ginzburg-type Lagrangian.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Slaviks
@Slaviks, Thanks for the comment. One can write an action functional and need not do an RG. The OP asked for a Lagrangian description, not an RG. The MSR action functional can be written for particle systems too. So I don't get your comment. Perhaps I am missing something.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Vijay Murthy
Most recent comments
@VijayMurthy that looks interesting, and I'll look into it further. From those handwritten notes it looks like they're starting with some stochastic dynamics and then deriving something that looks like a path integral; whereas I'm hoping for something that starts with a classical Lagrangian and then derives a statistical ensemble based on it. But thanks, and I look forward to taking a closer look.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
I don't know the answer to your question, but must say I am intrigued to see where it leads. Feynman's advice was to try to understand things in as many ways as possible!

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Michael Brown

4 Answers

+ 5 like - 0 dislike

The transition between the Hamiltonian and Lagrangian formalisms in mechanics can be accomplished by means of the Hamilton-Jacobi theory. Consider for example a classical statistical ensemble on a phase space $(x,p)$ defined by:

A. The (initial) state of this ensemble is defined by a distribution function $f(x_0,p_0)$ satisfying the normalization condition:

$\displaystyle{\int f(x_0,p_0) dx_0dp_0 = 1}$

($(x_0,p_0)$ are the initial conditions)

B. The time evolution is governed by the Hamiltonian function $H(x,p, t)$.

According to the Hamilton-Jacobi theory, there exists a Hamilton-Jacobi phase function $S(x_0, x_1, t_0, t_1)$ satisfying the Hamilton-Jacobi equation:

$\displaystyle{\frac{\partial S}{\partial t}+H\left(x_1,\frac{\partial S}{\partial x_1}, t\right) = 0}$

(where $(x_1,p_1)$ are the coordinates and momenta at time $t$)

The momenta at the two end points can be derived from the Hamilton-Jacobi phase function:

$\displaystyle{p_1 = \frac{\partial S}{\partial x_1}, \qquad p_0 = -\frac{\partial S}{\partial x_0}}$

The problem of expressing the state of the system in terms of the initial and final coordinates then reduces to a problem of transforming probability distributions. We can define the state of the system in the initial and final coordinates as:

$\displaystyle{F_t(x_0, x_1) = f\left(x_0,-\frac{\partial S}{\partial x_0}(x_0, x_1, t) \right)}$

The transformation Jacobian is given by:

$\displaystyle{dx_0\, dp_0 = \left|\frac{\partial^2 S}{\partial x_0\,\partial x_1}\right| dx_0\, dx_1}$

and the normalization condition becomes:

$\displaystyle{\int F_t(x_0, x_1) \left|\frac{\partial^2 S}{\partial x_0\partial x_1}(x_0, x_1, t)\right| dx_0dx_1 = 1}$

In the general case, the joint distribution $F_t(x_0, x_1)$ will not be separable.
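
As a concrete check, take the free particle, for which $S(x_0,x_1,t) = \frac{m(x_1-x_0)^2}{2t}$, so that $p_0 = \frac{m(x_1-x_0)}{t}$ and $\left|\frac{\partial^2 S}{\partial x_0\partial x_1}\right| = \frac{m}{t}$. A minimal numerical sketch (assuming a Gaussian initial ensemble; the values of $m$ and $t$ are purely illustrative) verifies the normalization condition on a grid:

```python
# Minimal sketch: normalization of F_t(x0, x1) for a free particle, where
# S(x0, x1, t) = m (x1 - x0)^2 / (2 t), p0 = m (x1 - x0) / t and
# |d^2 S / dx0 dx1| = m / t. All parameter values are illustrative.
import numpy as np

m, t = 1.0, 2.0

def f(x0, p0):
    """Initial phase-space density: independent unit Gaussians in x0 and p0."""
    return np.exp(-0.5 * (x0**2 + p0**2)) / (2.0 * np.pi)

def F(x0, x1):
    """Joint density over initial/final coordinates, F_t(x0, x1) = f(x0, p0(x0, x1))."""
    p0 = m * (x1 - x0) / t
    return f(x0, p0)

jacobian = m / t                     # |d^2 S / dx0 dx1| for the free particle

x = np.linspace(-10.0, 10.0, 801)
X0, X1 = np.meshgrid(x, x, indexing="ij")
dx = x[1] - x[0]
norm = np.sum(F(X0, X1)) * jacobian * dx * dx
print(norm)                          # ~1.0, as required by the normalization condition
```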

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user David Bar Moshe
answered Feb 28, 2013 by David Bar Moshe (4,355 points) [ no revision ]
Most voted comments
@David I'm not sure I understand the dependence of $S$ on $t$ in this notation. It seems like $S$ should really be a function $S(x_0, x_1, t_0, t_1)$ - but then I'm not sure what to do with $\partial S/\partial t$. Unless $t$ is actually what I'm calling $t_1$, so we're varying the time of the end point of the path?

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
Oops, I get it - I missed "$(x_1, p_1)$ are the coordinates and momenta at time $t$", so $t$ is indeed what I called $t_1$.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
...but then, shouldn't the Hamilton-Jacobi equation be written $$\frac{\partial S}{\partial t} + H\left( x_1, \frac{\partial S}{\partial x_1}, t \right)$$ in this notation? (I think so but don't want to correct it in case I've misunderstood something)

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
@Nathaniel. You are correct, the differentials in the Hamilton-Jacobi equation should be with respect to the end point. Also, the time dependence that I wrote is not the most general. In the case of an explicitly time-varying Hamiltonian, the phase function depends on $t_0$ and $t_1$ and not only on their difference. In this case the time differentiation is also with respect to the end point. In fact you may choose either one of the boundary points.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user David Bar Moshe
@David thanks, that's exactly what I thought. I decided to award you a bounty instead of accepting the answer, because it's still possible that someone will know of a work in which this is applied to statistical mechanics, and I don't want them to be put off by the green tick. Your answer puts me well on the way to working out applications in stat. mech. for myself, so I'm very grateful. Annoyingly, I have to wait 24 hours before I can actually award the bounty to you.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
Most recent comments
@Nathaniel when $x_0$ and $x_1$ are the boundary points of a classical trajectory (satisfying Hamilton's equations), then $S$ is the value of the classical action. Of course we can use this function for any two points of the configuration space, not necessarily along a classical trajectory. In this case the Hamilton-Jacobi phase function generates canonical transformations (for this reason, some authors call it the generating function). The basic application of this function that I know of is in the solution of Hamilton's equations given boundary rather than initial conditions.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user David Bar Moshe
@Nathaniel cont. In this case, instead of an iterative procedure of guessing the initial momenta and checking the boundary condition at the second end point, we can solve the Hamilton-Jacobi equation. The price is that it is a partial differential equation. A second application is in the discrete formulation of mechanics (we can imagine the time $t$ to be a small time step). I haven't seen an application to statistical mechanics. This is why I thought that the question is very original. But I haven't searched hard enough.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user David Bar Moshe
+ 3 like - 0 dislike

There is a field-theory version of statistical physics. The temperature plays the role of an imaginary time. In this way we can formulate the theory as a path integral, with the action determined by a Lagrangian.
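
In standard conventions, this can be made concrete by writing the canonical partition function as a Euclidean path integral over configurations that are periodic in imaginary time $\tau = it$ with period $\beta\hbar$,

$$ Z = \mathrm{Tr}\, e^{-\beta H} = \int_{\phi(0)=\phi(\beta\hbar)}\mathcal{D}\phi\, e^{-S_E[\phi]/\hbar}, $$

so the inverse temperature sets the extent of the compactified imaginary-time direction.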

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Xiao-Qi Sun
answered Mar 5, 2013 by Xiao-Qi Sun (30 points) [ no revision ]
I know about this. I wish I understood it better, but I don't think it's what I'm looking for. At least, the connection between the two ideas is not obvious.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
+ 2 like - 0 dislike

I am not sure if this is what you are up to (it is related to what Xiao-Qi Sun said), but I'll give it a try too ...

At the beginning of Chapter V.2 of his QFT in a Nutshell, Anthony Zee explains how classical statistical mechanics (characterized by the corresponding partition function involving the Hamilton function) in $d$-dimensional space is related to Euclidean field theory (characterized by the corresponding generating functional or path integral involving the Lagrangian).

To see this relationship, consider for example the Minkowskian path integral of a scalar field

$$ (1) \,\, \cal{Z} = \int\cal{D}\phi e^{(i/\hbar)\int d^dx[\frac{1}{2}(\partial\phi)^2-V(\phi)]} = \int\cal{D}\phi e^{(i/\hbar)\int d^dx\cal{L}(\phi)} = \int\cal{D}\phi e^{(i/\hbar)S(\phi)} $$

Upon Wick rotation, the Lagrange density $\cal{L}(\phi)$ turns into the energy density and the action $S(\phi)$ gets replaced by the energy functional $\cal E(\phi)$ of the field $\phi$

$$ (2) \,\, \cal{Z} = \int\cal{D}\phi e^{(-1/\hbar)\int d^d_Ex[\frac{1}{2}(\partial\phi)^2+V(\phi)]} = \int\cal{D}\phi e^{(-1/\hbar)\cal{E}(\phi)} $$

with

$$ \cal E(\phi) = \int d^d_Ex[\frac{1}{2}(\partial\phi)^2+V(\phi)] $$

This can now be compared to the classical statistical mechanics of an $N$-particle system with the energy

$$ E(p,q) = \sum_i \frac{1}{2m}p_i^2+V(q_1,q_2,\cdots,q_N) $$

and the corresponding partition function

$$ Z = \prod_i\int dp_i dq_i e^{-\beta E(p,q)} $$

Integrating over the momenta $p_i$ one obtains the reduced partition function

$$ Z = \prod_i\int dq_i e^{-\beta V(q_1,q_2,\cdots,q_N)} $$

Following the usual procedure to obtain the field theory corresponding to this reduced partition function, namely letting $i\rightarrow x$, $q_i \rightarrow \phi(x)$, and identifying $\hbar = 1/\beta = k_B T$, one finds that it takes exactly the same form as the Euclidean path integral (2).

So it can finally be seen that in this example, the (reduced) partition function of an $N$-particle system in $d$-dimensional space corresponds to the path integral of a scalar field in $d$-dimensional (Euclidean) spacetime.

These arguments can be further generalized to obtain a path integral representation of the quantum partition function, finite-temperature Feynman diagrams, and so on ...

If I understand this right, this line of thought relating statistical mechanics to field theory is applied, for example, in topics like the nonequilibrium functional renormalization group, or in AdS/CFT to relate the correlation functions on the CFT side to the string amplitudes on the AdS side.
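
To see the identification $q_i \rightarrow \phi(x)$, $\hbar = 1/\beta$ at work numerically, here is a minimal Metropolis sketch (assuming a harmonic on-site potential and purely illustrative parameter values): the weight $e^{-S_E[\phi]/\hbar}$ of the discretized one-dimensional Euclidean action is sampled exactly as one would sample the classical Boltzmann weight $e^{-\beta V(q_1,\cdots,q_N)}$ of a chain of nearest-neighbour coupled coordinates.

```python
# Minimal sketch: the lattice Euclidean action of a 1d scalar field,
# S_E = sum_i a * [ (phi_{i+1} - phi_i)^2 / (2 a^2) + V(phi_i) ],
# weighted by exp(-S_E / hbar), is the Boltzmann weight exp(-beta * V_chain)
# of a classical chain of coupled coordinates q_i = phi(x_i) with beta = 1/hbar.
# All parameter choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, a, hbar = 64, 1.0, 1.0            # lattice size, spacing, and hbar = 1/beta
V = lambda phi: 0.5 * phi**2         # on-site potential (harmonic, for illustration)

def action(phi):
    """Euclidean lattice action with periodic boundary conditions."""
    grad = (np.roll(phi, -1) - phi) / a
    return np.sum(a * (0.5 * grad**2 + V(phi)))

phi = np.zeros(N)
S = action(phi)
samples = []
for sweep in range(5000):
    for i in range(N):
        old = phi[i]
        phi[i] += rng.normal(scale=0.5)
        S_new = action(phi)
        if rng.random() < np.exp(-(S_new - S) / hbar):   # Metropolis accept/reject
            S = S_new
        else:
            phi[i] = old
    if sweep > 500:
        samples.append(np.mean(phi**2))

print(np.mean(samples))              # estimate of <phi^2> in this "classical chain" ensemble
```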

answered May 9, 2013 by Dilaton (6,240 points) [ revision history ]
Most voted comments
Since, for example, turbulence theory can be described by nonequilibrium statistical mechanics (in the MaxEnt formalism one would have the energy or enstrophy flux, which are constant on a scale-invariant subrange, as additional relevant variables appearing in the nonequilibrium distribution function), there must be a connection between the functional integrals of the Navier-Stokes equations and certain statistical mechanics partition functions too, I think. But I have not yet seen this worked out in much detail.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Dilaton
Maybe the relationship appears in this paper, which should have a paragraph about turbulence too. I have not yet had time to look at it more closely.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Dilaton
Are you talking about the work of Roddy Dewar by any chance? This talk of classical path integrals and MaxEnt puts me in mind of his work.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
@Nathaniel what exactly is the work of Roddy Dewar about? I don't know it ... I just know about MaxEnt for both classical and quantum mechanical systems from a nonequilibrium statistics course I have taken (and this paper, which I found quite instructive), and how to find a path integral formulation of the Navier-Stokes equations I learned from this for the first time. The action of the Navier-Stokes equations involves some crazy Grassmann and ghost fields too, ha ha ...

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Dilaton
let us continue this discussion in chat

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
Most recent comments
(For some reason my inbox wasn't pinged when you answered. I wonder why.)

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
@Nathaniel I wrote the first version of this post way after midnight, so I had to hide it away first until I had proofread and completed it in the morning ...

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Dilaton
+ 0 like - 0 dislike

The Hamiltonian formulation of classical dynamics gives rise to a very strong and important theorem in statistical mechanics, namely the Liouville theorem. As you probably know already, it states that the probability density $\rho(\mathbf{r}, \mathbf{p})$ to be around a given point $(\mathbf{r}, \mathbf{p})$ in phase space follows the equation of evolution:

$\frac{\partial \rho}{\partial t} = \{H, \rho \}$ where $\{ \cdot, \cdot\}$ denotes the Poisson bracket.

This equation is equivalent to Hamilton's equations of motion for $(\mathbf{r}, \mathbf{p})$.

Now, when you look at macrovariables, it can be worked out (it was first done by Zwanzig, I think) that the Liouville equation (for the microvariables) gives rise to a Fokker-Planck equation for these macrovariables. It is in spirit very similar to the Liouville equation, except that there is a stochastic component in it whose simplest characteristic is to add a second space derivative on the right-hand side of the evolution equation.

Now, if you know your maths, you also know that any Fokker-Planck equation can be associated with a set of stochastic equations for the macrovariables under study (one very famous example being the Langevin equation) ... and we are back to something very close to Hamilton's equations, but for macrovariables.

In case you were wondering if there is a least-action principle for these stochastic equations, I am not aware of one. I think they are very similar to the Schrödinger equation in this respect. However, what it means is that the macrovariable propagators can indeed be expressed as path integrals. The Wiener measure is one typical case.

Note that my answer is focused on Hamiltonian and Lagrangian dynamics in the classical sense, where they are used to compute trajectories in time.

In classical statistical mechanics, you can also find a Lagrangian approach akin to what is done in, say, QFT. This would be the Landau-Ginzburg approach to phase transitions and complex systems in general.
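
To illustrate the Fokker-Planck/Langevin correspondence mentioned above, here is a minimal sketch (assuming an overdamped Langevin equation $\dot q = -V'(q) + \sqrt{2k_BT}\,\xi(t)$ with a double-well potential and purely illustrative parameter values): an Euler-Maruyama integration samples the stationary solution $\rho(q) \propto e^{-V(q)/k_BT}$ of the corresponding Fokker-Planck equation $\partial_t\rho = \partial_q\left[V'(q)\rho\right] + k_BT\,\partial_q^2\rho$.

```python
# Minimal sketch: Euler-Maruyama integration of the overdamped Langevin equation
# dq = -V'(q) dt + sqrt(2 kT) dW for the double well V(q) = q^4/4 - q^2/2, and a
# comparison of the sampled histogram with the stationary Fokker-Planck solution
# rho(q) ~ exp(-V(q)/kT). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
kT, dt, nsteps = 0.5, 1e-3, 500_000
Vprime = lambda q: q**3 - q          # force for V(q) = q^4/4 - q^2/2

q = 0.0
traj = np.empty(nsteps)
for n in range(nsteps):
    q += -Vprime(q) * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
    traj[n] = q

hist, edges = np.histogram(traj, bins=60, range=(-2.5, 2.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
boltzmann = np.exp(-(centers**4 / 4 - centers**2 / 2) / kT)
boltzmann /= np.sum(boltzmann) * (centers[1] - centers[0])   # normalize on the same grid
print(np.max(np.abs(hist - boltzmann)))   # small once the sampling has converged
```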

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user gatsu
answered Mar 23, 2013 by gatsu (40 points) [ no revision ]
Thanks for the answer, but my question is about whether there is a formalism that uses the least action principle applied to the microvariables to directly construct an ensemble over phase space paths, without first deriving Hamiltonian equations of motion and then a Fokker-Planck equation.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
It sounds difficult for at least one reason. The least-action principle involves a functional of the generalized coordinates $\mathbf{q}(t)$ only, so there is no such thing as phase space to begin with. Moreover, as far as I know, the Lagrangian formalism is not suited to give the evolution of an arbitrary function of $\mathbf{q}(t)$, so basically I don't quite see how it would work ... but maybe you have something more precise in mind. Note, however, that there is something called topological entropy that counts the number of possible paths in the system.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user gatsu
The idea is that you would start with a joint distribution of $\mathbf{q}(t_0)$ and $\mathbf{q}(t_1)$, which should then uniquely specify the distribution over paths taken to get from initial to final points. That much is clear, but I'm interested in knowing what follows from such a line of reasoning. One possible application might be to give a concise answer to this question.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user Nathaniel
Ok, I don't know very much about this question, but you might be interested in the work of Fabrice Debbasch. Publication 20, although maybe too simple for you, may give you some hints on the directions to take.

This post imported from StackExchange Physics at 2014-03-17 04:24 (UCT), posted by SE-user gatsu

user contributions licensed under cc by-sa 3.0 with attribution required
