In various local-update Monte Carlo schemes (such as the Metropolis algorithm), there is a general phenomenon known as critical slowing down: convergence to thermal equilibrium becomes very slow near the critical temperature. This is intuitively easy to understand: long-range correlations need to be established near the critical point, while local updates only change the system over a very short range at each step (a minimal sketch of the kind of local update I mean is below).
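For concreteness, here is a rough sketch of such a local-update scheme, single-spin-flip Metropolis for the 2D Ising model (the lattice size, temperature, and observable are just illustrative choices on my part):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One sweep of single-spin-flip Metropolis updates on a 2D Ising lattice."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy change from flipping spin (i, j), with periodic-boundary neighbours
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        # Accept the purely local flip with probability min(1, exp(-beta * dE))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
L, beta = 32, 0.44  # beta near the 2D Ising critical value ~0.4407
spins = rng.choice([-1, 1], size=(L, L))
mags = []
for sweep in range(5000):
    metropolis_sweep(spins, beta, rng)
    # The autocorrelation time of this magnetization series grows sharply near beta_c
    mags.append(spins.mean())
```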
I wonder whether there is a more mathematically precise way of proving this. In particular, every Monte Carlo method is essentially a Markov process whose transition matrix has the equilibrium distribution as the eigenvector with the largest eigenvalue (which is 1), so a slowdown has to mean that the second-largest eigenvalue comes extremely close to 1. How do we prove this, say, for the Metropolis algorithm?
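To spell out what I mean by "slowdown" (my notation, not taken from a particular reference): if $P$ is the transition matrix of the chain and $1 = \lambda_1 > \lambda_2 \geq \dots$ are its eigenvalues, the relaxation time is set by the spectral gap,
$$\tau_{\mathrm{rel}} \sim \frac{1}{1 - \lambda_2},$$
so the question is how to show rigorously that $1 - \lambda_2$ goes to zero for local-update dynamics as the temperature approaches $T_c$ (and the system size grows).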