# Why Does Lattice QCD Use Heavier Than Physical Masses For Calculations?

+ 8 like - 0 dislike
65 views

It is commonplace (e.g. here) for Lattice QCD calculations to be performed using reference masses (such as the pion mass) that are greater than the physical values of those quantities.

Sometimes, multiple calculations are done at various heavier values so as to extrapolate down to the physical value.

The problem with this is that QCD is not entirely scale independent, even though the QCD coupling constant is dimensionless.

For example, I've seen a credible claim that a bound dineutron state is stable at quark masses sufficiently higher than those measured experimentally (also here), even though bound dineutron states are not stable at physical masses.

I presume that Lattice QCD uses greater-than-physical masses because the calculations are harder at the physical masses than at heavier ones, but I have trouble understanding why this should be so mathematically.

Could someone please explain the reason that Lattice QCD calculations are routinely done at greater than physical masses, rather than at physical masses?

This post imported from StackExchange Physics at 2017-11-22 17:13 (UTC), posted by SE-user ohwilleke
retagged Nov 22, 2017
Perhaps best to ask the authors of the paper. From my Stat-Mech background, I can imagine that smaller masses lead to larger correlation lengths which would make it necessary to use a larger lattice (to prevent finite size artifacts from affecting the simulations) ...

This post imported from StackExchange Physics at 2017-11-22 17:13 (UTC), posted by SE-user Count Iblis
@CountIblis If it were particular to the paper, I'd ask them, but this seems to be generally true for almost all QCD papers.

This post imported from StackExchange Physics at 2017-11-22 17:13 (UTC), posted by SE-user ohwilleke

+ 9 like - 0 dislike

Lattice QCD calculations involve computing the inverse of the Dirac operator $\gamma\cdot D+m$. The difficulty of inverting an operator is controlled by its smallest eigenvalues, and computing the inverse of the Dirac operator becomes harder as $am\to 0$. The exact scaling of the computational cost depends on the algorithms. It was once feared that realistic simulations with physical quark masses would be prohibitively expensive, but after some algorithmic improvements things look much better. Currently $${\rm cost} \sim \left(\frac{1}{m}\right)^{(1-2)} \left(\frac{1}{a}\right)^{(4-6)} \left(L\right)^{(4-5)},$$ and simulations with physical masses are expensive, but doable.
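As a rough illustration of how steep the quark-mass factor alone is, here is a small Python sketch. The exponent choice (the upper end of the quoted $(1\text{-}2)$ range) and the use of $m_q \sim m_\pi^2$ are illustrative assumptions, not results from any particular code:

```python
# Illustrative cost comparison from the scaling law quoted above.
# Assumptions: quark-mass exponent p_mass at the top of the (1-2) range,
# and m_q ~ m_pi^2, so the mass factor alone scales like m_pi^(-2 * p_mass).

def relative_cost(m_pi_heavy, m_pi_phys, p_mass=2.0):
    """Cost(physical) / Cost(heavy) from the 1/m factor alone."""
    return (m_pi_heavy / m_pi_phys) ** (2.0 * p_mass)

ratio = relative_cost(800.0, 140.0)  # pion masses in MeV
print(f"quark-mass factor alone: ~{ratio:.0f}x more expensive at the physical point")
```

Even before the lattice-spacing and volume factors, the mass term alone accounts for roughly three orders of magnitude in cost between an 800 MeV and a 140 MeV pion.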

There is an additional problem with $m\to 0$, which is related to the fact that finite volume effects are controlled by $m_\pi L$ and $m_\pi^2\sim m_q$. This problem is not severe for physical masses, because $m_\pi^{-1}\sim 1.4$ fm is not that large.
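A quick way to see why $m_\pi L$ matters is to convert a rule-of-thumb bound like $m_\pi L \gtrsim 4$ into a box size in fm; the bound of 4 here is an assumption for illustration:

```python
HBARC = 197.327  # hbar*c in MeV*fm

def min_box_fm(m_pi_mev, m_pi_L_bound=4.0):
    """Smallest box L (in fm) satisfying the rule of thumb m_pi * L >= bound."""
    return m_pi_L_bound * HBARC / m_pi_mev

print(f"physical pion (140 MeV): L >= {min_box_fm(140.0):.1f} fm")
print(f"heavy pion (800 MeV):    L >= {min_box_fm(800.0):.1f} fm")
```

A lighter pion means a longer correlation length and hence a much larger box, which is the finite-volume cost referred to above.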

Regarding the neutron mass: $m_n-m_p$ is controlled by the difference of the up and down quark masses, in competition with the electromagnetic self energy of the proton. If you let both quark masses go to zero, then the neutron will eventually be lighter than the proton (and therefore stable). There is a magic range of quark masses for which $|m_n-m_p|<m_e$, so that both the neutron and the proton are stable.
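To make the "magic range" condition concrete, here is a minimal Python check using the measured masses (PDG round numbers); in the real world $m_n - m_p \approx 1.29$ MeV exceeds $m_e$, so the condition fails and the free neutron beta-decays:

```python
# PDG round numbers (MeV); all three are measured values.
M_NEUTRON, M_PROTON, M_ELECTRON = 939.565, 938.272, 0.511

def both_nucleons_stable(m_n, m_p, m_e):
    """The 'magic range': |m_n - m_p| < m_e forbids both beta decays."""
    return abs(m_n - m_p) < m_e

# In the real world the condition fails (1.293 MeV > 0.511 MeV),
# so the free neutron beta-decays.
print(both_nucleons_stable(M_NEUTRON, M_PROTON, M_ELECTRON))
```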

And the di-neutron: In the real world the deuteron is a shallow bound state, and the dineutron is just barely unbound. I don't think that this is a theorem, but intuition and numerical evidence suggest that the dineutron would become bound for heavier quark masses.

Finally, as noted by David, (multi) nucleon calculations suffer from a noise problem that is controlled by the quark mass. The signal-to-noise ratio in an $A$ nucleon correlator scales as $\exp(-A(m_N-3m_\pi/2)\tau)$, where $\tau$ is the time separation at which the correlator is computed. The typical $\tau$ we are interested in is a physical scale (something like the inverse of the separation between the ground state and the first excited state), and if anything it is larger for bigger $A$. This means that calculations for $A>1$ are typically done for unphysical quark masses.
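The $A$-dependence of this scaling is easy to evaluate numerically; a Python sketch with physical-point round numbers (chosen for illustration):

```python
import math

HBARC = 197.327  # hbar*c in MeV*fm

def snr(A, m_N_mev, m_pi_mev, tau_fm):
    """Signal-to-noise ~ exp[-A (m_N - 3 m_pi / 2) tau]."""
    gap = m_N_mev - 1.5 * m_pi_mev  # MeV per nucleon
    return math.exp(-A * gap * tau_fm / HBARC)

# Physical-point round numbers; each extra nucleon multiplies in the
# same exponential suppression:
for A in (1, 2, 3):
    print(f"A = {A}: S/N ~ {snr(A, 939.0, 140.0, 1.0):.1e} at tau = 1 fm")
```

At physical masses the exponent $m_N - 3m_\pi/2 \approx 730$ MeV, so every additional nucleon costs another factor of $\sim 40$ in signal-to-noise per fm of $\tau$.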

This post imported from StackExchange Physics at 2017-11-22 17:13 (UTC), posted by SE-user Thomas
answered Mar 28, 2017 by (130 points)
+ 7 like - 0 dislike

This is a supplement to Thomas's answer from March. I don't yet have enough reputation to add a comment, so let me write something not quite so brief but hopefully still clear.

Although certain lattice QCD calculations with physical masses are currently feasible (and underway), many projects still need to use heavier masses. This is particularly true of work involving multiple baryons like the dineutron studies mentioned in the question. For example, the recent arXiv:1706.06550 still uses ~800-MeV pions.

A significant factor complicating these calculations at lighter masses is a signal-to-noise problem. To over-simplify a 'classic' argument attributed to Peter Lepage: the signal in these calculations decays like $\exp[-m_N t]$, where $m_N$ is the mass of the hadron of interest (let's say a neutron) and $t$ is the time since its creation. The variance $\sigma^2(t)$, however, decays like $\exp[-m_L t]$, where $m_L$ is the mass of the lightest state that can be created by the square of the creation operator. The square of a neutron creation operator couples not only to a neutron--antineutron pair, but also to three pions, so $m_L = 3m_{\pi}$ and the noise $\sigma(t)$ falls only like $\exp[-3m_{\pi}t/2]$. Therefore one expects to see (and does see) the signal-to-noise ratio degrade like $\exp\left[-(m_N - 3m_{\pi}/2)t\right]$.

This problem essentially vanishes in the limit of infinite quark mass, $m_q \to \infty$, where we expect $m_{\pi} \simeq \frac{2}{3}m_N$ just from counting valence quarks. On the other hand, it's most severe in the opposite limit $m_q \to 0$, where $m_{\pi} \to 0$ while $m_N$ remains non-zero. Since no clever trick has so far been found to solve this problem (although some efforts are ongoing), the exponential degradation of the signal-to-noise ratio at lighter masses would naively require a compensating exponential increase in statistics and hence in computational cost. Heavier-than-physical masses move us farther from the worst case and make calculations feasible with existing technology.
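To put rough numbers on this, one can compare the exponent $m_N - 3m_{\pi}/2$ at the physical point with a heavy-pion ensemble. In this sketch the value $m_N \approx 1.6$ GeV at $m_{\pi} \approx 800$ MeV is a ballpark assumption (not a quoted result), and statistics are taken to grow like the inverse square of the signal-to-noise ratio:

```python
import math

HBARC = 197.327  # hbar*c in MeV*fm

def sn_gap(m_N_mev, m_pi_mev):
    """Exponent governing S/N decay: m_N - (3/2) m_pi, in MeV."""
    return m_N_mev - 1.5 * m_pi_mev

def extra_statistics(gap_light, gap_heavy, tau_fm):
    """Statistics grow like 1/(S/N)^2: factor needed to match the heavy-mass S/N."""
    return math.exp(2.0 * (gap_light - gap_heavy) * tau_fm / HBARC)

gap_phys  = sn_gap(939.0, 140.0)    # ~730 MeV at the physical point
gap_heavy = sn_gap(1600.0, 800.0)   # ~400 MeV; m_N here is an assumed round number
print(f"extra statistics at tau = 1 fm: ~{extra_statistics(gap_phys, gap_heavy, 1.0):.0f}x")
```

Even with these crude inputs, matching the heavy-mass signal quality at the physical point costs well over an order of magnitude in statistics per fm of $\tau$, and the factor compounds exponentially with $\tau$.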

As an aside, the alert reader may be wondering why we don't just make $t$ smaller (i.e., measure the nucleon(s) at very short times after their creation). This indeed has been an important source of progress. It's easier said than done, however, because the creation operators couple to all states with the appropriate quantum numbers. So one ends up with "excited state" artifacts that decay like $\exp[-m_H t]$ with $m_H > m_N$. Therefore $t$ needs to be large enough for the excited-state contaminations to decay away but small enough for the desired signals to remain visible above the noise. (This special range of $t$ is sometimes called the golden window.) In order to be able to use smaller $t$ folks have had to construct improved creation operators that have more overlap with the desired ground state and less overlap with the unwanted excited states. This has been done with significant success (mentioned briefly in Section III.A of the arXiv:1706.06550 linked above).
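A toy two-state model makes the golden-window tradeoff visible. Everything here (masses in lattice units, equal overlaps) is invented purely for illustration:

```python
import math

def correlator(t, m_N=0.5, m_H=0.9, b=1.0):
    """Toy two-state correlator C(t) = e^{-m_N t} + b e^{-m_H t} (lattice units)."""
    return math.exp(-m_N * t) + b * math.exp(-m_H * t)

def effective_mass(t, dt=1.0, **kw):
    """m_eff(t) = log[C(t) / C(t + dt)]; plateaus at m_N once the
    excited-state term has decayed away."""
    return math.log(correlator(t, **kw) / correlator(t + dt, **kw))

for t in (1.0, 5.0, 10.0, 20.0):
    print(f"t = {t:4.1f}: m_eff = {effective_mass(t):.4f}")
```

At small $t$ the effective mass overshoots $m_N$ because of the excited state; it plateaus only at larger $t$, exactly where (in a real calculation) the noise is taking over. Better creation operators shrink $b$ and move the plateau to earlier times.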

As another addendum to Thomas's answer, let me also add that although $m_{\pi}^{-1} \simeq 1.4~\mbox{fm}$ "is not that large", to avoid finite-size artifacts lattice QCD calculations require $L^4$ lattices with $L$ several times larger than $m_{\pi}^{-1}$. For the simplest calculations (where physical masses are feasible) it can suffice to have $L\cdot m_{\pi} \gtrsim 4$. With a representative lattice spacing $a \approx 0.1~\mbox{fm}$, this $L \simeq 5.6~\mbox{fm}$ implies lattices of size at least $60^4$, corresponding to Dirac operator matrices of dimension $\gtrsim 4\cdot 12\cdot 60^4 \sim 6\times 10^8$. Inverting tens of thousands of roughly $\mbox{billion}\times\mbox{billion}$ (sparse) matrices is feasible, but not easy.
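The arithmetic above can be checked in a few lines of Python; the degrees-of-freedom counting $4\cdot 12\cdot 60^4$ is taken verbatim from the estimate quoted in this answer:

```python
HBARC = 197.327                    # hbar*c in MeV*fm
L_fm  = 4.0 * HBARC / 140.0        # m_pi * L >= 4 at the physical pion mass
sites = L_fm / 0.1                 # a ~ 0.1 fm -> ~56, round up to 60 per direction
dim   = 4 * 12 * 60**4             # dimension estimate quoted in the answer
print(f"L ~ {L_fm:.1f} fm, ~{sites:.0f} sites per direction, dim ~ {dim:.1e}")
```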

Of course, things get harder when the lattice volume needs to be big enough to contain two (possibly unbound) baryons. The calculation in arXiv:1706.06550 therefore uses $L\cdot m_{\pi} \simeq 20$, mostly achieved thanks to the heavier-than-physical masses but also helped by a rather large lattice spacing $a \approx 0.15~\mbox{fm}$ (which increases the discretization artifacts contaminating the results).
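The box implied by these numbers follows from the same $\hbar c$ conversion as before; a quick Python check (the site count is a rough estimate, not a quoted lattice geometry):

```python
# Box implied by L * m_pi ~ 20 at m_pi ~ 800 MeV with a ~ 0.15 fm
# (numbers from the paragraph above).
HBARC = 197.327                 # hbar*c in MeV*fm
L_fm  = 20.0 * HBARC / 800.0    # ~4.9 fm
sites = L_fm / 0.15             # ~33 sites per spatial direction
print(f"L ~ {L_fm:.1f} fm, ~{sites:.0f} sites per direction")
```

Note that $m_\pi L \simeq 20$ at 800 MeV fits in a box of only ~5 fm, whereas achieving it at the physical pion mass would require $L \approx 28$ fm; this is the sense in which the heavy masses do most of the work.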

This post imported from StackExchange Physics at 2017-11-22 17:13 (UTC), posted by SE-user David Schaich
answered Nov 13, 2017 by (70 points)
