
  Thermodynamic Speed Limit of Quantum Computing

+ 3 like - 0 dislike
1567 views

The speed of a real-world reversible computer scales linearly with applied force and entropy. To prove this, I will use $N$ for the number of steps (physical operations), $h$ for Planck's constant, $k$ for Boltzmann's constant, $S$ for entropy, and $T$ for the ambient temperature. By the Heisenberg energy-time uncertainty principle, $\delta E > h/\delta t$. Given that a quantum computer must be reversible, $\delta E$ is retained as heat, so $\delta S = \delta E/T > h/(T\,\delta t)$. Total entropy obviously can't exceed $O(k)$: by the Planck-Boltzmann formula, a state with $\delta S$ much greater than $k$ decoheres into $O(e^{\delta S/k})$ independent microstates, while only a fraction $O(e^{-\delta S/k})$ of them can be read out. This means that for $N$ steps, the average entropy per step must be $O(k/N)$. Substituting $\delta S \sim k/N$ into $\delta S > h/(T\,\delta t)$ gives $\delta t > hN/(kT)$, so we get a runtime of $\Omega(hN^2/(kT))$.
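For concreteness, here is a minimal numerical sketch of the bound just derived; the step counts and temperature are purely illustrative.

```python
# Minimal sketch of the bound above: with an average entropy budget of
# k/N per step, each step costs delta_t > h*N/(k*T), so N steps take
# Omega(h*N**2/(k*T)) in total. All input values are illustrative.
h = 6.626e-34   # Planck's constant, J*s
k = 1.381e-23   # Boltzmann's constant, J/K
T = 300.0       # ambient temperature, K

for N in (1e6, 1e9, 1e12):
    dt_min = h * N / (k * T)   # per-step time floor from the bound
    runtime = N * dt_min       # total: h*N**2/(k*T)
    print(f"N = {N:.0e}: step floor {dt_min:.2e} s, total {runtime:.2e} s")
```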

Grover’s and Shor’s algorithms don’t look too impressive under this basic analysis, especially once memory requirements and large $N$ are taken into account (see the Grover sketch below). Note that the Threshold Theorem assumes that the underlying physical error rate is a constant, independent of the size of the computation. The thermodynamic analysis, however, shows that the minimum possible physical error rate depends on the size of the computation, due to the scaling of entropy generation. This argument is completely general and is prior to any arguments about engineering. The Mandelstam-Tamm (MT) and Margolus-Levitin (ML) bounds dramatically overestimate achievable quantum evolution.
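To make the Grover remark concrete: if a Grover search over a space of size $M$ needs $N \approx \sqrt{M}$ oracle calls, the bound above puts its wall-clock floor at $\Omega(hM/(kT))$, the same scaling as a classical machine running at one operation per thermal time $kT/h$. A hedged sketch, with illustrative figures:

```python
import math

# Applying the Omega(h*N**2/(k*T)) floor to Grover search: N ~ sqrt(M)
# oracle calls for a search space of size M, so the wall-clock floor is
# Omega(h*M/(k*T)) -- the same scaling as a classical machine running at
# one operation per thermal time k*T/h. Figures are illustrative.
h, k, T = 6.626e-34, 1.381e-23, 300.0

for M in (2**40, 2**60):
    N = math.isqrt(M)                   # Grover oracle calls ~ sqrt(M)
    grover_floor = h * N**2 / (k * T)   # = h*M/(k*T) under the bound
    classical = M * h / (k * T)         # M steps at the thermal time
    print(f"M = 2^{M.bit_length() - 1}: Grover floor {grover_floor:.2e} s, "
          f"classical {classical:.2e} s")
```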

Furthermore, if you'll permit my rambling, QEC makes some very nonstandard assumptions about QFT. For instance, John D. Norton disputes the legitimacy of Landauer’s principle: 

The thermodynamics of computation assumes that computational processes at the molecular level can be brought arbitrarily close to thermodynamic reversibility and that thermodynamic entropy creation is unavoidable only in data erasure or the merging of computational paths, in accord with Landauer’s principle. The no-go result shows that fluctuations preclude completion of thermodynamically reversible processes. Completion can be achieved only by irreversible processes that create thermodynamic entropy in excess of the Landauer limit.

Landauer and Bennett ignore quantum fluctuations, which are a major impediment to their arguments. It's important to understand that QFT is a relativistic, nonlinear, and infinite-dimensional theory of interacting fields (not particles), where measurement imposes discreteness by projecting onto specific eigenstates. Discreteness is an interpretation, not a derivation. Uncertainty doesn't imply any form of speedup or analog error correction. In fact, there are now classical analogs to all the major aspects of QFT (see my other post). Spin is continuous, there's no "global phase", and amplitudes interfere, not probabilities. Gil Kalai has argued against QCs using the harmonic analysis of Boolean functions, but I would suggest we're dealing with "analog" physics. Everything in the universe has "quantum supremacy", so I don't see how the concept is useful. Appeals to Turing complexity are circular.

The stated asymptotic runtime complexities of quantum algorithms count calls to the underlying circuits/operations, not logical gates. But even granting QEC, a recent paper showed that the inefficiency of creating high-fidelity magic states poses a major challenge. The overhead of distilling non-Clifford (Toffoli) gates is massive (~130,000 physical qubits) and leads to huge slowdowns (~170 μs): "...quadratic speedups will not enable quantum advantage on early generations of such fault-tolerant devices unless there is a significant improvement in how we would realize quantum error-correction...this conclusion persists even if we were to increase the rate of logical gates in the surface code by more than an order of magnitude..."
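A hedged back-of-envelope on those figures: reading the ~170 μs as the cost of one fault-tolerant logical step (my reading of the quote) and assuming ~1 ns per classical step (my assumption, not from the paper), one can ask when a quadratic speedup breaks even:

```python
# Back-of-envelope: when does a quadratic speedup pay off if each
# fault-tolerant logical step costs ~170 us (figure quoted above),
# while a classical machine does a step in ~1 ns (assumed)?
t_quantum = 170e-6   # seconds per logical step (quoted figure)
t_classic = 1e-9     # seconds per classical step (illustrative assumption)

# Quantum time ~ sqrt(M)*t_quantum vs classical ~ M*t_classic;
# they cross where sqrt(M) = t_quantum/t_classic.
M_cross = (t_quantum / t_classic) ** 2
print(f"break-even problem size: M ~ {M_cross:.2e}")
print(f"classical time at break-even: {M_cross * t_classic:.1f} s")
```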
 
My Question: What is a straightforward answer to the fact that a QC needing $N$ operations requires $\Omega(N^2)$ time to terminate?

asked Sep 29, 2023 in Applied Physics by Matthew D Cory [ revision history ]
edited Oct 5, 2023
Most voted comments

@ThomasKlimpel Under this analysis, Shor's algorithm is $O((\log N)^6)$, and QCs are only supposed to give an advantage for large $N$. However, you can't error-correct infinite-dimensional fields, so logical qubits look to be a fantasy.
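For the record, a hedged sketch of what the runtime floor from the question says at Shor-scale operation counts; it takes $(\log_2 N)^6$ at face value as the physical operation count, which is this comment's assumption, not an established figure:

```python
# Hedged sketch: apply the Omega(h*N_ops**2/(k*T)) floor from the
# question to Shor's algorithm, taking N_ops ~ (log2 N)^6 at face value
# (per the comment above). All figures are illustrative.
h, k, T = 6.626e-34, 1.381e-23, 300.0

for bits in (1024, 2048, 4096):        # RSA-scale moduli
    n_ops = bits ** 6                  # (log2 N)^6 physical operations
    floor = h * n_ops**2 / (k * T)     # thermodynamic runtime floor
    print(f"{bits}-bit modulus: ~{n_ops:.2e} ops, floor ~{floor:.2e} s")
```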

@MatthewDCory I don't see why it should matter whether you use the size of the number to be factored as $N$ (as you did) or the number of bits of $N$ (as I did). It remains an exponential speedup over the best known classical algorithm.

P.S.: I am also not completely sure whether Matthew D Cory is still around at PhysicsOverflow.

@ThomasKlimpel I'm referring to physical operations, not inputs. There are memory requirements in addition to error-correction overhead. At present, these are enormous, so you misunderstand the bound. However, there are no "bits" in the quantum world. That's a basic misunderstanding of QFT. No one has been able to error-correct analog systems with continuous degrees of freedom to create significant numbers of logical qubits. QC people are rejecting the present ontology of the Standard Model. Even Unruh radiation was recently confirmed experimentally. Quantum fields are infinite-dimensional and subject to chaos as much as any analog system. Bound states don't change that fact any more than partly empty beer bottles.

Shor's Algorithm Does Not Factor Large Integers in the Presence of Noise
https://arxiv.org/abs/2306.10072

@MatthewDCory

Ever heard of GKP codes?

https://arxiv.org/abs/2106.12989

Also, it has been known for more than two decades that quantum computers can simulate topological quantum field theories, and vice versa:

https://arxiv.org/abs/quant-ph/0001071

The thermodynamics of quantum error correction was worked out almost three decades ago:

https://www.arxiv.org/abs/quant-ph/9706064

Now where is your BPP factoring algorithm?

@deeznuts You didn't read a thing I wrote. Topological invariants precisely destroy the degrees of freedom that give rise to the parallelism, so I don't care. Low-dimensional computation. Slow. The paper you referenced is by a computer scientist, not a physicist with deep knowledge of QFT. CS majors have been stuck in the quantum formalisms of the 1920s and have no idea what they are talking about. You can't error-correct infinite-dimensional fields just because you can amplify up to an abstract quantum state in a macroscopic measurement process. QFT and GR are described by classical field equations. FULL STOP! See Tian Yu Cao's points about structural realism; he is the current historian of field theory. Furthermore, the thermodynamic argument presented here is completely general. QEC assumes a constant error rate, but that's clearly not what happens. Elementary.

Most recent comments

What is QEC (third paragraph)?

@ArnoldNeumaier Sorry, Quantum Error Correction (QEC): https://en.wikipedia.org/wiki/Quantum_error_correction
