The speed of a real-world reversible computer scales linearly with applied force and entropy. To prove this, I will use $N$ for the number of steps (physical operations), $h$ for Planck's constant, $k$ for Boltzmann's constant, $S$ for entropy, and $T$ for ambient temperature. By the Heisenberg energy-time uncertainty principle, $\delta E > h/\delta t$. Since a quantum computer must be reversible, $\delta E$ is ultimately retained as heat, giving $\delta S = \delta E/T > h/(T\,\delta t)$. The total entropy generated over the computation cannot exceed $O(k)$: by the Planck-Boltzmann formula $S = k \ln W$, a $\delta S$ much greater than $k$ decoheres the state into $O(e^{\delta S/k})$ independent microstates, of which only a fraction $O(e^{-\delta S/k})$ can be read out. So over $N$ steps the average entropy per step must be $O(k/N)$. Substituting $\delta S \sim k/N$ into $\delta S > h/(T\,\delta t)$ gives $\delta t > hN/(kT)$, for a total runtime of $\Omega(hN^2/(kT))$.
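To make the bound concrete, here is a minimal numeric sketch; the values of $N$ and the choice $T = 300\,\mathrm{K}$ are my illustrative assumptions, not part of the argument above:

```python
# Evaluate the claimed Omega(h * N^2 / (k * T)) runtime lower bound.
h = 6.62607015e-34   # Planck's constant, J*s
k = 1.380649e-23     # Boltzmann's constant, J/K
T = 300.0            # assumed ambient temperature, K

for N in (1e6, 1e9, 1e12):
    dt_min = h * N / (k * T)       # per-step time from delta_S ~ k / N
    total = h * N ** 2 / (k * T)   # total runtime lower bound
    print(f"N = {N:.0e}: dt > {dt_min:.2e} s, total > {total:.2e} s")
```

At $N = 10^9$ this already gives a total runtime on the order of $10^5$ seconds (about two days), which is the scaling behind my question below.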
Grover's and Shor's algorithms don't look too impressive under this basic analysis, especially once memory requirements and large $N$ are taken into account. Note that the Threshold Theorem assumes the underlying physical error rate is a constant, independent of the size of the computation. However, the thermodynamic analysis above shows that the minimum achievable physical error rate depends on the size of the computation, because of how entropy generation scales with $N$. This argument is completely general and is prior to any arguments about engineering. The Mandelstam-Tamm (MT) and Margolus-Levitin (ML) bounds dramatically overestimate achievable quantum evolution.
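To illustrate that gap, here is a rough comparison of the ML per-operation limit against the thermodynamic per-step bound derived above; taking the available energy per operation to be the thermal scale $E \approx kT$ is my assumption for this sketch:

```python
# Margolus-Levitin per-op limit (t >= h / (4E)) vs. the thermodynamic
# per-step bound h * N / (k * T) derived above. E = k * T is assumed.
h = 6.62607015e-34
k = 1.380649e-23
T = 300.0

t_ml = h / (4 * k * T)             # ML bound: independent of N
for N in (1e3, 1e6, 1e9):
    t_thermo = h * N / (k * T)     # thermodynamic bound: grows with N
    print(f"N = {N:.0e}: ML {t_ml:.2e} s vs thermo {t_thermo:.2e} s, "
          f"ratio {t_thermo / t_ml:.1e}")
```

Under these assumptions the thermodynamic bound exceeds the ML bound by a factor of $4N$, which is why I say the standard quantum speed limits overestimate what is achievable.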
Furthermore, if you'll permit my rambling, QEC makes some very nonstandard assumptions about QFT and about the thermodynamics of computation. For instance, John D. Norton disputes the legitimacy of Landauer's principle:
> The thermodynamics of computation assumes that computational processes at the molecular level can be brought arbitrarily close to thermodynamic reversibility and that thermodynamic entropy creation is unavoidable only in data erasure or the merging of computational paths, in accord with Landauer's principle. The no-go result shows that fluctuations preclude completion of thermodynamically reversible processes. Completion can be achieved only by irreversible processes that create thermodynamic entropy in excess of the Landauer limit.
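For reference, the Landauer limit that Norton's no-go result is measured against is $kT \ln 2$ per bit erased; a minimal sketch, assuming $T = 300\,\mathrm{K}$:

```python
import math

# Landauer's limit: minimum dissipation of k * T * ln(2) per bit erased.
k = 1.380649e-23          # Boltzmann's constant, J/K
T = 300.0                 # assumed ambient temperature, K
eV = 1.602176634e-19      # J per electronvolt

E_bit = k * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E_bit:.2e} J/bit "
      f"({E_bit / eV:.3f} eV/bit)")
```

Norton's point is that real processes must dissipate strictly more than this to run to completion against thermal fluctuations.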
Landauer and Bennett ignore quantum fluctuations, which are a major impediment to their arguments. It's important to understand that QFT is a relativistic, nonlinear, and infinite-dimensional theory of interacting fields (not particles), in which measurement imposes discreteness by projecting onto specific eigenstates. Discreteness is an interpretation, not a derivation. Uncertainty doesn't imply any form of speedup or analog error correction. In fact, there are now classical analogs to all the major aspects of QFT (see my other post): spin is continuous, there is no "global phase", and amplitudes interfere, not probabilities. Gil Kalai has argued against QCs via the harmonic analysis of Boolean functions, but I would suggest we're dealing with "analog" physics. Everything in the universe has "quantum supremacy", so I don't see how the concept is useful. Appeals to Turing complexity are circular.
The stated asymptotic runtime complexities of quantum algorithms count calls to the underlying circuits/operations, not logical gates. But even assuming QEC, a recent paper showed that the inefficiency of creating high-fidelity magic states poses a major challenge. The overhead of distilling non-Clifford Toffoli gates is massive (~130,000 physical qubits) and leads to huge slowdowns (~170 μs per Toffoli): "...quadratic speedups will not enable quantum advantage on early generations of such fault-tolerant devices unless there is a significant improvement in how we would realize quantum error-correction...this conclusion persists even if we were to increase the rate of logical gates in the surface code by more than an order of magnitude..."
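Taking those numbers at face value, here is a break-even sketch for a quadratic speedup; the classical per-op time of 1 ns (a single serial ~1 GHz core) is my assumption, and real classical parallelism would push the crossover far higher:

```python
# Break-even point for a quadratic speedup: the quantum machine runs
# ~sqrt of the classical step count but pays far more per step.
t_toffoli = 170e-6     # s per fault-tolerant Toffoli (~170 us, as quoted)
t_classical = 1e-9     # s per classical op (assumed: one serial 1 GHz core)

n_star = t_toffoli / t_classical       # N where N * t_q = N^2 * t_c
t_star = n_star * t_toffoli            # wall-clock time at break-even
print(f"break-even at N ~ {n_star:.1e} quantum steps (~{t_star:.0f} s)")
```

Even in this most charitable serial comparison, the device must sustain ~$10^5$ error-corrected Toffolis before merely drawing level, and the quoted conclusion is that realistic classical parallelism erases even this crossover on early fault-tolerant devices.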
My Question: What is a straightforward answer to the argument that a QC needing $N$ operations takes $\Omega(N^2)$ time to terminate?