In Chapter 7 of Preskill's quantum computing lecture notes (around page 82), he shows that a Pauli channel has quantum capacity $Q \geq 1-H(p_I,p_X,p_Y,p_Z)$, where $H$ is the Shannon entropy and $p_I, p_X, p_Y, p_Z$ are the probabilities of the channel acting as the corresponding Pauli operator. In particular, this gives the 'hashing bound' (or 'random coding bound') for the quantum capacity of the depolarizing channel: $Q(p) \geq 1-H(p,1-p)-p\log_2 3$.
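For concreteness, here is a small numerical sketch of that bound (my own illustration, not from the notes): it evaluates $1-H(p,1-p)-p\log_2 3$ and bisects to find where the bound vanishes, which happens near $p \approx 0.1893$.

```python
import math

def h2(p):
    """Binary Shannon entropy H(p, 1-p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hashing_bound(p):
    """Hashing lower bound on Q for the depolarizing channel with total
    error probability p (X, Y, Z each occurring with probability p/3)."""
    return 1 - h2(p) - p * math.log2(3)

# Bisect for the threshold where the bound crosses zero
lo, hi = 0.1, 0.3
for _ in range(60):
    mid = (lo + hi) / 2
    if hashing_bound(mid) > 0:
        lo = mid
    else:
        hi = mid
print(f"hashing bound vanishes near p = {lo:.4f}")  # ~ 0.1893
```

The Shor–Smolin point is precisely that $Q(p)$ remains positive slightly beyond this threshold.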
He then describes work of Shor and Smolin [1]: if you take an $m$-qubit repetition code and concatenate it with a suitable random code, you can do better than the hashing bound. The argument is that, after making the $m-1$ syndrome measurements of the inner repetition code, the code viewed as a superchannel is again a Pauli channel, with entropy $H_i$ depending on the measurement outcome $i$. Averaging over the $2^{m-1}$ possible classical measurement outcomes, you can find the average entropy of the superchannel, $\langle H \rangle$.
[1] P.W. Shor and J.A. Smolin, “Quantum Error-Correcting Codes Need Not Completely Reveal the Error Syndrome,” quant-ph/9604006; D.P. DiVincenzo, P.W. Shor, and J.A. Smolin, “Quantum Channel Capacity of Very Noisy Channels,” quant-ph/9706061.
Then, by random coding on this new channel, you can achieve a rate $R=\frac{1-\langle H \rangle}{m}$ (dividing by $m$ to express the rate in qubits per use of the original channel).
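In case it helps to pin down what I mean, the rate formula can be written as a short sketch (the ensemble below is a made-up placeholder, not the actual Shor–Smolin conditional distributions):

```python
import math

def shannon(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

def averaged_rate(outcomes, m):
    """Rate (1 - <H>)/m from random coding on the superchannel.

    `outcomes` is a list of (weight, (pI, pX, pY, pZ)) pairs, one per
    classical syndrome outcome of the inner m-qubit repetition code;
    the weights sum to 1.  These numbers are hypothetical."""
    avg_H = sum(w * shannon(p) for w, p in outcomes)
    return (1 - avg_H) / m

# Toy ensemble for m = 3 (2^{m-1} = 4 syndrome outcomes), purely
# to illustrate the formula:
toy = [
    (0.55, (0.94, 0.02, 0.02, 0.02)),
    (0.15, (0.85, 0.05, 0.05, 0.05)),
    (0.15, (0.85, 0.05, 0.05, 0.05)),
    (0.15, (0.70, 0.10, 0.10, 0.10)),
]
print(averaged_rate(toy, 3))
```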
I don't see how random coding works here. You have a random code that is optimal for each particular conditional channel, but how do you decide which one to use? By the time you learn the classical measurement outcomes for your channel, you have already sent the codeword.
So two questions:
1) If you have an ensemble of Pauli channels with average entropy $\langle H\rangle$, can you by using random coding achieve a rate $1-\langle H \rangle$?
2) If you can't do this, am I misinterpreting the results of Shor and Smolin or Preskill's exposition?
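To illustrate why averaging could help at all (my own numeric check, with made-up conditional distributions): Shannon entropy is concave, so the average of the conditional entropies $\langle H \rangle$ is at most the entropy of the averaged distribution, and hence $1-\langle H \rangle$ is at least the hashing bound applied to the syndrome-ignoring channel.

```python
import math

def shannon(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

# Two hypothetical conditional Pauli distributions, each seen with
# probability 1/2 (illustrative numbers only):
p_a = (0.9, 0.1, 0.0, 0.0)
p_b = (0.9, 0.0, 0.0, 0.1)
avg_dist = tuple((a + b) / 2 for a, b in zip(p_a, p_b))

avg_H = 0.5 * shannon(p_a) + 0.5 * shannon(p_b)  # <H>
H_avg = shannon(avg_dist)                        # H of the averaged channel

# Concavity: <H> <= H(<p>), so 1 - <H> >= 1 - H(<p>)
print(f"<H> = {avg_H:.4f}  vs  H(<p>) = {H_avg:.4f}")
```

But this only shows the averaged bound is larger, not that random coding actually achieves it without knowing the syndrome in advance, which is what question 1 asks.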
This post has been migrated from (A51.SE)