To start out, I think it is important to clarify one issue. If we think of many quantum messages over which the information is spread, we can ask two questions that both seem consistent with your phrasing: 1) If I measure each message as I receive it, what is the information gained from that message, or 2) If I store the quantum messages, how much information can be extracted from the first $n-1$ of them, versus the first $n$ of them?
These problems are not the same, and can have very different answers depending on the encoding. Indeed, quantum messages sent over two or more zero-capacity channels can contain non-zero information (see arXiv:0807.4935, for example). We've only known about weirdness like this for about 3 years, so it's worth keeping in mind that older papers may invoke additivity conjectures which have since been proven false.
As regards the black hole information problem, it is (2) that seems the most relevant. Holevo information gives an obvious way to bound the information contained in the system, and to calculate an upper bound on the rate of information leakage. This can probably be improved in the black hole setting, since there is no way to introduce additional randomness (which would essentially mean creating information).
The Holevo information is given by $\chi = S(\rho) - \sum_i p_i S(\rho_i)$, where $\rho = \sum_i p_i \rho_i$ is the density matrix for the system, $\rho_i$ is the density matrix used to encode a particular message which occurs with probability $p_i$, and $S(\rho) = -\mbox{Tr}(\rho \log_2 \rho)$ is the von Neumann entropy. The mutual information between the encoded information and the measurement outcomes is bounded from above by this quantity (see the link in Frédéric Grosshans' answer for a more detailed description of Holevo's theorem). Thus for a given encoding, you can calculate the Holevo information as a function of the number of messages received, which gives you the kind of thing you are looking for.
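To make the formula concrete, here is a minimal sketch in Python/NumPy that computes $\chi$ for a toy ensemble of my own choosing (equiprobable pure states $|0\rangle$ and $|+\rangle$ — not anything from the black hole setting, just an illustration of the definition):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0 * log 0 = 0, so drop zero eigenvalues
    return -np.sum(evals * np.log2(evals))

def holevo_chi(probs, states):
    """chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    rho = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Toy ensemble: |0><0| and |+><+| with probability 1/2 each.
zero = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = np.full((2, 2), 0.5)
chi = holevo_chi([0.5, 0.5], [zero, plus])
print(chi)  # ≈ 0.6009 bits, strictly less than the 1 bit used to pick the state
```

Since the ensemble states are pure, the second term vanishes and $\chi$ is just the entropy of the average state; the fact that it comes out below 1 bit is exactly Holevo's bound in action.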
You also ask "For a generic decoding, is it known how much time you need in order to gain access to a finite fraction of the information? Are there some universal results about the asymptotics of such process (in the spirit of "critical exponents")?"
The answer is that the trivial upper bound on the time is infinite, while Holevo's bound supplies a lower bound: you can extract at most $n$ bits of information per $n$ qubits received. Given more information about the process, better bounds are of course probably possible.
The reason for the infinite upper bound is as follows: if you encode quantum information with an error correction code which can detect errors on up to $d$ sites, then any measurement restricted to fewer than $d$ sites necessarily yields the encoded information with probability no better than guessing.
Now, it is easy to construct codes with $d$ arbitrarily large as long as the encoding can be made even larger, and hence we get the infinite upper bound.
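As a small numerical illustration of the indistinguishability claim (my own toy example, not a full distance-$d$ code), take the two GHZ-type states $(|000\rangle \pm |111\rangle)/\sqrt{2}$: their reduced density matrices on any two of the three qubits are identical, so no measurement confined to two sites can tell the two encodings apart. A quick check in Python/NumPy:

```python
import numpy as np

def ket(bits):
    """Computational-basis state |bits> as a vector, e.g. ket('000')."""
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

def trace_out_last_qubit(rho, keep_dim):
    """Partial trace over the last qubit of a (keep_dim*2)-dim state."""
    rho = rho.reshape(keep_dim, 2, keep_dim, 2)
    return np.einsum('ikjk->ij', rho)  # sum over the traced index pair

# The two GHZ-type states (|000> ± |111>)/sqrt(2)
plus  = (ket('000') + ket('111')) / np.sqrt(2)
minus = (ket('000') - ket('111')) / np.sqrt(2)

# Reduced states on the first two qubits
rho_plus  = trace_out_last_qubit(np.outer(plus,  plus),  4)
rho_minus = trace_out_last_qubit(np.outer(minus, minus), 4)

# Identical reduced states: two-site measurements give zero information
# about the sign, i.e. about which message was encoded.
print(np.allclose(rho_plus, rho_minus))  # True
```

Both reduced states come out as $(|00\rangle\langle 00| + |11\rangle\langle 11|)/2$, so the relative sign (the encoded bit) is invisible until you can measure all three sites at once.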
You can use the same trick to make pretty much any distribution you want, as long as it respects the lower bound given by Holevo information (which is tight for some encodings).
This post has been migrated from (A51.SE)