On the logical, i.e., mathematical level, probability is specified by a probability space consisting of a space $\Omega$ of all conceivable experiments relevant in the situation to which probability is applied, a sigma algebra of measurable subsets of $\Omega$, and a probability measure assigning a probability to each measurable subset. One can then define random variables as numbers $x(\omega)$ that depend on the experiment $\omega\in\Omega$ (formally, measurable functions from $\Omega$ to the complex numbers), their expectations (formally, integrals with respect to the measure), their variances, standard deviations, etc., and hence make the usual statistical predictions together with error estimates.
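In standard measure-theoretic notation (the symbols $\mathcal{A}$, $P$, and $\langle\,\cdot\,\rangle$ below are one common convention, not fixed by the text above), this reads: a probability space is a triple $(\Omega,\mathcal{A},P)$ with $P(\Omega)=1$, a random variable is a measurable function $x:\Omega\to\mathbb{C}$, and
$$\langle x\rangle := \int_\Omega x(\omega)\,dP(\omega), \qquad \operatorname{var}(x) := \langle\,|x-\langle x\rangle|^2\,\rangle, \qquad \sigma(x) := \sqrt{\operatorname{var}(x)}$$
define its expectation, variance, and standard deviation.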
Thus from a purely logical point of view, probabilities are statements about sets of experiments, called (in physics) ensembles. Talking about the probability of something always means embedding this something into an imagined ensemble of which it is a more or less typical case. Changing the ensemble (for example, by assuming additional knowledge) changes the probabilities and hence the meaning. In mathematics, this is modeled by the concept of a conditional expectation: the condition refers to how the ensemble is selected.
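A minimal formal version of this, for the simplest case of conditioning on an event $B$ with $P(B)>0$ (the general conditional expectation relative to a sub-sigma-algebra works analogously):
$$P(A\mid B) := \frac{P(A\cap B)}{P(B)}, \qquad \langle x\mid B\rangle := \frac{1}{P(B)}\int_B x(\omega)\,dP(\omega).$$
Changing the condition $B$, i.e., the way the ensemble is selected, changes all probabilities and expectations, even though $\Omega$ and the random variable $x$ themselves are unchanged.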
Without stating the condition, and hence specifying the ensemble, nothing at all can be predicted by a probabilistic model. Given the model (i.e., having fixed the ensemble), one can, however, predict expectations and standard deviations of random variables. By the law of large numbers, these predictions are valid empirically (and hence operationally verifiable) when the expectation is replaced by the average over sufficiently many independent, identically distributed realizations of the experiment.
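A minimal numerical sketch of this verification procedure (the exponential distribution and the sample sizes below are arbitrary illustrative choices, not anything prescribed above):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Fixing the ensemble means fixing the distribution of the random variable x.
# Here (an arbitrary illustrative choice) x is exponentially distributed,
# so the model predicts <x> = 1 and sigma(x) = 1.
mu_model, sigma_model = 1.0, 1.0

for n in [10, 1000, 100000]:
    # n independent, identically distributed realizations of the experiment
    samples = rng.exponential(scale=1.0, size=n)
    mean = samples.mean()               # empirical substitute for <x>
    stderr = sigma_model / np.sqrt(n)   # predicted spread of the empirical mean
    print(f"n={n:6d}: sample mean = {mean:.4f}, "
          f"model <x> = {mu_model}, predicted error ~ {stderr:.4f}")
```

The empirical mean approaches the predicted expectation at the rate $\sigma(x)/\sqrt{n}$; this is exactly the kind of check that requires many independent realizations, and that becomes unavailable when the ensemble has size 1.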
The fact that this ensemble is necessarily imagined, i.e., must be selected from an infinity of possible conditions, already implies that it depends on the subject that uses the model. (There are so-called noninformative priors that attempt to get around this subjectivity, but choosing an ensemble through a noninformative prior is still a choice that has to be made, and hence subjective. Moreover, in the cases of interest to physics, noninformative priors usually do not even exist. For example, there is no sensible noninformative prior on the set of natural numbers or on the set of real numbers that would define a probability distribution.)
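To make the nonexistence claim precise in the simplest case (reading 'noninformative' as assigning the same weight $c$ to every natural number): countable additivity would force
$$1 = P(\mathbb{N}) = \sum_{n=1}^{\infty} P(\{n\}) = \sum_{n=1}^{\infty} c,$$
which is $0$ for $c=0$ and $\infty$ for $c>0$, so no choice of $c$ works. The translation-invariant candidate on the real numbers, Lebesgue measure, fails for the same reason, since $\lambda(\mathbb{R})=\infty$ cannot be normalized to $1$.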
Objectivity (and hence a scientific description) arises only if the ensemble is agreed upon. This agreement (if it exists) is a social convention of the scientists in our present culture; to the extent that such an agreement exists, a model may be considered objective.
Even within the limited domain of objectivity granted by the social convention of our present culture, verifying a probabilistic model requires the ability to perform sufficiently many independent realizations of the experiment. This is possible in the case of microscopic quantum mechanics, since the models there are about molecular or submolecular entities, and these are (according to our present understanding of the laws of Nature) identical throughout the universe. This makes it feasible to prepare many microscopic systems independently and identically distributed, to sufficient precision for checking predictions.
However, when applied to macroscopic systems, this is no longer the case. Already for a gram of ordinary matter (e.g., pure water), we (or any conceivable subject confined to our universe) can prepare only the macroscopic (hydrodynamic) degrees of freedom. And it is impossible to make independent replications of the Earth, the Sun, or our Galaxy. These are unique objects, for which a probabilistic analysis is logically meaningless, since the size of the corresponding ensemble is 1 and no law of large numbers applies.
The way we apply statistical mechanics to the Sun is by predicting not the state of the Sun but the state of tiny few-body systems modeling nuclear reactions within the Sun. Similarly, the statistical mechanics of galaxies used in cosmology reduces the number of degrees of freedom of each galaxy to a few numbers, again reducing 'our' galaxy to an anonymous typical element of a valid ensemble.
Thus on the level of individual objects, probability concepts lose their logical basis and their operationally verifiable character.
Now consider a model of the whole universe. Since it contains our unique Earth, and since we may define our universe as the smallest closed physical system containing the Earth, our universe is unique. It has the same character as the Earth, the Sun, or the Galaxy. By the same argument as above, one can apply statistical concepts to little pieces of the universe, but not to the universe itself. Therefore probability concepts applied to the universe as a whole have no logical basis and no operationally verifiable character; using them in this context is logically meaningless.
This may be the reason why the Many Worlds view is popular. This view asserts (without any experimental evidence) that our universe is only one among many universes that are independent and identically distributed. However, nobody has ever spelled out a sound probability distribution for the resulting ensemble. There are infinitely many choices, and all of them have exactly the same observable consequences, namely none at all. For whatever we observe experimentally is an observation about some random variables of our own universe, hence of the unique universe that we happen to be in. It is absolutely impossible to observe anything about any of the other assumed universes, since observation requires interaction, and by definition a closed system does not interact with anything outside it.
Hence we cannot check any nontrivial assertion about the ensemble of universes. This makes any discussion of it purely subjective and unscientific. The Many Worlds view may be popular, but it has nothing to do with science. It is pure science fiction.
Since, as we saw, probability concepts applied to the universe are logically meaningless, any logically sound theory of everything must necessarily be deterministic. That we have not yet found a satisfying one is the usual course of science; it implies that we are not yet at the end of the road of discoveries to be made.