The theory of ensembles can be approached from many points of view (there's no need to imagine any representative systems, for example). There are many more ensembles commonly used in physics than the microcanonical, canonical, and grand-canonical ones. For example, there's the angular-momentum ensemble, described by Gibbs himself (Elementary Principles in Statistical Mechanics, chap. IV, around eqn (98)), the pressure ensemble, eg:
the Gaussian ensemble, eg:
the evaporative ensemble, eg:
and see also eg
In fact, there are infinitely many ensembles. An ensemble is simply a probability distribution, usually constant in time, over the possible (micro)states of a system. This distribution is associated with a specific preparation protocol, and reflects any regularities in the observed values of various quantities ("observables") of the system under that preparation; typically macroscopic observables, and typically constants of the motion. Energy and particle number are just examples.
That is: when we prepare a system according to a specific experimental procedure, which doesn't fix its microscopic state but gives rise to reproducible observations (over repeated identical preparations) of some quantities, then we can associate a probability distribution with that preparation. This distribution gives, for each microstate, the probability that the system is in that microstate as a result of the preparation, and it is constructed so as to reflect the statistical properties of the preparation under repetitions. (A "preparation" can also be a passive act; that is, we simply wait for the system that results from a particular physical phenomenon.)
The canonical ensemble, for example, specifies that on average we should observe a particular value of the total energy of the system, under repetitions of a specific preparation. The microcanonical ensemble specifies that we should observe in every instance a single specific value of the total energy. The Gaussian ensemble mentioned above specifies an average observed total energy, and also a specific standard deviation around it, under repeated preparations.
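As a rough sketch of what these prescriptions look like in formulas (the notation here is mine, not taken from the references above: microstates $x$, Hamiltonian $H(x)$, distribution $p(x)$, prescribed values $E$, $\bar E$, $\sigma$), the three ensembles correspond to different constraints on $p$:

$$
\begin{aligned}
&\text{microcanonical:}&& H(x) = E \ \text{ in every prepared instance (so $p$ is supported on the energy shell)},\\
&\text{canonical:}&& \langle H\rangle \equiv \int H(x)\,p(x)\,\mathrm{d}x = \bar E,\\
&\text{Gaussian:}&& \langle H\rangle = \bar E \quad\text{and}\quad \big\langle (H-\bar E)^2 \big\rangle = \sigma^2 .
\end{aligned}
$$

The maximum-entropy construction described below then yields, respectively, a distribution uniform on the energy shell, $p(x)\propto \mathrm{e}^{-\beta H(x)}$, and $p(x)\propto \mathrm{e}^{-\beta H(x)-\gamma H(x)^2}$, with the multipliers fixed by the prescribed values (the exact parametrization of the Gaussian case varies among the references above).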
Good reviews (both old and recent) of how ensembles are constructed in general are eg:
The general idea is as follows. When we choose a probability distribution over the microstates of our system, it induces a probability distribution for every observable of the system, because an observable is a function of the microstate. We fix the statistical properties that these derived distributions should have, for the observables of interest, so as to express the statistical regularities observed in the preparation procedure. Fixing these properties doesn't determine a unique probability distribution, but identifies a set of probability distributions having the desired properties. From this set we select the "broadest" probability distribution, where broadness is measured by the Shannon entropy (with respect to some base measure on state space); that is, we select the maximum-entropy distribution compatible with the constraints.
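In formulas, the construction looks roughly like this (again in my own notation: $m(x)$ is the base measure, $A_k(x)$ the observables of interest, $a_k$ the values prescribed by the preparation):

$$
\text{maximize}\quad S[p] = -\int p(x)\,\ln\frac{p(x)}{m(x)}\,\mathrm{d}x
\quad\text{subject to}\quad
\int p(x)\,\mathrm{d}x = 1,\qquad \int A_k(x)\,p(x)\,\mathrm{d}x = a_k .
$$

Introducing a Lagrange multiplier $\lambda_k$ for each constraint gives the exponential-family form

$$
p(x) = \frac{m(x)}{Z(\lambda)}\,\exp\!\Big(-\sum_k \lambda_k A_k(x)\Big),
\qquad
Z(\lambda) = \int m(x)\,\exp\!\Big(-\sum_k \lambda_k A_k(x)\Big)\,\mathrm{d}x,
$$

with the $\lambda_k$ fixed by the requirement $-\partial\ln Z/\partial\lambda_k = a_k$. Taking $A_1 = H$ gives the canonical ensemble; taking $A_1 = H$ and $A_2 = H^2$ gives the Gaussian ensemble mentioned above.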
Some of the articles cited above also show that different ensembles can be considered equivalent only in specific situations, typically for systems with many degrees of freedom. In general they are not equivalent, and one must pay attention to which ensemble one is using; this is especially true for small systems such as atomic nuclei.
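A toy numerical illustration of this (in)equivalence (not taken from the references above; the model and function names are purely illustrative): take $N$ two-level spins and look at the marginal distribution of the number of excited spins in a small subsystem. A microcanonical-like preparation fixes the total number of excitations exactly (giving a hypergeometric marginal), while a canonical-like one fixes it only on average, with independent spins (giving a binomial marginal). For small $N$ the two marginals differ noticeably; for large $N$ they practically agree:

```python
from math import comb

def subsystem_pmf_microcanonical(N, K, n):
    """Probability of j excited spins in a subsystem of n spins, when the whole
    system of N two-level spins is prepared with *exactly* K excitations
    (uniform over all such microstates): a hypergeometric distribution."""
    return [comb(K, j) * comb(N - K, n - j) / comb(N, n) for j in range(n + 1)]

def subsystem_pmf_canonical(N, K, n):
    """Same marginal when only the *average* number of excitations is fixed to K
    (independent spins, each excited with probability p = K/N): a binomial
    distribution."""
    p = K / N
    return [comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(n + 1)]

n = 4  # size of the observed subsystem
for N, K in [(8, 4), (10_000, 5_000)]:  # small system vs large system
    micro = subsystem_pmf_microcanonical(N, K, n)
    canon = subsystem_pmf_canonical(N, K, n)
    diff = max(abs(a - b) for a, b in zip(micro, canon))
    print(f"N = {N}: largest difference between the two marginals = {diff:.4f}")
```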
This post imported from StackExchange Physics at 2025-01-23 14:43 (UTC), posted by SE-user pglpm