Ergodicity

In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process. Ergodicity is a property of the system; it is a statement that the system cannot be reduced or factored into smaller components. Ergodic theory is the study of systems possessing ergodicity.

Ergodic systems occur in a broad range of systems in physics and in geometry. This can be roughly understood as the result of a common phenomenon: the motion of particles, that is, of geodesics on a hyperbolic manifold, is divergent; when that manifold is compact, that is, of finite size, those orbits return to the same general area, eventually filling the entire space.

Ergodic systems capture the common-sense, everyday notions of randomness: that smoke might come to fill all of a smoke-filled room, that a block of metal might eventually come to have the same temperature throughout, or that flips of a fair coin may come up heads and tails half the time. A stronger concept than ergodicity is that of mixing, which aims to describe mathematically the common-sense notions of mixing, such as mixing drinks or mixing cooking ingredients.

The proper mathematical formulation of ergodicity is founded on the formal definitions of measure theory and dynamical systems, and rather specifically on the notion of a measure-preserving dynamical system. The origins of ergodicity lie in statistical physics, where Ludwig Boltzmann formulated the ergodic hypothesis.

## Informal explanation

Ergodicity occurs in broad settings in physics and mathematics. All of these settings are unified by a common mathematical description, that of the measure-preserving dynamical system. An informal description of this, and a definition of ergodicity with respect to it, is given immediately below. This is followed by a description of ergodicity in stochastic processes. They are one and the same, despite using dramatically different notation and language. A review of ergodicity in physics, and in geometry follows. In all cases, the notion of ergodicity is exactly the same as that for dynamical systems; there is no difference, except for outlook, notation, style of thinking and the journals where results are published.

### Measure-preserving dynamical systems

The mathematical definition of ergodicity aims to capture ordinary every-day ideas about randomness. This includes ideas about systems that move in such a way as to (eventually) fill up all of space, such as diffusion and Brownian motion, as well as common-sense notions of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, the dust in Saturn's rings and so on. To provide a solid mathematical footing, descriptions of ergodic systems begin with the definition of a measure-preserving dynamical system. This is written as ${\displaystyle (X,{\mathcal {A}},\mu ,T).}$

The set ${\displaystyle X}$ is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure ${\displaystyle \mu }$ is understood to define the natural volume of the space ${\displaystyle X}$ and of its subspaces. The collection of subspaces is denoted by ${\displaystyle {\mathcal {A}}}$, and the size of any given subset ${\displaystyle A\subset X}$ is ${\displaystyle \mu (A)}$; the size is its volume. Naively, one could imagine ${\displaystyle {\mathcal {A}}}$ to be the power set of ${\displaystyle X}$; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach-Tarski paradox). Thus, conventionally, ${\displaystyle {\mathcal {A}}}$ consists of the measurable subsets, the subsets that do have a volume. It is always taken to be the Borel ${\displaystyle \sigma }$-algebra: the collection of subsets that can be constructed from the open sets by taking countable intersections, unions and set complements; these can always be taken to be measurable.

The time evolution of the system is described by a map ${\displaystyle T:X\to X}$. Given some subset ${\displaystyle A\subset X}$, its map ${\displaystyle T(A)}$ will in general be a deformed version of ${\displaystyle A}$ - it is squashed or stretched, folded or cut into pieces. Mathematical examples include the baker's map and the horseshoe map, both inspired by bread-making. The set ${\displaystyle T(A)}$ must have the same volume as ${\displaystyle A}$; the squashing/stretching does not alter the volume of the space, only its distribution. Such a system is "measure-preserving" (area-preserving, volume-preserving).

A formal difficulty arises when one tries to reconcile the volume of sets with the need to preserve their size under a map. The problem arises because, in general, several different points in the domain of a function can map to the same point in its range; that is, there may be ${\displaystyle x\neq y}$ with ${\displaystyle T(x)=T(y)}$. Worse, a single point ${\displaystyle x\in X}$ has no size. These difficulties can be avoided by working with the inverse map ${\displaystyle T^{-1}:{\mathcal {A}}\to {\mathcal {A}}}$; it maps any given subset ${\displaystyle A\subset X}$ to the parts that were assembled to make it: these parts are ${\displaystyle T^{-1}(A)\in {\mathcal {A}}}$. The inverse map has the important property of not losing track of where things came from; more strongly, any (measure-preserving) map ${\displaystyle {\mathcal {A}}\to {\mathcal {A}}}$ is the inverse of some map ${\displaystyle X\to X}$. The proper definition of a volume-preserving map is one for which ${\displaystyle \mu (A)=\mu (T^{-1}(A))}$, because ${\displaystyle T^{-1}(A)}$ describes all the pieces that ${\displaystyle A}$ came from.

One is now interested in studying the time evolution of the system. If a set ${\displaystyle A\in {\mathcal {A}}}$ eventually comes to fill all of ${\displaystyle X}$ over a long period of time (that is, if ${\displaystyle T^{n}(A)}$ approaches all of ${\displaystyle X}$ for large ${\displaystyle n}$), the system is said to be ergodic. If every set ${\displaystyle A}$ behaves in this way, the system is a conservative system, placed in contrast to a dissipative system, where some subsets ${\displaystyle A}$ wander away, never to be returned to. An example would be water running downhill: once it has run down, it will never come back up again. The lake that forms at the bottom of this river can, however, become well-mixed. The Hopf decomposition states that every measure-preserving system can be split into two parts: the conservative part and the dissipative part.

Mixing is a stronger statement than ergodicity. Mixing asks for this ergodic property to hold between any two sets ${\displaystyle A,B}$, and not just between some set ${\displaystyle A}$ and ${\displaystyle X}$. That is, a system is said to be (topologically) mixing if, for any two sets ${\displaystyle A,B\in {\mathcal {A}}}$, there is an integer ${\displaystyle N}$ such that ${\displaystyle T^{n}(A)\cap B\neq \varnothing }$ for all ${\displaystyle n>N}$. Here, ${\displaystyle \cap }$ denotes set intersection and ${\displaystyle \varnothing }$ is the empty set. Other notions of mixing include strong and weak mixing, which describe the notion that the mixed substances intermingle everywhere, in equal proportion. This can be non-trivial, as practical experience of trying to mix sticky, gooey substances shows.

### Ergodic processes

The above discussion appeals to a physical sense of a volume. The volume does not have to literally be some portion of 3D space; it can be some abstract volume. This is generally the case in statistical systems, where the volume (the measure) is given by the probability. The total volume corresponds to probability one. This correspondence works because the axioms of probability theory are identical to those of measure theory; these are the Kolmogorov axioms.

The idea of a volume can be very abstract. Consider, for example, the set of all possible coin-flips: the set of infinite sequences of heads and tails. Assigning the volume of 1 to this space, it is clear that half of all such sequences start with heads, and half start with tails. One can slice up this volume in other ways: one can say "I don't care about the first ${\displaystyle n-1}$ coin-flips; but I want the ${\displaystyle n}$'th of them to be heads, and then I don't care about what comes after that". This can be written as the set ${\displaystyle (*,\cdots ,*,h,*,\cdots )}$ where ${\displaystyle *}$ is "don't care" and ${\displaystyle h}$ is "heads". The volume of this space is again (obviously!) one-half.

The above is enough to build up a measure-preserving dynamical system, in its entirety. The sets of ${\displaystyle h}$ or ${\displaystyle t}$ occurring in the ${\displaystyle n}$'th place are called cylinder sets. The set of all possible intersections, unions and complements of the cylinder sets then forms the Borel ${\displaystyle \sigma }$-algebra ${\displaystyle {\mathcal {A}}}$ defined above. In formal terms, the cylinder sets form the base for a topology on the space ${\displaystyle X}$ of all possible infinite-length coin-flips. The measure ${\displaystyle \mu }$ has all of the common-sense properties one might hope for: the measure of a cylinder set with ${\displaystyle h}$ in the ${\displaystyle m}$'th position and ${\displaystyle t}$ in the ${\displaystyle k}$'th position is obviously 1/4, and so on. These common-sense properties persist for set-complement and set-union: everything except for ${\displaystyle h}$ and ${\displaystyle t}$ in locations ${\displaystyle m}$ and ${\displaystyle k}$ obviously has the volume of 3/4. Altogether, these form the axioms of a sigma-additive measure; measure-preserving dynamical systems always use sigma-additive measures. For coin flips, this measure is called the Bernoulli measure.

For the coin-flip process, the time-evolution operator ${\displaystyle T}$ is the shift operator that says "throw away the first coin-flip, and keep the rest". Formally, if ${\displaystyle (x_{1},x_{2},\cdots )}$ is a sequence of coin-flips, then ${\displaystyle T(x_{1},x_{2},\cdots )=(x_{2},x_{3},\cdots )}$. The measure is obviously shift-invariant: as long as we are talking about some set ${\displaystyle A\in {\mathcal {A}}}$ where the first coin-flip ${\displaystyle x_{1}=*}$ is the "don't care" value, then the volume ${\displaystyle \mu (A)}$ does not change: ${\displaystyle \mu (A)=\mu (T(A))}$. In order to avoid talking about the first coin-flip, it is easier to define ${\displaystyle T^{-1}}$ as inserting a "don't care" value into the first position: ${\displaystyle T^{-1}(x_{1},x_{2},\cdots )=(*,x_{1},x_{2},\cdots )}$. With this definition, one obviously has that ${\displaystyle \mu (T^{-1}(A))=\mu (A)}$ with no constraints on ${\displaystyle A}$. This is again an example of why ${\displaystyle T^{-1}}$ is used in the formal definitions.
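The shift-invariance of the Bernoulli measure can be checked empirically. The following sketch (a Monte Carlo simulation, with illustrative names of our own choosing) estimates the measure of the cylinder set "first flip is heads" and of its preimage under the shift, "second flip is heads"; both come out close to 1/2, as the discussion above predicts.

```python
import random

def shift(seq):
    """The shift operator T: throw away the first coin flip, keep the rest."""
    return seq[1:]

random.seed(0)
trials = 100_000
n_A = n_preimage = 0
for _ in range(trials):
    # A finite prefix of the infinite sequence suffices for this check.
    flips = tuple(random.choice("ht") for _ in range(10))
    if flips[0] == "h":            # flips lies in A = {heads in position 1}
        n_A += 1
    if shift(flips)[0] == "h":     # flips lies in T^{-1}(A): heads in position 2
        n_preimage += 1

print(n_A / trials, n_preimage / trials)  # both close to 0.5
```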

The above development takes a random process, the Bernoulli process, and converts it to a measure-preserving dynamical system ${\displaystyle (X,{\mathcal {A}},\mu ,T).}$ The same conversion (equivalence, isomorphism) can be applied to any stochastic process. Thus, an informal definition of ergodicity is that a sequence is ergodic if it visits all of ${\displaystyle X}$; such sequences are "typical" for the process. Another is that its statistical properties can be deduced from a single, sufficiently long, random sample of the process (thus uniformly sampling all of ${\displaystyle X}$), or that any collection of random samples from a process must represent the average statistical properties of the entire process (that is, samples drawn uniformly from ${\displaystyle X}$ are representative of ${\displaystyle X}$ as a whole.) In the present example, a sequence of coin flips, where half are heads, and half are tails, is a "typical" sequence.

There are several important points to be made about the Bernoulli process. If one writes 0 for tails and 1 for heads, one gets the set of all infinite strings of binary digits. These correspond to the base-two expansion of real numbers. Explicitly, given a sequence ${\displaystyle (x_{1},x_{2},\cdots )}$, the corresponding real number is

${\displaystyle y=\sum _{n=1}^{\infty }{\frac {x_{n}}{2^{n}}}}$

The statement that the Bernoulli process is ergodic is equivalent to the statement that the resulting real numbers are uniformly distributed on the unit interval. The set of all such strings can be written in a variety of ways: ${\displaystyle \{h,t\}^{\infty }=\{h,t\}^{\omega }=\{0,1\}^{\omega }=2^{\omega }=2^{\mathbb {N} }.}$ This set is the Cantor set, sometimes called the Cantor space to avoid confusion with the Cantor function

${\displaystyle C(x)=\sum _{n=1}^{\infty }{\frac {x_{n}}{3^{n}}}}$

In the end, these are all "the same thing".
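The correspondence between coin-flip sequences and uniformly distributed reals can be illustrated numerically. This sketch (our own illustrative code, not part of any standard library) truncates each sequence to 32 flips, applies the base-two expansion above, and checks that the resulting sample has the mean (1/2) and variance (1/12) of the uniform distribution on [0, 1]:

```python
import random

def flips_to_real(flips):
    """Base-two expansion y = sum_n x_n / 2^n of a (truncated) flip sequence."""
    return sum(x / 2**n for n, x in enumerate(flips, start=1))

random.seed(1)
samples = [flips_to_real([random.randint(0, 1) for _ in range(32)])
           for _ in range(50_000)]
mean = sum(samples) / len(samples)
var = sum((y - mean) ** 2 for y in samples) / len(samples)
# A uniform distribution on [0, 1] has mean 1/2 and variance 1/12.
print(round(mean, 3), round(var, 3))
```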

The Cantor set plays key roles in many branches of mathematics. In recreational mathematics, it underpins the period-doubling fractals; in analysis, it appears in a vast variety of theorems. A key one for stochastic processes is the Wold decomposition, which states that any stationary process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.

The Ornstein isomorphism theorem states that two Bernoulli schemes (a Bernoulli process with an N-sided, and possibly unfair, gaming die) with the same entropy are isomorphic; a large class of stationary stochastic processes turn out to be isomorphic to Bernoulli schemes. Other results include that every non-dissipative ergodic system is orbit-equivalent to the Markov odometer, sometimes called an "adding machine" because it looks like elementary-school addition, that is, taking a base-N digit sequence, adding one, and propagating the carry bits. The proof of equivalence is very abstract; understanding the result is not: by adding one at each time step, every possible state of the odometer is visited, until it rolls over, and starts again. Likewise, ergodic systems visit each state, uniformly, moving on to the next, until they have all been visited.

Systems that generate (infinite) sequences of N letters are studied by means of symbolic dynamics. Important special cases include subshifts of finite type and sofic systems.

### Ergodicity in physics

Physical systems can be split into three categories: classical mechanics, which describes machines with a finite number of moving parts, quantum mechanics, which describes the structure of atoms, and statistical mechanics, which describes gases, liquids, solids; this includes condensed matter physics. The case of classical mechanics is discussed in the next section, on ergodicity in geometry. As to quantum mechanics, although there is a conception of quantum chaos, there is no clear definition of ergodicity; what this might be is hotly debated. This section reviews ergodicity in statistical mechanics.

The above abstract definition of a volume is required as the appropriate setting for definitions of ergodicity in physics. Consider a container of liquid, or gas, or plasma, or other collection of atoms or particles. Each and every particle ${\displaystyle x_{i}}$ has a 3D position, and a 3D velocity, and is thus described by six numbers: a point in six-dimensional space ${\displaystyle \mathbb {R} ^{6}.}$ If there are ${\displaystyle N}$ of these particles in the system, a complete description requires ${\displaystyle 6N}$ numbers. Any one system is just a single point in ${\displaystyle \mathbb {R} ^{6N}.}$ The physical system is not all of ${\displaystyle \mathbb {R} ^{6N}}$, of course; if it's a box of width, height and length ${\displaystyle W\times H\times L}$ then a point is in ${\displaystyle (W\times H\times L\times \mathbb {R} ^{3})^{N}.}$ Nor can the velocities be infinite: they are weighted by some probability measure, for example the Boltzmann-Gibbs measure for a gas. Nonetheless, for ${\displaystyle N}$ close to Avogadro's number, this is obviously a very large space. This space is called the canonical ensemble.

A physical system is said to be ergodic if any representative point of the system eventually comes to visit the entire volume of the system. For the above example, this implies that any given atom not only visits every part of the box ${\displaystyle W\times H\times L}$ with uniform probability, but it does so with every possible velocity, with probability given by the Boltzmann distribution for that velocity (so, uniform with respect to that measure). The ergodic hypothesis states that physical systems actually are ergodic. Multiple time scales are at work: gases and liquids appear to be ergodic over short time scales. Ergodicity in a solid can be viewed in terms of the vibrational modes or phonons, as obviously the atoms in a solid do not exchange locations. Glasses present a challenge to the ergodic hypothesis; time scales are assumed to be in the millions of years, but results are contentious. Spin glasses present particular difficulties.

Formal mathematical proofs of ergodicity in statistical physics are hard to come by; most high-dimensional many-body systems are assumed to be ergodic, without mathematical proof. Exceptions include the dynamical billiards, which model billiard ball-type collisions of atoms in an ideal gas or plasma. The first hard-sphere ergodicity theorem was for Sinai's billiards, which considers two balls, one of them taken as being stationary, at the origin. As the second ball collides, it moves away; applying periodic boundary conditions, it then returns to collide again. By appeal to homogeneity, this return of the "second" ball can instead be taken to be "just some other atom" that has come into range, and is moving to collide with the atom at the origin (which can be taken to be just "any other atom".) This is one of the few formal proofs that exist; there are no equivalent statements e.g. for atoms in a liquid, interacting via van der Waals forces, even if it would be common sense to believe that such systems are ergodic (and mixing). More precise physical arguments can be made, though.

### Ergodicity in geometry

Ergodicity is a widespread phenomenon in the study of Riemannian manifolds. A quick sequence of examples, from simple to complicated, illustrates this point. All of the systems mentioned below have been proved to be ergodic via rigorous formal proofs. The irrational rotation of a circle is ergodic: the orbit of a point eventually comes arbitrarily close to every other point in the circle. Such rotations are a special case of the interval exchange map. The beta expansions of a number are ergodic: beta expansions of a real number are done not in base-N, but in base-${\displaystyle \beta }$ for some ${\displaystyle \beta >1.}$ The reflected version of the beta expansion is the tent map; there are a variety of other ergodic maps of the unit interval. Moving to two dimensions, the arithmetic billiards with irrational angles are ergodic. One can also take a flat rectangle, squash it, cut it and reassemble it; this is the previously-mentioned baker's map. Its points can be described by the set of bi-infinite strings in two letters, that is, extending to both the left and right; as such, it looks like two copies of the Bernoulli process. If one deforms sideways during the squashing, one obtains Arnold's cat map. In most ways, the cat map is prototypical of any other similar transformation.

For non-flat surfaces, one has that the geodesic flow of any negatively curved compact Riemann surface is ergodic. A surface is "compact" in the sense that it has finite surface area. The geodesic flow is a generalization of the idea of moving in a "straight line" on a curved surface: such straight lines are geodesics. One of the earliest cases studied is Hadamard's billiards, which describes geodesics on the Bolza surface, topologically equivalent to a donut with two holes. Ergodicity can be demonstrated informally, if one has a sharpie and some reasonable example of a two-holed donut: starting anywhere, in any direction, one attempts to draw a straight line; rulers are useful for this. It doesn't take all that long to discover that one is not coming back to the starting point. (Of course, crooked drawing can also account for this; that's why we have proofs.)

These results extend to higher dimensions. The geodesic flow for negatively curved compact Riemannian manifolds is ergodic. A classic example for this is the Anosov flow, which is the horocycle flow on a hyperbolic manifold. This can be seen to be a kind of Hopf fibration. Such flows commonly occur in classical mechanics, which is the study in physics of finite-dimensional moving machinery, e.g. the double pendulum and so forth. Classical mechanics is constructed on symplectic manifolds. The flows on such systems can be deconstructed into stable and unstable manifolds; as a general rule, when this is possible, chaotic motion results. That this is generic can be seen by noting that the cotangent bundle of a Riemannian manifold is (always) a symplectic manifold; the geodesic flow is given by a solution to the Hamilton-Jacobi equations for this manifold. In terms of the canonical coordinates ${\displaystyle (q,p)}$ on the cotangent manifold, the Hamiltonian or energy is given by

${\displaystyle H={\tfrac {1}{2}}\sum _{ij}g^{ij}(q)p_{i}p_{j}}$

with ${\displaystyle g^{ij}}$ the (inverse of the) metric tensor and ${\displaystyle p_{i}}$ the momentum. The resemblance to the kinetic energy ${\displaystyle E={\tfrac {1}{2}}mv^{2}}$ of a point particle is hardly accidental; this is the whole point of calling such things "energy". In this sense, chaotic behavior with ergodic orbits is a more-or-less generic phenomenon in large tracts of geometry.
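The resemblance can be made exact in a worked special case. For a single particle of mass ${\displaystyle m}$ in flat Euclidean space, the metric is ${\displaystyle g_{ij}=m\,\delta _{ij}}$ and the canonical momentum is ${\displaystyle p_{i}=mv_{i}}$, so the Hamiltonian above reduces term by term to the point-particle kinetic energy:

```latex
% Flat-space special case: g_{ij} = m \delta_{ij}, hence g^{ij} = \delta^{ij}/m,
% with canonical momentum p_i = m v_i.
H = \tfrac{1}{2}\sum_{ij} g^{ij}(q)\, p_i p_j
  = \tfrac{1}{2}\sum_{ij} \frac{\delta^{ij}}{m}\,(m v_i)(m v_j)
  = \tfrac{1}{2}\, m \sum_i v_i^2
  = \tfrac{1}{2}\, m v^2 .
```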

Ergodicity results have been provided in translation surfaces, hyperbolic groups and systolic geometry. Techniques include the study of ergodic flows, the Hopf decomposition, and the Ambrose-Kakutani-Krengel-Kubo theorem. An important class of systems are the Axiom A systems.

A number of both classification and "anti-classification" results have been obtained. The Ornstein isomorphism theorem applies here as well; again, it states that most of these systems are isomorphic to some Bernoulli scheme. This rather neatly ties these systems back into the definition of ergodicity given for a stochastic process, in the previous section. The anti-classification results state that there are uncountably many inequivalent ergodic measure-preserving dynamical systems. This is perhaps not entirely a surprise, as one can use points in the Cantor set to construct similar-but-different systems. See measure-preserving dynamical system for a brief survey of some of the anti-classification results.

## Definition for discrete-time systems

### Formal definition

Let ${\displaystyle (X,{\mathcal {B}})}$ be a measurable space. If ${\displaystyle T}$ is a measurable function from ${\displaystyle X}$ to itself and ${\displaystyle \mu }$ a probability measure on ${\displaystyle (X,{\mathcal {B}})}$ then we say that ${\displaystyle T}$ is ${\displaystyle \mu }$-ergodic or ${\displaystyle \mu }$ is an ergodic measure for ${\displaystyle T}$ if ${\displaystyle T}$ preserves ${\displaystyle \mu }$ and the following condition holds:

For any ${\displaystyle A\in {\mathcal {B}}}$ such that ${\displaystyle T^{-1}(A)\subset A}$ either ${\displaystyle \mu (A)=0}$ or ${\displaystyle \mu (A)=1}$.

In other words, there are no nontrivial ${\displaystyle T}$-invariant subsets up to measure 0 (with respect to ${\displaystyle \mu }$): every invariant subset is either null or of full measure. Recall that ${\displaystyle T}$ preserving ${\displaystyle \mu }$ (or ${\displaystyle \mu }$ being ${\displaystyle T}$-invariant) means that ${\displaystyle \mu (T^{-1}(A))=\mu (A)}$ for all ${\displaystyle A\in {\mathcal {B}}}$ (see also Measure-preserving dynamical system).

### Examples

The simplest example is when ${\displaystyle X}$ is a finite set and ${\displaystyle \mu }$ the counting measure. Then a self-map of ${\displaystyle X}$ preserves ${\displaystyle \mu }$ if and only if it is a bijection, and it is ergodic if and only if ${\displaystyle T}$ has only one orbit (that is, for every ${\displaystyle x,y\in X}$ there exists ${\displaystyle k\in \mathbb {N} }$ such that ${\displaystyle y=T^{k}(x)}$). For example, if ${\displaystyle X=\{1,2,\ldots ,n\}}$ then the cycle ${\displaystyle (1\,2\,\cdots \,n)}$ is ergodic, but the permutation ${\displaystyle (1\,2)(3\,4\,\cdots \,n)}$ is not (it has the two invariant subsets ${\displaystyle \{1,2\}}$ and ${\displaystyle \{3,4,\ldots ,n\}}$).
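The finite case above is easy to check mechanically: a bijection of a finite set is ergodic for counting measure exactly when it has a single orbit. The following sketch (our own illustrative helper, not standard library code) counts orbits for the two permutations from the example, with ${\displaystyle n=6}$:

```python
def is_ergodic_permutation(perm):
    """A bijection of a finite set with counting measure is ergodic
    iff it has a single orbit.  `perm` maps each element to its image."""
    unvisited = set(perm)
    orbits = 0
    while unvisited:
        orbits += 1
        x = next(iter(unvisited))
        while x in unvisited:      # walk the orbit of x until it closes up
            unvisited.discard(x)
            x = perm[x]
    return orbits == 1

n = 6
cycle = {i: i % n + 1 for i in range(1, n + 1)}       # the cycle (1 2 3 4 5 6)
split = {1: 2, 2: 1, 3: 4, 4: 5, 5: 6, 6: 3}          # (1 2)(3 4 5 6)
print(is_ergodic_permutation(cycle), is_ergodic_permutation(split))  # True False
```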

### Equivalent formulations

The definition given above admits the following immediate reformulations:

• for every ${\displaystyle A\in {\mathcal {B}}}$ with ${\displaystyle \mu (T^{-1}(A)\bigtriangleup A)=0}$ we have ${\displaystyle \mu (A)=0}$ or ${\displaystyle \mu (A)=1\,}$ (where ${\displaystyle \bigtriangleup }$ denotes the symmetric difference);
• for every ${\displaystyle A\in {\mathcal {B}}}$ with positive measure we have ${\displaystyle \mu \left(\bigcup _{n=1}^{\infty }T^{-n}(A)\right)=1}$;
• for every two sets ${\displaystyle A,B\in {\mathcal {B}}}$ of positive measure, there exists ${\displaystyle n>0}$ such that ${\displaystyle \mu ((T^{-n}(A))\cap B)>0}$;
• every measurable function ${\displaystyle f:X\to \mathbb {R} }$ with ${\displaystyle f\circ T=f}$ is constant on a subset of full measure.

Importantly for applications, the condition in the last characterisation can be restricted to square-integrable functions only:

• If ${\displaystyle f\in L^{2}(X,\mu )}$ and ${\displaystyle f\circ T=f}$ then ${\displaystyle f}$ is constant almost everywhere.

### Further examples

#### Bernoulli shifts and subshifts

Let ${\displaystyle S}$ be a finite set and ${\displaystyle X=S^{\mathbb {Z} }}$ with ${\displaystyle \mu }$ the product measure (each factor ${\displaystyle S}$ being endowed with its normalised counting measure). Then the shift operator ${\displaystyle T}$ defined by ${\displaystyle T\left((s_{k})_{k\in \mathbb {Z} }\right)=(s_{k+1})_{k\in \mathbb {Z} }}$ is ${\displaystyle \mu }$-ergodic.[1]

There are many more ergodic measures for the shift map ${\displaystyle T}$ on ${\displaystyle X}$. Periodic sequences give finitely supported measures. More interestingly, there are infinitely-supported ones, such as measures supported on subshifts of finite type.

#### Irrational rotations

Let ${\displaystyle X}$ be the unit circle ${\displaystyle \{z\in \mathbb {C} ,\,|z|=1\}}$, with its Lebesgue measure ${\displaystyle \mu }$. For any ${\displaystyle \theta \in \mathbb {R} }$ the rotation of ${\displaystyle X}$ by angle ${\displaystyle \theta }$ is given by ${\displaystyle R_{\theta }(z)=e^{2i\pi \theta }z}$. If ${\displaystyle \theta \in \mathbb {Q} }$ then ${\displaystyle R_{\theta }}$ is not ergodic for the Lebesgue measure, as it has infinitely many finite orbits. On the other hand, if ${\displaystyle \theta }$ is irrational then ${\displaystyle R_{\theta }}$ is ergodic.[2]
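A consequence of ergodicity of the irrational rotation is that orbits equidistribute: the fraction of time an orbit spends in an arc equals the arc's length. This can be sketched numerically (writing the rotation additively on [0, 1), with an arbitrarily chosen irrational angle):

```python
import math

# Irrational rotation written additively: x -> x + theta (mod 1).
# For irrational theta, the time spent in an arc equals its length.
theta = math.sqrt(2) - 1          # an irrational rotation angle
x, hits, steps = 0.0, 0, 200_000
for _ in range(steps):
    x = (x + theta) % 1.0
    if x < 0.25:                  # the arc [0, 1/4) has length 1/4
        hits += 1
print(hits / steps)  # close to 0.25
```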

#### Arnold's cat map

Let ${\displaystyle X=\mathbb {R} ^{2}/\mathbb {Z} ^{2}}$ be the 2-torus. Then any element ${\displaystyle g\in \mathrm {SL} _{2}(\mathbb {Z} )}$ defines a self-map of ${\displaystyle X}$ since ${\displaystyle g(\mathbb {Z} ^{2})=\mathbb {Z} ^{2}}$. When ${\displaystyle g=\left({\begin{array}{cc}2&1\\1&1\end{array}}\right)}$ one obtains the so-called Arnold's cat map, which is ergodic for the Lebesgue measure on the torus.
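Because ${\displaystyle \det g=1}$, the cat map preserves area even as it stretches and folds the torus. A quick finite sketch of this (restricting the map to a rational grid, which it permutes; the grid size is an arbitrary choice):

```python
def cat_map(p, q, n):
    """Arnold's cat map (x, y) -> (2x + y, x + y) on the discrete torus (Z/nZ)^2."""
    return (2 * p + q) % n, (p + q) % n

n = 50
grid = {(p, q) for p in range(n) for q in range(n)}
image = {cat_map(p, q, n) for p, q in grid}
# Since the matrix has determinant 1, the map is a bijection of the grid:
# it preserves the counting measure despite the stretching and folding.
print(len(image) == n * n)  # True
```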

### Ergodic theorems

If ${\displaystyle \mu }$ is a probability measure on a space ${\displaystyle X}$ which is ergodic for a transformation ${\displaystyle T}$, the pointwise ergodic theorem of G. Birkhoff states that for every ${\displaystyle \mu }$-integrable function ${\displaystyle f:X\to \mathbb {R} }$, and for ${\displaystyle \mu }$-almost every point ${\displaystyle x\in X}$, the time average on the orbit of ${\displaystyle x}$ converges to the space average of ${\displaystyle f}$. Formally this means that

${\displaystyle \lim _{k\to +\infty }\left({\frac {1}{k+1}}\sum _{i=0}^{k}f(T^{i}(x))\right)=\int _{X}fd\mu .}$

The mean ergodic theorem of J. von Neumann is a similar, weaker statement about averaged translates of square-integrable functions.
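Birkhoff's theorem can be illustrated numerically. The logistic map ${\displaystyle T(x)=4x(1-x)}$ is a standard example of a map ergodic for the measure ${\displaystyle d\mu =dx/(\pi {\sqrt {x(1-x)}})}$, whose space average of ${\displaystyle f(x)=x}$ is exactly 1/2; the sketch below (with an arbitrarily chosen starting point) checks that the time average along an orbit comes out close to that value:

```python
# Time average of f(x) = x along an orbit of the logistic map T(x) = 4x(1-x),
# which is ergodic for d(mu) = dx / (pi * sqrt(x(1-x))).  The space average
# of x under that measure is 1/2, so Birkhoff's theorem predicts the time
# average converges to 1/2 for almost every starting point.
x = 0.1234          # an arbitrary "typical" starting point
total, steps = 0.0, 200_000
for _ in range(steps):
    x = 4.0 * x * (1.0 - x)
    total += x
print(total / steps)  # close to 0.5
```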

### Related properties

#### Dense orbits

An immediate consequence of the definition of ergodicity is that if ${\displaystyle X}$ is a topological space and ${\displaystyle {\mathcal {B}}}$ is the σ-algebra of Borel sets, then whenever ${\displaystyle T}$ is ${\displaystyle \mu }$-ergodic, ${\displaystyle \mu }$-almost every orbit of ${\displaystyle T}$ is dense in the support of ${\displaystyle \mu }$.

This is not an equivalence: for a transformation which is not uniquely ergodic but which has an ergodic measure ${\displaystyle \mu _{0}}$ with full support, for any other ergodic measure ${\displaystyle \mu _{1}}$ the average ${\textstyle {\frac {1}{2}}(\mu _{0}+\mu _{1})}$ is not ergodic for ${\displaystyle T}$, yet almost every orbit is dense in its support. Explicit examples can be constructed with shift-invariant measures.[3]

#### Mixing

A transformation ${\displaystyle T}$ of a probability measure space ${\displaystyle (X,\mu )}$ is said to be mixing for the measure ${\displaystyle \mu }$ if for any measurable sets ${\displaystyle A,B\subset X}$ the following holds:

${\displaystyle \lim _{n\to +\infty }\mu (T^{-n}A\cap B)=\mu (A)\mu (B)}$
It is immediate that a mixing transformation is also ergodic (taking ${\displaystyle A}$ to be a ${\displaystyle T}$-stable subset and ${\displaystyle B}$ its complement). The converse is not true: for example, a rotation by an irrational angle on the circle (which is ergodic per the examples above) is not mixing (for a sufficiently small interval ${\displaystyle A}$, the successive images ${\displaystyle T^{-n}A}$ fail to intersect ${\displaystyle A}$ most of the time). Bernoulli shifts are mixing, and so is Arnold's cat map.
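For the Bernoulli (coin-flip) shift, the mixing limit above can be checked directly by Monte Carlo. Taking ${\displaystyle A=B}$ to be the set of sequences starting with heads, independence of the flips gives ${\displaystyle \mu (T^{-n}A\cap B)=1/4=\mu (A)\mu (B)}$ already for every ${\displaystyle n\geq 1}$; the sketch below estimates this probability by sampling:

```python
import random

# Mixing for the Bernoulli shift: with A = B = {sequences starting with
# heads}, mu(A) = mu(B) = 1/2, and for n >= 1 independence gives
# mu(T^{-n}(A) ∩ B) = P(flip n+1 = h and flip 1 = h) = 1/4 = mu(A)mu(B).
random.seed(2)
n, trials, count = 5, 100_000, 0
for _ in range(trials):
    flips = [random.choice("ht") for _ in range(n + 1)]
    if flips[n] == "h" and flips[0] == "h":   # in T^{-n}(A) and in B
        count += 1
print(count / trials)  # close to 0.25
```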

This notion of mixing is sometimes called strong mixing, as opposed to weak mixing which means that

${\displaystyle \lim _{n\to +\infty }{\frac {1}{n}}\sum _{k=1}^{n}\left|\mu (T^{-k}A\cap B)-\mu (A)\mu (B)\right|=0}$

#### Proper ergodicity

The transformation ${\displaystyle T}$ is said to be properly ergodic if it does not have an orbit of full measure. In the discrete case this means that the measure ${\displaystyle \mu }$ is not supported on a finite orbit of ${\displaystyle T}$.

## Definition for continuous-time dynamical systems

The definition is essentially the same for continuous-time dynamical systems as for a single transformation. Let ${\displaystyle (X,{\mathcal {B}})}$ be a measurable space. A continuous-time system is given by a family ${\displaystyle (T_{t})_{t\in \mathbb {R} _{+}}}$ of measurable functions from ${\displaystyle X}$ to itself, such that for any ${\displaystyle t,s\in \mathbb {R} _{+}}$ the relation ${\displaystyle T_{s+t}=T_{s}\circ T_{t}}$ holds (usually it is also asked that the orbit map ${\displaystyle \mathbb {R} _{+}\times X\to X}$ be measurable). If ${\displaystyle \mu }$ is a probability measure on ${\displaystyle (X,{\mathcal {B}})}$ then we say that ${\displaystyle T_{t}}$ is ${\displaystyle \mu }$-ergodic, or that ${\displaystyle \mu }$ is an ergodic measure for the flow, if each ${\displaystyle T_{t}}$ preserves ${\displaystyle \mu }$ and the following condition holds:

For any ${\displaystyle A\in {\mathcal {B}}}$, if for all ${\displaystyle t\in \mathbb {R} _{+}}$ we have ${\displaystyle T_{t}^{-1}(A)\subset A}$ then either ${\displaystyle \mu (A)=0}$ or ${\displaystyle \mu (A)=1}$.

### Examples

As in the discrete case the simplest example is that of a transitive action, for instance the action on the circle given by ${\displaystyle T_{t}(z)=e^{2i\pi t}z}$ is ergodic for Lebesgue measure.

An example with infinitely many orbits is given by the flow along an irrational slope on the torus: let ${\displaystyle X=\mathbb {S} ^{1}\times \mathbb {S} ^{1}}$ and ${\displaystyle \alpha \in \mathbb {R} }$. Let ${\displaystyle T_{t}(z_{1},z_{2})=(e^{2i\pi t}z_{1},e^{2\alpha i\pi t}z_{2})}$; then if ${\displaystyle \alpha \not \in \mathbb {Q} }$ this is ergodic for the Lebesgue measure.
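Ergodicity of the linear flow can be illustrated numerically by comparing a time average along one orbit with the corresponding space average. The slope ${\displaystyle \alpha ={\sqrt {3}}}$ and the observable ${\displaystyle f(x,y)=\sin ^{2}(2\pi x)\cos ^{2}(2\pi y)}$ (whose integral over the unit square is exactly 1/4) are illustrative choices for this sketch:

```python
import math

alpha = math.sqrt(3)    # irrational slope (illustrative choice)

def f(x, y):
    # smooth observable on the torus with space average exactly 1/4
    return math.sin(2 * math.pi * x) ** 2 * math.cos(2 * math.pi * y) ** 2

dt, T = 0.001, 500.0
n_steps = int(T / dt)
# Riemann-sum approximation of (1/T) * integral_0^T f(T_t p) dt along the
# orbit of the point p = (0, 0) under T_t(x, y) = (x + t, y + alpha*t) mod 1
time_avg = sum(f((k * dt) % 1.0, (alpha * k * dt) % 1.0)
               for k in range(n_steps)) / n_steps
space_avg = 0.25
print(time_avg, space_avg)
```

For an irrational slope the time average approaches the space average as ${\displaystyle T\to \infty }$; with a rational slope the orbit is periodic and the two averages generally differ.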

### Ergodic flows

Further examples of ergodic flows are:

• billiards in convex Euclidean domains;
• the geodesic flow of a negatively curved Riemannian manifold of finite volume (for the normalised volume measure);
• the horocycle flow on a hyperbolic manifold of finite volume (for the normalised volume measure).

## Ergodicity in compact metric spaces

If ${\displaystyle X}$ is a compact metric space it is naturally endowed with the σ-algebra of Borel sets. The additional structure coming from the topology then allows a much more detailed theory for ergodic transformations and measures on ${\displaystyle X}$.

### Functional analysis interpretation

A very powerful alternate definition of ergodic measures can be given using the theory of Banach spaces. Radon measures on ${\displaystyle X}$ form a Banach space of which the set ${\displaystyle {\mathcal {P}}(X)}$ of probability measures on ${\displaystyle X}$ is a convex subset. Given a continuous transformation ${\displaystyle T}$ of ${\displaystyle X}$ the subset ${\displaystyle {\mathcal {P}}(X)^{T}}$ of ${\displaystyle T}$-invariant measures is a closed convex subset, and a measure is ergodic for ${\displaystyle T}$ if and only if it is an extreme point of this convex.[4]

#### Existence of ergodic measures

In the setting above it follows from the Banach-Alaoglu theorem (which gives weak-* compactness of ${\displaystyle {\mathcal {P}}(X)^{T}}$) together with the Krein-Milman theorem that there always exist extreme points in ${\displaystyle {\mathcal {P}}(X)^{T}}$. Hence a continuous transformation of a compact metric space always admits ergodic measures.

#### Ergodic decomposition

In general an invariant measure need not be ergodic, but as a consequence of Choquet theory it can always be expressed as the barycenter of a probability measure on the set of ergodic measures. This is referred to as the ergodic decomposition of the measure.[5]

#### Example

In the case of ${\displaystyle X=\{1,\ldots ,n\}}$ and ${\displaystyle T=(1\,2)(3\,4\,\cdots n)}$ the normalised counting measure is not ergodic. The ergodic measures for ${\displaystyle T}$ are the uniform measures ${\displaystyle \mu _{1},\mu _{2}}$ supported on the subsets ${\displaystyle \{1,2\}}$ and ${\displaystyle \{3,\ldots ,n\}}$, and every ${\displaystyle T}$-invariant probability measure can be written in the form ${\displaystyle t\mu _{1}+(1-t)\mu _{2}}$ for some ${\displaystyle t\in [0,1]}$. In particular ${\textstyle {\frac {2}{n}}\mu _{1}+{\frac {n-2}{n}}\mu _{2}}$ is the ergodic decomposition of the normalised counting measure.
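The decomposition above can be verified directly. The sketch below takes ${\displaystyle n=6}$ for concreteness and represents measures on ${\displaystyle \{1,\ldots ,6\}}$ as dictionaries:

```python
n = 6
# the permutation T = (1 2)(3 4 5 6) on {1, ..., 6}
T = {1: 2, 2: 1, 3: 4, 4: 5, 5: 6, 6: 3}

def is_invariant(mu):
    # for a bijection T of a finite set, mu is T-invariant
    # iff mu({T(s)}) = mu({s}) for every point s
    return all(abs(mu[T[s]] - mu[s]) < 1e-12 for s in T)

mu1 = {s: (0.5 if s in (1, 2) else 0.0) for s in range(1, n + 1)}   # uniform on {1, 2}
mu2 = {s: (0.25 if s >= 3 else 0.0) for s in range(1, n + 1)}       # uniform on {3, ..., 6}
counting = {s: 1.0 / n for s in range(1, n + 1)}                    # normalised counting measure

# the barycentric combination from the text: (2/n) mu1 + ((n-2)/n) mu2
t = 2.0 / n
mix = {s: t * mu1[s] + (1 - t) * mu2[s] for s in range(1, n + 1)}
print(all(abs(mix[s] - counting[s]) < 1e-12 for s in counting))
```

The two uniform measures are invariant (their supports are single orbits of ${\displaystyle T}$), and their barycentric combination reproduces the normalised counting measure exactly.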

#### Continuous systems

Everything in this section transfers verbatim to continuous actions of ${\displaystyle \mathbb {R} }$ or ${\displaystyle \mathbb {R} _{+}}$ on compact metric spaces.

### Unique ergodicity

The transformation ${\displaystyle T}$ is said to be uniquely ergodic if there is a unique Borel probability measure ${\displaystyle \mu }$ on ${\displaystyle X}$ which is ergodic for ${\displaystyle T}$.

In the examples considered above, irrational rotations of the circle are uniquely ergodic;[6] shift maps are not.
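Unique ergodicity is stronger than ergodicity in a concrete way: Birkhoff time averages converge to the space average for *every* starting point, not merely almost every one. A small numerical illustration for an irrational rotation (the angle, observable, and starting points below are illustrative choices):

```python
import math

alpha = math.sqrt(2) % 1.0              # irrational rotation angle (illustrative)

def f(x):
    # observable on the circle with integral exactly 1/2
    return math.cos(2 * math.pi * x) ** 2

N = 100_000

def birkhoff_average(x0):
    """Time average of f over the first N points of the orbit of x0."""
    x, total = x0, 0.0
    for _ in range(N):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / N

# unique ergodicity: the limit is the space average 1/2 for every
# starting point, not just for almost every one
averages = [birkhoff_average(x0) for x0 in (0.0, 0.1234, 0.7)]
print(averages)
```

For a map that is ergodic but not uniquely ergodic (a shift map, say), the analogous averages would still agree for almost every point but fail on a null set of exceptional orbits.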

## Probabilistic interpretation: ergodic processes

If ${\displaystyle (X_{n})_{n\geq 1}}$ is a discrete-time stochastic process taking values in a space ${\displaystyle \Omega }$, it is said to be ergodic if the joint distribution of the variables on ${\displaystyle \Omega ^{\mathbb {N} }}$ is invariant under the shift map ${\displaystyle (x_{n})_{n\geq 1}\mapsto (x_{n+1})_{n\geq 1}}$ and is an ergodic measure for it. This is a particular case of the notions discussed above.

The simplest case is that of an independent and identically distributed process which corresponds to the shift map described above. Another important case is that of a Markov chain which is discussed in detail below.

A similar interpretation holds for continuous-time stochastic processes though the construction of the measurable structure of the action is more complicated.

## Ergodicity of Markov chains

### The dynamical system associated with a Markov chain

Let ${\displaystyle S}$ be a finite set. A Markov chain on ${\displaystyle S}$ is defined by a matrix ${\displaystyle P\in [0,1]^{S\times S}}$, where ${\displaystyle P(s_{1},s_{2})}$ is the transition probability from ${\displaystyle s_{1}}$ to ${\displaystyle s_{2}}$, so that ${\displaystyle \sum _{s'\in S}P(s,s')=1}$ for every ${\displaystyle s\in S}$. A stationary measure for ${\displaystyle P}$ is a probability measure ${\displaystyle \nu }$ on ${\displaystyle S}$ such that ${\displaystyle \nu P=\nu }$; that is, ${\displaystyle \sum _{s'\in S}\nu (s')P(s',s)=\nu (s)}$ for all ${\displaystyle s\in S}$.

Using this data we can define a probability measure ${\displaystyle \mu _{\nu }}$ on the set ${\displaystyle X=S^{\mathbb {Z} }}$ with its product σ-algebra by giving the measures of the cylinders as follows:

${\displaystyle \mu _{\nu }(\cdots \times S\times \{(s_{n},\ldots ,s_{m})\}\times S\times \cdots )=\nu (s_{n})P(s_{n},s_{n+1})\cdots P(s_{m-1},s_{m}).}$
Stationarity of ${\displaystyle \nu }$ then means that the measure ${\displaystyle \mu _{\nu }}$ is invariant under the shift map ${\displaystyle T\left((s_{k})_{k\in \mathbb {Z} }\right)=(s_{k+1})_{k\in \mathbb {Z} }}$.

### Criterion for ergodicity

The measure ${\displaystyle \mu _{\nu }}$ is always ergodic for the shift map if the associated Markov chain is irreducible (any state can be reached with positive probability from any other state in a finite number of steps).[7]

The hypotheses above imply that there is a unique stationary measure for the Markov chain. In terms of the matrix ${\displaystyle P}$ a sufficient condition for this is that 1 be a simple eigenvalue of the matrix ${\displaystyle P}$ and all other eigenvalues of ${\displaystyle P}$ (in ${\displaystyle \mathbb {C} }$) are of modulus <1.
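Both the stationary measure and the spectral condition are easy to check numerically. The sketch below uses an illustrative 3-state irreducible, aperiodic chain (the matrix ${\displaystyle P}$ is a choice made here, not taken from the text):

```python
import numpy as np

# an illustrative irreducible, aperiodic chain on S = {0, 1, 2}; rows sum to 1
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# the spectral condition: 1 should be a simple eigenvalue of P and every
# other eigenvalue should have modulus < 1
eigvals = np.linalg.eigvals(P)
print(sorted(abs(v) for v in eigvals))    # moduli: here 0.2, 0.3 and 1.0

# a stationary measure nu solves nu P = nu, i.e. nu is a left eigenvector of
# P for the eigenvalue 1, equivalently a right eigenvector of P transposed
vals, vecs = np.linalg.eig(P.T)
i = int(np.argmin(abs(vals - 1.0)))
nu = np.real(vecs[:, i])
nu = nu / nu.sum()                        # normalise to a probability vector
print(nu)                                 # the stationary distribution
print(np.allclose(nu @ P, nu))            # stationarity: nu P = nu
```

For this matrix the eigenvalues are 1, 0.3, and 0.2, so the sufficient condition holds and the stationary measure ${\displaystyle \nu =(9/28,\,12/28,\,7/28)}$ is unique.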

Note that in probability theory the Markov chain is called ergodic if in addition each state is aperiodic (the times where the return probability is positive are not multiples of a single integer >1). This is not necessary for the invariant measure to be ergodic; hence the notions of "ergodicity" for a Markov chain and the associated shift-invariant measure are different (the one for the chain is strictly stronger).[8]

Moreover the criterion is an "if and only if" if all communicating classes in the chain are recurrent and we consider all stationary measures.

### Examples

#### Counting measure

If ${\displaystyle P(s,s')=1/|S|}$ for all ${\displaystyle s,s'\in S}$ then the stationary measure is the uniform (normalised counting) measure on ${\displaystyle S}$, and the measure ${\displaystyle \mu _{\nu }}$ is the product of uniform measures. The Markov chain is ergodic, so the shift example from above is a special case of the criterion.

#### Non-ergodic Markov chains

Markov chains with more than one recurrent communicating class are not ergodic, and this can be seen immediately as follows. If ${\displaystyle S_{1},S_{2}\subsetneq S}$ are two distinct recurrent communicating classes there are nonzero stationary measures ${\displaystyle \nu _{1},\nu _{2}}$ supported on ${\displaystyle S_{1},S_{2}}$ respectively, and the subsets ${\displaystyle S_{1}^{\mathbb {Z} }}$ and ${\displaystyle S_{2}^{\mathbb {Z} }}$ are both shift-invariant and of measure 1/2 for the invariant probability measure ${\displaystyle {\frac {1}{2}}(\nu _{1}+\nu _{2})}$. A very simple example of that is the chain on ${\displaystyle S=\{1,2\}}$ given by the identity matrix ${\textstyle \left({\begin{array}{cc}1&0\\0&1\end{array}}\right)}$ (both states are absorbing).

#### A periodic chain

The Markov chain on ${\displaystyle S=\{1,2\}}$ given by the matrix ${\textstyle \left({\begin{array}{cc}0&1\\1&0\end{array}}\right)}$ is irreducible but periodic. Thus it is not ergodic in the Markov chain sense, though the associated measure ${\displaystyle \mu }$ on ${\displaystyle \{1,2\}^{\mathbb {Z} }}$ is ergodic for the shift map. However, the shift is not mixing for this measure, as for the sets

${\displaystyle A=\cdots \times \{1,2\}\times 1\times \{1,2\}\times 1\times \{1,2\}\cdots }$
and
${\displaystyle B=\cdots \times \{1,2\}\times 2\times \{1,2\}\times 2\times \{1,2\}\cdots }$
we have ${\displaystyle \mu (A)=1/2=\mu (B)}$ but
${\displaystyle \mu (T^{-n}A\cap B)={\begin{cases}1/2{\text{ if }}n{\text{ is odd}}\\0{\text{ if }}n{\text{ is even.}}\end{cases}}}$
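This computation can be reproduced mechanically. The measure ${\displaystyle \mu }$ is supported on just two sequences, the alternating sequence with symbol 1 at even positions and its shift, each of mass 1/2; the sketch below evaluates ${\displaystyle \mu (T^{-n}A\cap B)}$ on that support:

```python
# the invariant measure mu is supported on the two alternating sequences:
# omega1 has symbol 1 at even positions, omega2 is its shift; mass 1/2 each
omega1 = lambda k: 1 if k % 2 == 0 else 2
omega2 = lambda k: 2 if k % 2 == 0 else 1

# on the support, membership in the cylinders A and B is decided by the
# coordinate at position 0, since the sequences alternate deterministically
in_A = lambda om: om(0) == 1
in_B = lambda om: om(0) == 2

def mu_shifted_intersection(n):
    """mu(T^{-n}A intersected with B), summing the mass of supported points."""
    total = 0.0
    for om in (omega1, omega2):
        shifted = lambda k, om=om: om(k + n)   # the sequence T^n(om)
        if in_A(shifted) and in_B(om):
            total += 0.5
    return total

vals = [mu_shifted_intersection(n) for n in range(1, 9)]
print(vals)   # [0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.5, 0.0]
```

The values oscillate between 1/2 and 0 exactly as in the displayed formula, so ${\displaystyle \mu (T^{-n}A\cap B)}$ never converges to ${\displaystyle \mu (A)\mu (B)=1/4}$ and the shift is not mixing here.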

## Generalisations

### Ergodic group actions

The definition of ergodicity also makes sense for group actions. The classical theory (for invertible transformations) corresponds to actions of ${\displaystyle \mathbb {Z} }$ or ${\displaystyle \mathbb {R} }$.

### Quasi-invariant measures

For non-abelian groups there might not be invariant measures even on compact metric spaces. However the definition of ergodicity carries over unchanged if one replaces invariant measures by quasi-invariant measures.

Important examples are the action of a semisimple Lie group (or a lattice therein) on its Furstenberg boundary.

### Ergodic relations

A measurable equivalence relation is said to be ergodic if all saturated subsets are either null or conull.

## Historical development

The idea of ergodicity was born in the field of thermodynamics, where it was necessary to relate the individual states of gas molecules to the temperature of a gas as a whole and its time evolution. In order to do this, it was necessary to state what exactly it means for gases to mix well together, so that thermodynamic equilibrium could be defined with mathematical rigor. Once the theory was well developed in physics, it was rapidly formalized and extended, so that ergodic theory has long been an independent area of mathematics in itself. As part of that progression, several slightly different definitions of ergodicity, along with multitudes of interpretations of the concept in different fields, coexist.

For example, in classical physics the term implies that a system satisfies the ergodic hypothesis of thermodynamics,[9] the relevant state space being position and momentum space. In dynamical systems theory the state space is usually taken to be a more general phase space. On the other hand, in coding theory the state space is often discrete in both time and state, with less concomitant structure. In all those fields the ideas of time average and ensemble average can also carry extra baggage, as is the case with the many possible thermodynamically relevant partition functions used to define ensemble averages in physics. As such, the measure-theoretic formalization of the concept also serves as a unifying discipline.

## Etymology

The term ergodic is commonly thought to derive from the Greek words ἔργον (ergon: "work") and ὁδός (hodos: "path", "way"), as chosen by Ludwig Boltzmann while he was working on a problem in statistical mechanics.[10] At the same time it is also claimed to be a derivation of ergomonode, coined by Boltzmann in a relatively obscure paper from 1884. The etymology appears to be contested in other ways as well.[11]

## Notes

1. ^ Walters 1982, p. 32.
2. ^ Walters 1982, p. 29.
3. ^ "Example of a measure-preserving system with dense orbits that is not ergodic". MathOverflow. September 1, 2011. Retrieved 2020.
4. ^ Walters 1982, p. 152.
5. ^ Walters 1982, p. 153.
6. ^ Walters 1982, p. 159.
7. ^ Walters 1982, p. 42.
8. ^ "Different uses of the word "ergodic"". MathOverflow. September 4, 2011. Retrieved 2020.
9. ^ Feller, William (1 August 2008). An Introduction to Probability Theory and Its Applications (2nd ed.). Wiley India Pvt. Limited. p. 271. ISBN 978-81-265-1806-7.
10. ^ Walters 1982, §0.1, p. 2
11. ^ Gallavotti, Giovanni (1995). "Ergodicity, ensembles, irreversibility in Boltzmann and beyond". Journal of Statistical Physics. 78 (5-6): 1571-1589. arXiv:chao-dyn/9403004. Bibcode:1995JSP....78.1571G. doi:10.1007/BF02180143. S2CID 17605281.