The study of Markov chains has surged in the last few decades, driven by applications in theoretical mathematics and computer science as well as in applied areas such as statistical physics, mathematical biology, economics, and statistics. Nowadays, Markov chains are considered to be among the most important objects in probability theory.
A Markov chain is a stochastic process with the property that, conditioned on its present state, its future states are independent of the past states. Under mild assumptions, a Markov chain on a finite state space converges to a unique stationary distribution, and of particular importance is the rate of this convergence: the mixing time of a chain is the number of steps needed for its distribution to get reasonably close to its limit. In the early 1980s, pioneering works of Aldous and of Diaconis brought the concept of mixing times to a wider audience, using card shuffling as a central example. Since then, both the field and its interactions with computer science and statistical physics have grown tremendously. By now there are many methods for analyzing the mixing time of a Markov chain, ranging from coupling techniques to chain comparisons to log-Sobolev inequalities, to name a few. We will survey these methods and highlight their applications, with a special emphasis on Markov chains that model the evolution of classical interacting particle systems such as the Ising model.
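To make the notion of mixing time concrete, here is a minimal sketch (not from the source) for one standard toy example: the lazy simple random walk on a cycle of n sites. The chain is irreducible and aperiodic, so it converges to the uniform stationary distribution; the code tracks the total-variation distance to uniform and reports the first step at which it drops below 1/4, a conventional threshold for "reasonably close". The chain, the starting state, and the threshold here are illustrative choices, not taken from the text.

```python
import numpy as np

n = 16                               # number of sites on the cycle (illustrative choice)
P = np.zeros((n, n))                 # transition matrix of the lazy walk
for i in range(n):
    P[i, i] = 0.5                    # laziness: stay put with probability 1/2
    P[i, (i - 1) % n] = 0.25         # step left with probability 1/4
    P[i, (i + 1) % n] = 0.25         # step right with probability 1/4

pi = np.full(n, 1.0 / n)             # uniform stationary distribution
mu = np.zeros(n)
mu[0] = 1.0                          # start deterministically at site 0

# Total-variation distance: TV(mu, pi) = (1/2) * sum_x |mu(x) - pi(x)|
t = 0
tv = 0.5 * np.abs(mu - pi).sum()
while tv > 0.25:                     # mixing time threshold 1/4 (standard convention)
    mu = mu @ P                      # advance the distribution one step
    t += 1
    tv = 0.5 * np.abs(mu - pi).sum()

print(f"mixing time estimate: {t} steps, TV distance: {tv:.4f}")
```

For this walk the mixing time grows on the order of n^2 steps, which is the kind of quantitative statement the methods surveyed in the course (coupling, comparison, functional inequalities) are designed to prove.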
Prerequisites: basic background in probability theory.