Markov Chains

A Markov chain is a sequence of possible state-change events in which the probability of each event depends only on the state attained in the previous event. Markov chains are named after the Russian mathematician Andrey Markov (1856-1922).

Markov chains are mathematical models of systems that transition between different states over time. They provide a powerful framework for analyzing and predicting the behavior of such systems, especially when the future state depends only on the current state.

Key Aspects

State Space

  • A set of all possible states the system can be in.

  • Can be finite or infinite, discrete or continuous.

Markov Property

  • The defining characteristic of Markov chains.

  • The probability of transitioning to a future state depends only on the current state, not on the past states.

  • Often described as "memorylessness".
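The Markov property can be made concrete with a small simulation. The sketch below uses a hypothetical two-state weather chain (the states and probabilities are illustrative, not from the text): the function that samples the next state looks only at the current state, never at the history.

```python
import random

# Hypothetical two-state weather chain. Each row of probabilities
# depends only on the current state -- this is the "memorylessness".
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(current):
    """Sample the next state using only the current state."""
    states = list(transitions[current])
    weights = [transitions[current][s] for s in states]
    return random.choices(states, weights=weights)[0]

# Simulate a short trajectory; past states are never consulted.
state = "sunny"
path = [state]
for _ in range(5):
    state = next_state(state)
    path.append(state)
```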

Transition Probabilities

  • The likelihood of moving from one state to another.

  • Represented in a transition matrix for discrete state spaces.

Time

  • Can be discrete (changes occur at fixed intervals) or continuous.

  • This report focuses primarily on discrete-time Markov chains.

Transition Matrix

  • A square matrix containing all transition probabilities between states.

  • For a finite state space with n states, it's an n x n matrix.
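As a sketch, a transition matrix for a hypothetical three-state chain can be written as a nested list, where entry `P[i][j]` is the probability of moving from state `i` to state `j`; because each row is a probability distribution, its entries must sum to 1.

```python
# Hypothetical 3-state transition matrix; P[i][j] is the probability
# of moving from state i to state j.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.4, 0.6],
]

# Sanity check: every row of a valid transition matrix sums to 1.
def is_stochastic(matrix):
    return all(abs(sum(row) - 1.0) < 1e-9 for row in matrix)
```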

Stationary Distribution

  • The long-term probabilities of being in each state.

  • Reached when the chain runs for a long time, regardless of the starting state.
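One simple way to approximate the stationary distribution is power iteration: start from any probability vector and repeatedly multiply it by the transition matrix until it stops changing. The two-state matrix below is an illustrative example, not from the text.

```python
def stationary(P, iterations=1000):
    """Approximate the stationary distribution of transition matrix P
    by repeatedly applying pi <- pi * P to a starting distribution."""
    n = len(P)
    pi = [1.0 / n] * n  # arbitrary starting distribution
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical two-state chain; its exact stationary distribution
# is (5/6, 1/6), which the iteration converges to.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)
```

Because the limit does not depend on the starting vector (for an irreducible, aperiodic chain), any initial distribution would give the same result.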

Irreducibility

  • A chain is irreducible if it's possible to reach any state from any other state.
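Irreducibility can be checked by treating positive-probability transitions as edges of a directed graph and testing reachability from every state; the breadth-first search below is a minimal sketch of that idea.

```python
from collections import deque

def reachable(P, start):
    """Set of states reachable from `start` via positive-probability
    transitions (BFS over the chain's directed graph)."""
    seen = {start}
    queue = deque([start])
    while queue:
        i = queue.popleft()
        for j, p in enumerate(P[i]):
            if p > 0 and j not in seen:
                seen.add(j)
                queue.append(j)
    return seen

def is_irreducible(P):
    """Irreducible iff every state can reach every other state."""
    n = len(P)
    return all(len(reachable(P, i)) == n for i in range(n))
```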

Periodicity

  • A state's period is the greatest common divisor of all possible numbers of steps in which the chain can return to that state.

  • A state is aperiodic if its period is 1.

Absorbing States

  • States that, once entered, cannot be left.
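In a transition matrix, an absorbing state is one whose self-transition probability is 1 (all other entries in its row are 0). A one-line check, sketched on a hypothetical matrix:

```python
def absorbing_states(P):
    """Indices of states that transition to themselves with
    probability 1 and therefore can never be left."""
    return [i for i, row in enumerate(P) if row[i] == 1.0]

# Hypothetical chain: state 0 is absorbing, state 1 is not.
P = [[1.0, 0.0],
     [0.3, 0.7]]
```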

Applications

  • Used in various fields including finance, physics, biology, and computer science.

  • Examples include modeling stock prices, gene sequences, and web page rankings.

A Diagram

In the diagram and values below, going from state S1 to the next state has four possibilities with four associated probabilities: