What Happens at Infinity?
How to Solve Markov Chains Using Python
Computing the steady-state behavior of a Markov chain using Python
4 min read · Aug 30, 2022
Markov Chains Refresher:
- A Markov chain is a discrete-time, discrete-valued random process that follows the Markov property.
- Mathematically, a Markov chain is denoted as {X_n, n = 0, 1, 2, …}, where for each time instant n, the process takes a value from a discrete set (the state space) S = {s_1, s_2, …, s_m}.
- Given a Markov chain, the Markov property states that the probability distribution of the next state (the future) depends only on the current state, not on the sequence of states that preceded it.
Mathematically, P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0) = P(X_{n+1} = j | X_n = i).
Let us consider a simple 2-state Markov chain as follows:
The visual explanation of Markov chains by Victor Powell and Lewis Lehe is the best I have come across so far. In this example, the chain can transition from a given state to either state (including itself) with a probability of 0.5.
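The 2-state chain above can be simulated in a few lines of NumPy. This is a minimal sketch (the function and variable names are my own, not from the original article): each row of `P` holds the transition probabilities out of one state, and `rng.choice` draws the next state from the row of the current state.

```python
import numpy as np

# Transition matrix of the 2-state example: from either state,
# the chain moves to either state (including itself) with prob. 0.5.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def simulate(P, start, steps, rng):
    """Simulate one trajectory of the chain for `steps` transitions."""
    states = [start]
    for _ in range(steps):
        current = states[-1]
        # Draw the next state according to row `current` of P
        states.append(int(rng.choice(len(P), p=P[current])))
    return states

rng = np.random.default_rng(seed=0)
path = simulate(P, start=0, steps=10, rng=rng)
print(path)  # a list of 11 states, each 0 or 1, starting at 0
```

Running this a few times (or with a long trajectory) already hints at the "what happens at infinity" question: the fraction of time spent in each state settles toward the steady-state distribution.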
State Space Transition Matrix (aka Transition Matrix):
- A state-space transition matrix is an n × n square matrix that describes the stochastic behavior of a Markov chain, where n is the number of states. Entry (i, j) gives the probability of moving from state i to state j, so every row sums to 1.
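Given such a matrix, the steady-state distribution π mentioned in the subtitle is the row vector satisfying π = πP. One way to compute it (a sketch, not necessarily the method the article goes on to use) is to take the left eigenvector of P for eigenvalue 1 and normalize it:

```python
import numpy as np

# 2-state example from above; each row of P sums to 1
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# The steady state pi satisfies pi = pi @ P, i.e. pi is a left
# eigenvector of P with eigenvalue 1. np.linalg.eig(P.T) gives
# the left eigenvectors of P as columns.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))   # eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                       # normalize to a distribution
print(pi)  # → [0.5 0.5] for this symmetric chain

# Cross-check: raising P to a high power converges to the same answer
approx = np.linalg.matrix_power(P, 50)[0]
```

For this symmetric chain the answer is the uniform distribution, but the same code works for any irreducible, aperiodic chain.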