Week 1
Markov process
Given the present, the future does not depend on the past
Law of total expectation (tower property)
$E(X) = E(E(X \mid Y))$
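A minimal Monte Carlo check of the tower property, using a made-up example (not from the notes) where $Y$ is uniform on $\{0, 1, 2\}$ and $X \mid Y \sim \text{Poisson}(Y+1)$, so $E(X \mid Y) = Y + 1$ and both sides equal 2:

```python
# Monte Carlo check of E[X] = E[E[X|Y]] on a toy example:
# Y ~ Uniform{0,1,2} and X | Y ~ Poisson(Y + 1), so E[X|Y] = Y + 1.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=100_000)   # Y uniform on {0, 1, 2}
x = rng.poisson(y + 1)                 # X | Y = y ~ Poisson(y + 1)

lhs = x.mean()                         # estimate of E[X]
rhs = np.mean(y + 1)                   # average of E[X|Y] = Y + 1
print(lhs, rhs)                        # both should be close to 2.0
```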
Stochastic process
A stochastic process is a collection of (possibly infinitely many) random variables, often describing the behaviour of a system that evolves randomly in time. A discrete-time stochastic process is written in the form $\{X_n, n \ge 0\}$, and a continuous-time one in the form $\{X(t), t \ge 0\}$. The random variables take values in a certain state space $S$, which can be discrete ($S$ finite or countable) or continuous ($S$ uncountable)
Markov chain
A stochastic process $\{X_n, n \ge 0\}$ with state space $S$ is called a discrete-time Markov chain if, for all states $i, j, s_0, \dots, s_{n-1} \in S$,
$P(X_{n+1} = j \mid X_n = i, X_{n-1} = s_{n-1}, \dots, X_0 = s_0) = P(X_{n+1} = j \mid X_n = i)$. The conditional probabilities $P(X_{n+1} = j \mid X_n = i)$ are called (1-step) transition probabilities from state $i$ to $j$
Time homogeneous Markov chain
In most applications we have, for all $n$, $P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i) = P_{ij}$. In this case, the 1-step transition probabilities are said to be time homogeneous. That is, they do not depend on $n$
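To make the definition concrete, here is a minimal simulation sketch. The two-state matrix is a made-up example, not from the notes; row $i$ of $P$ is the conditional distribution of $X_{n+1}$ given $X_n = i$:

```python
# Simulating a time-homogeneous Markov chain on states {0, 1}.
import numpy as np

P = np.array([[0.7, 0.3],    # row i holds P(X_{n+1} = j | X_n = i)
              [0.4, 0.6]])

rng = np.random.default_rng(1)
state, path = 0, [0]
for _ in range(10):
    state = rng.choice(len(P), p=P[state])   # sample next state from row of P
    path.append(int(state))
print(path)
```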
Transition probability matrix
𝑃"" 𝑃"$ 𝑃"( ⋯
⎡𝑃 𝑃$$ 𝑃$( ⋯⎤
⎢ $" ⎥
The matrix 𝑃 = ⎢ ⋮ ⋮ ⋮ ⋮ ⎥ is called the (1-step) transition probability matrix. All
⎢ 𝑃&" 𝑃&$ 𝑃&( ⋯⎥
⎣ ⋮ ⋮ ⋮ ⋮⎦
the entries in the matrix need to be positive, 𝑃&' ≥ 0 for all 𝑖 and 𝑗, and the rows need to
sum up to 1, ∑'∈* 𝑃&' = 1 for all 𝑖. A matrix with these properties is called a stochastic matrix
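Both defining properties are easy to verify in code. A minimal sketch; `is_stochastic` is a hypothetical helper name:

```python
# Check the stochastic-matrix properties: nonnegative entries, rows sum to 1.
import numpy as np

def is_stochastic(P, tol=1e-12):
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

print(is_stochastic([[0.7, 0.3], [0.4, 0.6]]))   # True
print(is_stochastic([[0.7, 0.4], [0.4, 0.6]]))   # False: first row sums to 1.1
```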
Visualize the transition matrix
A transition diagram is a directed graph visualizing a Markov chain. The nodes in the graph represent the states, an arc goes from $i$ to $j$ if $P_{ij} > 0$, and the value next to the arc is $P_{ij}$
$n$-step transition probabilities
Probabilities $P(X_n = j \mid X_0 = i)$ are called $n$-step transition probabilities. Denote $P(X_n = j \mid X_0 = i)$ by $P_{ij}^n$
Chapman-Kolmogorov equations
An efficient way to compute $n$-step transition probabilities is with the Chapman-Kolmogorov equations: $P_{ij}^{(n+m)} = \sum_{k \in S} P_{ik}^{(n)} P_{kj}^{(m)}$. If we denote the $n$-step transition matrix by $P^{(n)}$, then we can rewrite the Chapman-Kolmogorov equations in the form of a matrix product: $P^{(n+m)} = P^{(n)} P^{(m)}$. For a time-homogeneous Markov chain, it thus holds that we can obtain the $n$-step transition matrix as $P^{(n)} = P^n$
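A minimal numerical illustration of $P^{(n)} = P^n$ and the Chapman-Kolmogorov identity, reusing the made-up two-state matrix from above:

```python
# n-step transition matrix as a matrix power, plus a Chapman-Kolmogorov check.
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

P4 = np.linalg.matrix_power(P, 4)    # 4-step transition matrix P^(4)
P2 = np.linalg.matrix_power(P, 2)
print(np.allclose(P4, P2 @ P2))      # Chapman-Kolmogorov: P^(2+2) = P^(2) P^(2)
print(P4[0, 1])                      # P(X_4 = 1 | X_0 = 0)
```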
Unconditional distribution of $X_n$
Let $\alpha^{(0)}$ denote the initial distribution, so the vector with elements $P(X_0 = i)$ with $i \in S$. Let $\alpha^{(n)}$ denote the distribution at time $n$, so the vector with elements $P(X_n = i)$ with $i \in S$. Then, by conditioning on the initial state we have
$P(X_n = j) = \sum_{i \in S} P(X_n = j \mid X_0 = i) P(X_0 = i) = \sum_{i \in S} P_{ij}^n \alpha_i^{(0)}$. Thus, with the initial distribution $\alpha^{(0)}$ and the transition probability matrix $P$, one can calculate the distribution at time $n$ by $\alpha^{(n)} = (P^n)^T \alpha^{(0)} = (P^T)^n \alpha^{(0)}$
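A quick sketch of the same computation, again with the made-up two-state matrix:

```python
# Distribution at time n: alpha^(n) = (P^T)^n alpha^(0) (column-vector form).
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
alpha0 = np.array([1.0, 0.0])                  # start in state 0 w.p. 1

alpha3 = np.linalg.matrix_power(P.T, 3) @ alpha0
print(alpha3)                                  # P(X_3 = 0), P(X_3 = 1)
print(alpha0 @ np.linalg.matrix_power(P, 3))   # same numbers, row-vector form
```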
Absorbing state
Let $\{X_n, n \ge 0\}$ be a Markov chain with transition probabilities $P_{ij}$. Suppose that we concentrate on a specific set of states $A$, and we want to determine the probability that the Markov chain has not entered any of the states in $A$ by time $n$. We designate the states in $A$ as absorbing states: transform this Markov chain to one with new transition probabilities $Q_{ij}$ defined by
$$Q_{ij} = \begin{cases} 1 & \text{if } i \in A,\ j = i \\ 0 & \text{if } i \in A,\ j \neq i \\ P_{ij} & \text{otherwise.} \end{cases}$$
Until the set $A$ is entered, the two Markov chains follow identical transition probabilities. Therefore, for $i, j \in S \setminus A$, $Q_{ij}^n$ represents the probability that the original chain, starting in $i$, will be in state $j$ at time $n$ without ever entering any of the states in $A$. You may also collapse the states in $A$ into one state
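A minimal sketch of this transformation; `make_absorbing` is a hypothetical helper and the three-state matrix is a made-up example:

```python
# Make the states in A absorbing, then read off the probability of being in j
# at time n without ever having entered A.
import numpy as np

def make_absorbing(P, A):
    Q = np.array(P, dtype=float)
    for i in A:
        Q[i, :] = 0.0
        Q[i, i] = 1.0                # once in A, stay in A
    return Q

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
Q = make_absorbing(P, A={2})
Q5 = np.linalg.matrix_power(Q, 5)
print(Q5[0, 1])   # P(X_5 = 1 and no visit to state 2 so far | X_0 = 0)
```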
Limiting distribution
Let $\pi$ denote the limiting distribution of the Markov chain, that is, the vector with elements $\pi_i = \lim_{n \to \infty} P(X_n = i)$ with $i \in S$. There may exist no limiting distribution, several limiting distributions, or a unique limiting distribution
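One way to probe the limit numerically is to raise $P$ to a high power: if all rows converge to a common row, that row is the limiting distribution. A sketch with the made-up two-state matrix used earlier:

```python
# Probe the limiting distribution by raising P to a high power.
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(np.linalg.matrix_power(P, 50))   # both rows approach [4/7, 3/7]
```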
Week 2
Communication of states
State $j$ is called accessible from $i$ if $P_{ij}^k > 0$ for some $k \ge 0$. States $i$ and $j$ are said to communicate if they are accessible from each other; this is denoted by $i \leftrightarrow j$. Communicating states form a class: $i \leftrightarrow i$; if $i \leftrightarrow j$ then $j \leftrightarrow i$; if $i \leftrightarrow j$ and $j \leftrightarrow k$ then $i \leftrightarrow k$. If there is only one class, the Markov chain is called irreducible; otherwise it is called reducible
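A minimal sketch computing communicating classes from the reachability relation; `communicating_classes` is a hypothetical helper and the three-state matrix is made up:

```python
# Communicating classes via reachability: j is accessible from i if some
# k-step path exists (k >= 0, so every state reaches itself); i and j
# communicate if each is accessible from the other.
def communicating_classes(P):
    n = len(P)
    # reach[i][j]: is j accessible from i? Start with 0- and 1-step paths.
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                       # Floyd-Warshall transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    seen, classes = set(), []
    for i in range(n):
        cls = frozenset(j for j in range(n) if reach[i][j] and reach[j][i])
        if cls not in seen:
            seen.add(cls)
            classes.append(sorted(cls))
    return classes

# Made-up reducible example: state 2 reaches {0, 1}, but not the other way.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5]]
print(communicating_classes(P))              # [[0, 1], [2]]
```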
Recurrent and transient
Let $f_i$ denote the probability that, starting in state $i$, the process will ever re-enter state $i$. State $i$ is said to be recurrent if $f_i = 1$, and transient if $f_i < 1$
Visiting a recurrent state
If a Markov chain starts in a recurrent state $i$, then with probability 1 the chain visits state $i$ infinitely often
Visiting a transient state
Let the Markov chain start in a transient state $i$. The probability that the Markov chain visits state $i$ exactly $n$ times is equal to $f_i^{n-1}(1 - f_i)$, so the number of visits to state $i$ follows a geometric distribution with finite mean $(1 - f_i)^{-1}$. Hence, starting from a transient state $i$, the Markov chain visits state $i$ only a finite number of times, with probability 1
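This geometric law can be checked by simulation. A minimal sketch with a made-up two-state chain (not from the notes) in which state 1 is absorbing, so state 0 is transient with $f_0 = 0.5$ and expected number of visits $(1 - 0.5)^{-1} = 2$:

```python
# Monte Carlo check: number of visits to a transient state is geometric.
import numpy as np

P = np.array([[0.5, 0.5],
              [0.0, 1.0]])    # state 1 is absorbing, state 0 is transient

rng = np.random.default_rng(2)
visits = []
for _ in range(20_000):
    state, count = 0, 1                      # count the visit at time 0
    while state == 0:
        state = rng.choice(2, p=P[state])
        count += int(state == 0)
    visits.append(count)
print(np.mean(visits))                       # close to 2.0
```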
Necessary and sufficient condition for a transient state
Consider a transient state $i$. $\sum_{n=0}^{\infty} I_{\{X_n = i\}}$ is the number of visits to state $i$. Starting from state $i$, the expected number of visits to state $i$ is $E\left(\sum_{n=0}^{\infty} I_{\{X_n = i\}} \mid X_0 = i\right)$. Because $i$ is transient, the number of visits is geometric with finite mean, so this expectation is finite. And because $E\left(\sum_{n=0}^{\infty} I_{\{X_n = i\}} \mid X_0 = i\right) = \sum_{n=0}^{\infty} P_{ii}^n$, the sum $\sum_{n=0}^{\infty} P_{ii}^n$ is finite. Conversely, if $i$ is recurrent it is visited infinitely often with probability 1, so the expectation is infinite. Hence $\sum_{n=0}^{\infty} P_{ii}^n$ is finite $\iff$ state $i$ is transient
Classification of states
Recurrence and transience are class properties. If state 𝑖 is recurrent and 𝑖 ↔ 𝑗, then state 𝑗
is also recurrent. If state 𝑖 is transient and 𝑖 ↔ 𝑗, then state 𝑗 is also transient.
A Markov chain can be decomposed into recurrent classes and transient classes
Recurrent and transient class for finite-state Markov chain
In a finite-state Markov chain, not all states are transient
All states of an irreducible finite-state Markov chain are recurrent
Decomposing recurrent states
Denote $N_j = \min\{n > 0 : X_n = j\}$. $N_j \mid X_0 = j$ is the time to come back to $j$. If $j$ is recurrent, it means that $P(N_j < \infty \mid X_0 = j) = 1$. A random variable not taking the value $+\infty$ may still have an infinite expectation
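The mean return time $E(N_j \mid X_0 = j)$ can be estimated by simulation. A minimal sketch using the made-up two-state matrix from the earlier examples; for this chain a first-step analysis gives $E(N_0 \mid X_0 = 0) = 1 + 0.3 \cdot (1/0.4) = 7/4$:

```python
# Estimate the mean return time to state 0 by simulation.
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

rng = np.random.default_rng(3)
times = []
for _ in range(20_000):
    state, n = 0, 0
    while True:
        state = rng.choice(2, p=P[state])
        n += 1
        if state == 0:                       # first return to state 0
            break
    times.append(n)
print(np.mean(times))                        # close to 7/4 = 1.75
```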