Summary Markov Processes (FEB22008)

Extensive summary of Markov Processes (econometrics, EUR)
Week 1
Markov process
Given the present, the future does not depend on the past
Law of total expectation (tower property)
E(X) = E(E(X | Y))
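Written out for a discrete Y, this is the standard expansion (added here for concreteness):

E(X) = E\big(E(X \mid Y)\big) = \sum_{y} E(X \mid Y = y)\, P(Y = y)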
Stochastic process
A stochastic process is a collection of (possibly infinitely many) random variables, often describing the behaviour of a system that evolves randomly in time. A stochastic process in discrete time is written {X_n, n ≥ 0}, and in continuous time {X(t), t ≥ 0}. The random variables take values in a state space S, which can be discrete (S finite or countable) or continuous (S uncountable)
Markov chain
A stochastic process {X_n, n ≥ 0} with state space S is called a discrete-time Markov chain if, for all states i, j, s_0, …, s_{n−1} ∈ S,
P(X_{n+1} = j | X_n = i, X_{n−1} = s_{n−1}, …, X_0 = s_0) = P(X_{n+1} = j | X_n = i).
The conditional probabilities P(X_{n+1} = j | X_n = i) are called the (1-step) transition probabilities from state i to j
Time homogeneous Markov chain
In most applications we have, for all n, P(X_{n+1} = j | X_n = i) = P(X_1 = j | X_0 = i) = P_{ij}. In this case, the 1-step transition probabilities are said to be time homogeneous. That is, they do not depend on n
Transition probability matrix
𝑃"" 𝑃"$ 𝑃"( ⋯
⎡𝑃 𝑃$$ 𝑃$( ⋯⎤
⎢ $" ⎥
The matrix 𝑃 = ⎢ ⋮ ⋮ ⋮ ⋮ ⎥ is called the (1-step) transition probability matrix. All
⎢ 𝑃&" 𝑃&$ 𝑃&( ⋯⎥
⎣ ⋮ ⋮ ⋮ ⋮⎦
the entries in the matrix need to be positive, 𝑃&' ≥ 0 for all 𝑖 and 𝑗, and the rows need to
sum up to 1, ∑'∈* 𝑃&' = 1 for all 𝑖. A matrix with these properties is called a stochastic matrix
Visualize the transition matrix
A transition diagram is a directed graph visualizing a Markov chain. The nodes in the graph represent the states, an arc goes from i to j if P_{ij} > 0, and the value next to the arc is P_{ij}
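As an illustration (not part of the original summary), a minimal NumPy sketch with a made-up 3-state matrix, checking the two stochastic-matrix properties:

import numpy as np

# Hypothetical 3-state chain: row i holds the transition
# probabilities out of state i.
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6],
              [0.5, 0.0, 0.5]])

# Stochastic-matrix properties: non-negative entries, each row sums to 1.
assert (P >= 0).all()
assert np.allclose(P.sum(axis=1), 1.0)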
n-step transition probabilities
The probabilities P(X_n = j | X_0 = i) are called n-step transition probabilities. Denote P(X_n = j | X_0 = i) by P_{ij}^{(n)}
Chapman-Kolmogorov equations
An efficient way to compute n-step transition probabilities is via the Chapman-Kolmogorov equations: P_{ij}^{(n+m)} = ∑_{k∈S} P_{ik}^{(n)} P_{kj}^{(m)}. If we denote the n-step transition matrix by P^{(n)}, then we can rewrite the Chapman-Kolmogorov equations as a matrix product: P^{(n+m)} = P^{(n)} P^{(m)}. For a time homogeneous Markov chain it thus holds that the n-step transition matrix is obtained as P^{(n)} = P^n
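A sketch of the matrix-power identity in Python, reusing the made-up 3-state matrix from above (np.linalg.matrix_power is NumPy's routine for integer matrix powers):

import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6],
              [0.5, 0.0, 0.5]])

# Chapman-Kolmogorov in matrix form: the 5-step matrix is the 5th power of P.
P5 = np.linalg.matrix_power(P, 5)
print(P5)  # entry [i, j] is P(X_5 = j | X_0 = i)

# P^(2+3) = P^(2) P^(3)
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
assert np.allclose(P5, P2 @ P3)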
Unconditional distribution of X_n
Let α^{(0)} denote the initial distribution, i.e. the vector with elements P(X_0 = i), i ∈ S. Let α^{(n)} denote the distribution at time n, i.e. the vector with elements P(X_n = i), i ∈ S. Then, by conditioning on the initial state we have
P(X_n = j) = ∑_{i∈S} P(X_n = j | X_0 = i) P(X_0 = i) = ∑_{i∈S} P_{ij}^{(n)} α_i^{(0)}.
Thus, with the initial distribution α^{(0)} and the transition probability matrix P, one can calculate the distribution at time n by α^{(n)} = (P^n)^T α^{(0)} = (P^T)^n α^{(0)}
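A minimal sketch of this computation (the initial distribution below is a made-up example starting in state 0):

import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6],
              [0.5, 0.0, 0.5]])
alpha0 = np.array([1.0, 0.0, 0.0])  # P(X_0 = 0) = 1

# alpha^(n) = (P^T)^n alpha^(0)
n = 10
alpha_n = np.linalg.matrix_power(P.T, n) @ alpha0
print(alpha_n)                        # element i is P(X_10 = i)
assert np.isclose(alpha_n.sum(), 1.0)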

Absorbing state
Let {X_n, n ≥ 0} be a Markov chain with transition probabilities P_{ij}. Suppose that we concentrate on a specific set of states A and want to determine the probability that the Markov chain never enters any of the states in A by time n. We designate the states in A as absorbing states and transform the Markov chain to one with new transition probabilities Q_{ij}, defined by

Q_{ij} = \begin{cases} 1 & \text{if } i \in A,\ j = i \\ 0 & \text{if } i \in A,\ j \neq i \\ P_{ij} & \text{otherwise} \end{cases}

Until the set A is visited, the two Markov chains follow identical transition probabilities. Therefore, for i, j ∈ S∖A, Q_{ij}^{(n)} represents the probability that the original chain, starting in i, will be in state j at time n without ever having entered any of the states in A. You may collapse the states in A into one state
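A sketch of the Q-transformation, assuming the same made-up 3-state chain with the hypothetical choice A = {2}:

import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6],
              [0.5, 0.0, 0.5]])
A = [2]  # states to make absorbing

# Build Q: states in A get a self-loop with probability 1.
Q = P.copy()
for i in A:
    Q[i, :] = 0.0
    Q[i, i] = 1.0

n = 4
Qn = np.linalg.matrix_power(Q, n)
# For i, j outside A: Qn[i, j] = P(X_n = j, A not yet visited | X_0 = i).
print(Qn[0, 1])
# Probability of never entering A by time n, starting from state 0:
print(1.0 - Qn[0, A].sum())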
Limiting distribution
Let π denote the limiting distribution of the Markov chain, that is, the vector with elements π_i = lim_{n→∞} P(X_n = i), i ∈ S. There may exist no limiting distribution, several limiting distributions, or a unique limiting distribution
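A sketch contrasting two made-up cases: an aperiodic chain whose powers converge, and a periodic chain whose powers oscillate, so no limiting distribution exists:

import numpy as np

aperiodic = np.array([[0.2, 0.8],
                      [0.6, 0.4]])
periodic = np.array([[0.0, 1.0],
                     [1.0, 0.0]])  # alternates between states deterministically

for n in (50, 51):
    print(np.linalg.matrix_power(aperiodic, n))  # both rows -> (3/7, 4/7)
    print(np.linalg.matrix_power(periodic, n))   # identity for even n, swapped for odd n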

Week 2
Communication of states
State j is called accessible from state i if P_{ij}^{(k)} > 0 for some k ≥ 0. States i and j are said to communicate if they are accessible from each other; this is denoted by i ↔ j. Communicating states form a class: i ↔ i; if i ↔ j then j ↔ i; if i ↔ j and j ↔ k then i ↔ k. If there is only one class, the Markov chain is called irreducible; otherwise it is called reducible
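A sketch of class-finding via reachability, a direct Boolean-closure implementation (not from the summary; fine for small state spaces):

import numpy as np

def communicating_classes(P):
    """Partition the states of transition matrix P into communicating classes."""
    n = P.shape[0]
    adj = (P > 0).astype(int)
    # i reaches j in some k >= 0 steps iff (I + adj)^n has a positive (i, j) entry.
    reach = np.linalg.matrix_power(np.eye(n, dtype=int) + adj, n) > 0
    comm = reach & reach.T  # i <-> j: accessible in both directions
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = [j for j in range(n) if comm[i, j]]
            classes.append(cls)
            seen.update(cls)
    return classes

# Made-up reducible example: {0} and {1, 2} are separate classes.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.3, 0.7],
              [0.0, 0.6, 0.4]])
print(communicating_classes(P))  # [[0], [1, 2]] -> more than one class: reducible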
Recurrent and transient
Let f_i denote the probability that, starting in state i, the process will ever re-enter state i. State i is said to be recurrent if f_i = 1, and transient if f_i < 1
Visiting a recurrent state
If a Markov chain starts in a recurrent state i, then with probability 1 the chain visits state i infinitely often
Visiting a transient state
Let the Markov chain start in a transient state i. The probability that the Markov chain visits state i exactly n times equals f_i^{n−1}(1 − f_i). The number of visits to state i thus follows a geometric distribution with finite mean (1 − f_i)^{−1}. Starting from a transient state i, the Markov chain therefore visits state i only a finite number of times, with probability 1
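A simulation sketch of the geometric visit count, assuming a made-up 2-state chain in which state 0 is transient with f_0 = 0.6 (the only way back to 0 is the self-loop) and state 1 is absorbing:

import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.6, 0.4],
              [0.0, 1.0]])

def visits_to_0():
    """Count visits to state 0 (including time 0) before absorption in state 1."""
    x, visits = 0, 0
    while x == 0:
        visits += 1
        x = rng.choice(2, p=P[x])
    return visits

mean_visits = np.mean([visits_to_0() for _ in range(10_000)])
print(mean_visits)  # close to (1 - f_0)^{-1} = 1 / 0.4 = 2.5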
Necessary and sufficient condition for a transient state
Consider a transient state i. ∑_{n=0}^{∞} I_{X_n = i} is the number of visits to state i. The expected number of visits to state i, when starting from state i, is E(∑_{n=0}^{∞} I_{X_n = i} | X_0 = i). Because i is transient, E(∑_{n=0}^{∞} I_{X_n = i} | X_0 = i) is finite. And because E(∑_{n=0}^{∞} I_{X_n = i} | X_0 = i) = ∑_{n=0}^{∞} P_{ii}^{(n)}, we obtain: ∑_{n=0}^{∞} P_{ii}^{(n)} is finite ⟺ state i is transient
Classification of states
Recurrence and transience are class properties. If state 𝑖 is recurrent and 𝑖 ↔ 𝑗, then state 𝑗
is also recurrent. If state 𝑖 is transient and 𝑖 ↔ 𝑗, then state 𝑗 is also transient.
A Markov chain can be decomposed into recurrent classes and transient classes
Recurrent and transient class for finite-state Markov chain
In a finite-state Markov chain, not all states are transient
All states of an irreducible finite-state Markov chain are recurrent
Decomposing recurrent states
Denote N_j = min{n > 0 : X_n = j}. N_j | X_0 = j is the time to return to state j. If j is recurrent, this means that P(N_j < ∞ | X_0 = j) = 1. A random variable that does not take the value +∞ may still have an infinite expectation