Markov Decision Processes Verified Solutions

Markov decision processes ✔️✔️MDPs formally describe an environment for reinforcement learning

- the environment is fully observable

- the current state completely characterizes the process

- almost all RL problems can be formalised as MDPs

- optimal control primarily deals with continuous MDPs

- partially observable problems can be converted into MDPs

- bandits are MDPs with one state



Markov Property ✔️✔️- the future is independent of the past given the present

- the state captures all relevant information from the history

- once the state is known, the history can be thrown away

- the state is a sufficient statistic of the future (formal statement below)
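
Stated in standard notation (the usual formal definition, not spelled out in the card above):

\[ \mathbb{P}[\, S_{t+1} \mid S_t \,] = \mathbb{P}[\, S_{t+1} \mid S_1, \dots, S_t \,] \]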



State transition matrix ✔️✔️- for a Markov state s and successor state s', the state transition probability is the probability of moving from s to s' in one step (defined below)

- the state transition matrix P defines transition probabilities from all states s to all successor states s', with each row summing to 1
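
In symbols (the element-wise definition and matrix layout are the standard ones):

\[ \mathcal{P}_{ss'} = \mathbb{P}[\, S_{t+1} = s' \mid S_t = s \,], \qquad
   \mathcal{P} = \begin{pmatrix} \mathcal{P}_{11} & \cdots & \mathcal{P}_{1n} \\ \vdots & \ddots & \vdots \\ \mathcal{P}_{n1} & \cdots & \mathcal{P}_{nn} \end{pmatrix}, \qquad
   \sum_{s'} \mathcal{P}_{ss'} = 1 \]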



Markov Process ✔️✔️- a Markov process is a memoryless random process, i.e. a sequence of random states S1, S2, ... with the Markov property

- a Markov process (or Markov chain) is a tuple <S, P>

- S is a (finite) set of states

- P is a state transition probability matrix (a small sampling sketch follows below)
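
A minimal sketch of sampling such a chain in Python/NumPy. The three states and the transition probabilities are made-up illustrative values, not taken from these notes:

```python
import numpy as np

# Hypothetical 3-state Markov chain; states and probabilities are illustrative only.
states = ["A", "B", "C"]
P = np.array([
    [0.8, 0.1, 0.1],   # transition probabilities out of state A
    [0.3, 0.5, 0.2],   # transition probabilities out of state B
    [0.4, 0.3, 0.3],   # transition probabilities out of state C
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

rng = np.random.default_rng(0)
s = 0                                   # start in state A
trajectory = [states[s]]
for _ in range(10):
    # the next state depends only on the current state (Markov property)
    s = rng.choice(len(states), p=P[s])
    trajectory.append(states[s])
print(" -> ".join(trajectory))
```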



Markov reward process ✔️✔️- a Markov reward process is a Markov chain with values

- a Markov reward process is a tuple <S, P, R, γ>

- S is a finite set of states

- P is a state transition probability matrix

- R is a reward function

- γ is a discount factor, γ ∈ [0, 1] (a small example follows below)
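
A minimal sketch of such a tuple as plain NumPy arrays, reusing the illustrative chain from above and adding an assumed reward vector and discount factor (all numbers are hypothetical):

```python
import numpy as np

# Illustrative MRP <S, P, R, gamma>; every number here is an assumption for the example.
P = np.array([                   # state transition probability matrix
    [0.8, 0.1, 0.1],
    [0.3, 0.5, 0.2],
    [0.4, 0.3, 0.3],
])
R = np.array([1.0, -2.0, 0.0])   # expected immediate reward in each state
gamma = 0.9                      # discount factor in [0, 1]
```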



Return ✔️✔️- the return Gt is the total discounted reward from time-step t (defined below)

- the discount γ determines the present value of future rewards

- the value of receiving reward R after k+1 time-steps is γ^k R

- this values immediate reward above delayed reward

- γ close to 0 leads to "myopic" evaluation

- γ close to 1 leads to "far-sighted" evaluation
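
Written out (the standard definition of the discounted return):

\[ G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \]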



Discount ✔️✔️- it is mathematically convenient to discount rewards

- it avoids infinite returns in cyclic Markov processes (see the bound below)

- uncertainty about the future may not be fully represented

- if the reward is financial, immediate rewards may earn more interest than delayed rewards

- animal/human behavior shows a preference for immediate reward

- it is sometimes possible to use undiscounted Markov reward processes if all sequences terminate
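
For example, if every reward is bounded by R_max and γ < 1, the geometric series keeps the return finite:

\[ |G_t| \le \sum_{k=0}^{\infty} \gamma^k R_{\max} = \frac{R_{\max}}{1-\gamma} \]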



Value Function ✔️✔️- the value function v(s) gives the long-term value of state s

- the state value function v(s) of an MRP is the expected return starting from state s (see below)
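
In symbols (standard definition):

\[ v(s) = \mathbb{E}[\, G_t \mid S_t = s \,] \]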



Bellman Equation for MRPs ✔️✔️the value function can be decomposed into two parts (written out below):

- the immediate reward Rt+1

- the discounted value of the successor state γv(St+1)
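
Putting the two parts together, with R_s denoting the expected immediate reward from state s (standard form of the Bellman equation for an MRP):

\[ v(s) = \mathbb{E}[\, R_{t+1} + \gamma\, v(S_{t+1}) \mid S_t = s \,] = \mathcal{R}_s + \gamma \sum_{s'} \mathcal{P}_{ss'}\, v(s') \]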



Bellman Equation in Matrix Form ✔️✔️- the Bellman equation can be expressed concisely using matrices:

v = R + γPv

- v is a column vector with one entry per state, R is the corresponding reward vector, and P is the transition matrix (a short solver sketch follows below)
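
Since v = R + γPv is linear, it can be solved directly as (I - γP)v = R. A minimal sketch in Python/NumPy, reusing the illustrative MRP from above (all numbers are assumptions for the example):

```python
import numpy as np

# Illustrative MRP from the sketch above; the numbers are made up for the example.
P = np.array([
    [0.8, 0.1, 0.1],
    [0.3, 0.5, 0.2],
    [0.4, 0.3, 0.3],
])
R = np.array([1.0, -2.0, 0.0])   # expected immediate reward per state
gamma = 0.9

# Solve (I - gamma * P) v = R, i.e. the matrix form of the Bellman equation.
v = np.linalg.solve(np.eye(len(R)) - gamma * P, R)
print(v)   # long-term value of each state
```

The direct solve costs O(n^3) in the number of states, so it is only practical for small MRPs; larger problems are handled with iterative methods such as dynamic programming, Monte-Carlo evaluation, or temporal-difference learning.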
