Exam (elaborations)

Markov Decision Processes Finals V2

Pages
14
Grade
A
Uploaded on
30-10-2024
Written in
2024/2025


Written for

Institution
Markov Decision Processes Fin V2
Module
Markov Decision Processes Fin V2

Document information

Type
Exam (elaborations)
Contains
Questions & answers


Content preview

Markov Decision Processes Finals V2

A Markov Process is a process in which all states do not depend on previous actions. ✔️✔️True,
Markov means that you don't have to condition on anything past the most recent state. A Markov
Decision Process is a set of Markov Property Compliant states, with rewards and values.
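As a small sketch of the Markov property (the state names, actions, and probabilities below are invented for illustration, not from the course), an MDP can be written as plain data whose transitions depend only on the current state and action:

```python
# A minimal sketch of an MDP as plain data: transitions T[(s, a)] give a list
# of (next_state, probability) pairs, and R[s] gives the reward for a state.
# All names and numbers here are illustrative assumptions.
mdp = {
    "states": ["s0", "s1", "goal"],
    "actions": ["left", "right"],
    "T": {
        ("s0", "right"): [("s1", 1.0)],
        ("s0", "left"):  [("s0", 1.0)],
        ("s1", "right"): [("goal", 0.9), ("s0", 0.1)],
        ("s1", "left"):  [("s0", 1.0)],
    },
    "R": {"s0": 0.0, "s1": 0.0, "goal": 1.0},
}

# The Markov property in action: looking up the next-state distribution needs
# only the current (state, action) pair, never any earlier history.
def next_state_dist(state, action):
    return mdp["T"][(state, action)]

print(next_state_dist("s1", "right"))  # [('goal', 0.9), ('s0', 0.1)]
```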



Decaying Reward encourages the agent to end the game quickly instead of running around and
gathering more reward ✔️✔️True, as reward decays the total reward for the episode decreases, so
the agent is encouraged to maximize total reward by ending the game quickly.
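The effect of decaying reward can be checked numerically. In this sketch (the episode data and gamma value are made up for illustration), the same terminal reward is worth less when it arrives later:

```python
# Discounted ("decaying") return: each reward at time t is scaled by gamma**t,
# so a gamma < 1 makes later rewards worth less. Values are illustrative.
def discounted_return(rewards, gamma=0.9):
    return sum(gamma**t * r for t, r in enumerate(rewards))

short_episode = [0, 0, 10]           # reaches the goal reward quickly
long_episode = [0, 0, 0, 0, 0, 10]   # same goal reward, but later

# Finishing sooner yields a larger discounted total, so the agent is
# encouraged to end the episode quickly.
assert discounted_return(short_episode) > discounted_return(long_episode)
```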



R(s) and R(s,a) are equivalent. ✔️✔️True, it just happens that it's easier to think about one vs the
other in certain situations.



Reinforcement Learning is harder to compute than a simple MDP. ✔️✔️True, you can just use the
Bellman Equations for an MDP, but Reinforcement Learning requires that you make observations and
then summarize those observations as values.
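To see what "just use the Bellman Equations" means in practice, here is a minimal value-iteration sketch on an invented two-state MDP (the states, rewards, and gamma are illustrative assumptions, not from the source):

```python
# Value iteration via the Bellman equation:
#   V(s) = R(s) + gamma * max_a sum_s' T(s, a, s') * V(s')
# The tiny two-state MDP below is an illustrative assumption.
gamma = 0.9
states = ["a", "b"]
actions = ["stay", "go"]
R = {"a": 0.0, "b": 1.0}
T = {  # T[(s, action)] = list of (next_state, probability)
    ("a", "stay"): [("a", 1.0)],
    ("a", "go"):   [("b", 1.0)],
    ("b", "stay"): [("b", 1.0)],
    ("b", "go"):   [("a", 1.0)],
}

V = {s: 0.0 for s in states}
for _ in range(200):  # iterate the Bellman backup toward its fixed point
    V = {
        s: R[s] + gamma * max(
            sum(p * V[s2] for s2, p in T[(s, a)]) for a in actions
        )
        for s in states
    }

# From "a" the best move is "go", toward the rewarding state "b".
best_a = max(actions, key=lambda a: sum(p * V[s2] for s2, p in T[("a", a)]))
print(best_a)  # go
```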



An optimal policy is the best possible sequence of actions for an MDP. ✔️✔️True, with a single caveat.
The optimal policy maximizes reward over an entire episode by taking, in each state, the argmax over
actions of the immediate reward plus the value of the resulting state. But MDPs are memoryless, so a
policy is a mapping from states to actions rather than a literal "sequence".



Temporal Difference Learning is the difference in reward you see on subsequent time steps.
✔️✔️False, Temporal Difference Learning is the difference in value estimates on subsequent time
steps.
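The distinction can be made concrete with the TD(0) update rule, sketched below (the state names, reward, and step sizes are illustrative assumptions):

```python
# TD(0) update sketch: the "temporal difference" error compares value
# estimates at successive steps (r + gamma*V(s') vs V(s)), not raw rewards.
# All numbers here are illustrative.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    td_error = r + gamma * V[s_next] - V[s]  # difference in value estimates
    V[s] += alpha * td_error                 # nudge V(s) toward the target
    return V

V = {"s": 0.0, "s_next": 1.0}
td0_update(V, "s", 0.5, "s_next")
print(round(V["s"], 3))  # 0.14
```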



RL falls generally into 3 different categories: Model-Based, Value-Based, and Policy-Based. ✔️✔️True,
Model-Based is essentially using the Bellman Equations to solve a problem, Value-Based is Temporal
Difference Learning, and Policy-Based is similar to Value-Based, but it solves in a finite amount of time
with a certain amount of confidence (in Greedy it's guaranteed).

TD Learning is defined by Incremental Estimates that are Outcome Based. ✔️✔️True, TD Learning
thinks of learning in terms of "episodes", updating its estimates incrementally from observed outcomes
rather than from a predefined model.



For a learning rate to guarantee convergence, the sum of the learning rates must be infinite, and the sum
of the squared learning rates must be finite. ✔️✔️True; these are the standard stochastic-approximation
conditions on the learning rate, and together with a contraction-mapping update they guarantee
convergence.
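A quick numerical sketch of those two conditions: the classic choice alpha_t = 1/t satisfies both, since its partial sums grow without bound while its squared sums stay bounded (the cutoff N below is arbitrary):

```python
# Sketch: alpha_t = 1/t satisfies the convergence conditions
# (sum of alpha_t diverges; sum of alpha_t**2 converges to pi**2/6).
# A constant learning rate would fail the second condition.
N = 100_000
harmonic = sum(1 / t for t in range(1, N + 1))    # grows like ln(N), unbounded
squared = sum(1 / t**2 for t in range(1, N + 1))  # bounded, approaches ~1.645

print(harmonic > 10)      # True for this N; keeps growing as N grows
print(round(squared, 3))  # 1.645
```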



All of the TD learning methods have setbacks: TD(1) is inefficient because it requires too much data and
has high variance, while TD(0) has a maximum likelihood estimate but is hard to calculate for long
episodes. ✔️✔️True, this is why we use TD(Lambda), which has many of the benefits of both but is much
more performant. Empirically, lambdas between 0.3 and 0.7 seem to perform best.
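One common way to realize TD(Lambda) is with accumulating eligibility traces, sketched below on invented episode data (the states, rewards, and parameter values are illustrative assumptions):

```python
# TD(lambda) sketch with accumulating eligibility traces. lambda=0 recovers
# TD(0); lambda=1 behaves like Monte-Carlo-style TD(1). Data is illustrative.
def td_lambda(episode, gamma=0.9, lam=0.5, alpha=0.1):
    V = {}  # value estimates
    e = {}  # eligibility traces
    for (s, r, s_next) in episode:
        V.setdefault(s, 0.0)
        V.setdefault(s_next, 0.0)
        delta = r + gamma * V[s_next] - V[s]  # TD error at this step
        e[s] = e.get(s, 0.0) + 1.0            # bump trace for current state
        for state in e:                       # credit recently visited states
            V[state] += alpha * delta * e[state]
            e[state] *= gamma * lam           # decay all traces
    return V

V = td_lambda([("s0", 0.0, "s1"), ("s1", 1.0, "end")])
print(V["s0"] > 0)  # True: the reward propagated back to s0 via its trace
```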



To control learning, you simply have the learner choose actions in addition to estimating values.
✔️✔️True, states are experienced as observations during learning, so the learner's action choices
influence what is learned.



Q-Learning converges ✔️✔️True: the Bellman update is a contraction mapping, and with learning rates
whose sum is infinite but whose sum of squares is finite, Q-learning always converges to Q*.
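As a hedged illustration of the Q-learning update on a toy deterministic two-state chain (the environment, parameters, and exploration scheme below are invented for the sketch):

```python
import random

# Minimal Q-learning sketch. The update is
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
# The toy environment and parameters are illustrative assumptions.
random.seed(0)
gamma, alpha = 0.9, 0.5
Q = {(s, a): 0.0 for s in ("a", "b") for a in ("stay", "go")}

def step(s, a):
    # Deterministic toy dynamics: "go" flips the state; reward 1 for being in "b".
    s2 = ("b" if s == "a" else "a") if a == "go" else s
    return s2, (1.0 if s2 == "b" else 0.0)

s = "a"
for _ in range(2000):
    a = random.choice(("stay", "go"))  # pure random exploration, for simplicity
    s2, r = step(s, a)
    best_next = max(Q[(s2, a2)] for a2 in ("stay", "go"))
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    s = s2

# The learned greedy action from "a" should be "go", toward the reward.
print(max(("stay", "go"), key=lambda a: Q[("a", a)]))  # go
```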



As long as the update operators for Q-learning or Value-iteration are non-expansions, then they will
converge. ✔️✔️True, there are expansions that will converge, but only non-expansions are
guaranteed to converge independent of their starting values.



A convex combination will converge. ✔️✔️False, it must be a fixed convex combination to converge. If
the weights can change, as with Boltzmann exploration, then convergence is not guaranteed.



In Greedy Policies, the difference between the true value and the current value of the policy is less than
some epsilon value for exploration. ✔️✔️True



This epsilon bound serves as a good check for how long to run value iteration before we can be
confident that we have the optimal policy. ✔️✔️True



For a set of linear equations, the solution can be found in polynomial time. ✔️✔️True
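This is exactly why policy evaluation is cheap: for a fixed policy the Bellman equations are linear, V = R + gamma * P V, so (I - gamma*P) V = R can be solved directly. The sketch below uses a tiny hand-rolled Gaussian elimination on an invented two-state chain (all numbers are illustrative assumptions):

```python
# Policy evaluation as a linear solve: (I - gamma*P) V = R.
# The two-state chain below is illustrative; state 1 is absorbing with reward 1.
gamma = 0.9
P = [[0.5, 0.5],  # P[i][j]: probability of moving from state i to state j
     [0.0, 1.0]]
R = [0.0, 1.0]

n = len(R)
A = [[(1.0 if i == j else 0.0) - gamma * P[i][j] for j in range(n)]
     for i in range(n)]
b = R[:]
for col in range(n):             # forward elimination (no pivoting needed here)
    for row in range(col + 1, n):
        f = A[row][col] / A[col][col]
        for j in range(col, n):
            A[row][j] -= f * A[col][j]
        b[row] -= f * b[col]
V = [0.0] * n
for row in reversed(range(n)):   # back substitution
    V[row] = (b[row] - sum(A[row][j] * V[j] for j in range(row + 1, n))) / A[row][row]

print([round(v, 3) for v in V])  # [8.182, 10.0]
```

Gaussian elimination runs in O(n^3), which is the polynomial-time bound the statement refers to.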