CS 234 Winter 2022: Assignment #2

Due date:
Part 1 (0-4): February 5, 2022 at 6 PM (18:00) PST
Part 2 (5-6): February 12, 2022 at 6 PM (18:00) PST

These questions require thought, but do not require long answers. Please be as concise as possible.

We encourage students to discuss assignments in groups. We ask that you abide by the university Honor Code and that of the Computer Science department. If you have discussed the problems with others, please include a statement saying who you discussed the problems with. Failure to follow these instructions will be reported to the Office of Community Standards. We reserve the right to run fraud-detection software on your code. Please refer to the website, Academic Collaboration and Misconduct section, for details about the collaboration policy.

Please review any additional instructions posted on the assignment page. When you are ready to submit, please follow the instructions on the course website. Make sure you test your code using the provided commands and do not edit outside of the marked areas.

You’ll need to download the starter code and fill in the appropriate functions following the instructions from the handout and the code’s documentation. Training DeepMind’s network on Pong takes roughly 12 hours on a GPU, so please start early! (Only a completed run will receive full credit.) We will give you access to an Azure GPU cluster. You’ll find the setup instructions on the course assignment page.



Introduction
In this assignment we will implement deep Q-learning, following DeepMind’s paper ([1] and [2]) that learns to play Atari games from raw pixels. The purpose is to demonstrate the effectiveness of deep neural networks as well as some of the techniques used in practice to stabilize training and achieve better performance. In the process, you’ll become familiar with PyTorch. We will train our networks on the Pong-v0 environment from OpenAI gym, but the code can easily be applied to any other environment.

In Pong, one player scores if the ball passes by the other player. An episode is over when one of the players reaches 21 points. Thus, the total return of an episode is between −21 (lost every point) and +21 (won every point). Our agent plays against a decent hard-coded AI player. Average human performance is −3 (reported in [2]). In this assignment, you will train an AI agent with super-human performance, reaching at least +10 (hopefully more!).
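As a quick, optional illustration of this return range (this is not part of the assignment’s starter code), the sketch below runs one episode of Pong-v0 with a random policy and accumulates the per-point rewards. It assumes the classic gym API, where step() returns (obs, reward, done, info); newer gym/gymnasium releases differ slightly.

# Minimal sketch: one Pong-v0 episode with a random policy (classic gym API assumed).
import gym

env = gym.make("Pong-v0")
obs = env.reset()
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()           # uniformly random action
    obs, reward, done, info = env.step(action)   # reward is +1/-1 when a point is scored, else 0
    episode_return += reward

print("episode return:", episode_return)         # a random agent typically ends near -21
env.close()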





0 Distributions induced by a policy (13 pts)
In this problem, we’ll work with an infinite-horizon MDP M = ⟨S, A, R, T, γ⟩ and consider stochastic policies of the form π : S → ∆(A).¹ Additionally, we’ll assume that M has a single, fixed starting state s0 ∈ S for simplicity.

(a) (written, 3 pts) Consider a fixed stochastic policy and imagine running several rollouts of this policy within the environment. Naturally, depending on the stochasticity of the MDP M and the policy itself, some trajectories are more likely than others. Write down an expression for ρ^π(τ), the likelihood of sampling a trajectory τ = (s0, a0, s1, a1, . . .) by running π in M. To put this distribution in context, recall that
\[
V^\pi(s_0) = \mathbb{E}_{\tau \sim \rho^\pi}\!\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\middle|\, s_0\right].
\]
Solution:
\[
\rho^\pi(\tau) = \prod_{t=0}^{\infty} \pi(a_t \mid s_t)\, T(s_{t+1} \mid s_t, a_t).
\]
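As an optional illustration of this expression (not part of the assignment), the short sketch below evaluates the likelihood of a finite trajectory prefix in a tiny tabular MDP; the transition tensor T, the policy pi, and the example trajectory are all invented for the example.

# Illustrative only: likelihood of a finite prefix of a trajectory under pi,
# in a made-up tabular MDP with uniform policy and transitions.
import numpy as np

n_states, n_actions = 3, 2
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)   # T[s, a, s'] = T(s' | s, a)
pi = np.full((n_states, n_actions), 1.0 / n_actions)           # pi[s, a] = pi(a | s)

def trajectory_likelihood(states, actions):
    # rho^pi(tau) for tau = (s_0, a_0, s_1, a_1, ...): prod_t pi(a_t | s_t) T(s_{t+1} | s_t, a_t)
    p = 1.0
    for t, a in enumerate(actions):
        p *= pi[states[t], a]
        if t + 1 < len(states):          # the final transition may be truncated
            p *= T[states[t], a, states[t + 1]]
    return p

print(trajectory_likelihood(states=[0, 1, 2], actions=[0, 1]))  # (1/2 * 1/3) * (1/2 * 1/3)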


(b) (written, 5 pts) Just as ρ^π captures the distribution over trajectories induced by π, we can also examine the distribution over states induced by π. In particular, define the discounted, stationary state distribution of a policy π as
\[
d^\pi(s) = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t\, p(s_t = s),
\]
where p(s_t = s) denotes the probability of being in state s at time step t while following policy π; your answer to the previous part should help you reason about how you might compute this value. Consider an arbitrary function f : S × A → R. Prove the following identity:
\[
\mathbb{E}_{\tau \sim \rho^\pi}\!\left[\sum_{t=0}^{\infty} \gamma^t f(s_t, a_t)\right] = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\!\left[\mathbb{E}_{a \sim \pi(s)}\left[f(s, a)\right]\right].
\]
Hint: You may find it helpful to first consider how things work out for f(s, a) = 1, ∀(s, a) ∈ S × A.
Hint: What is p(s_t = s)?
Solution:
\[
\begin{aligned}
\mathbb{E}_{\tau \sim \rho^\pi}\!\left[\sum_{t=0}^{\infty} \gamma^t f(s_t, a_t)\right]
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{\tau \sim \rho^\pi}\left[f(s_t, a_t)\right] \\
&= \mathbb{E}_{\tau \sim \rho^\pi}[f(s_0, a_0)] + \gamma\, \mathbb{E}_{\tau \sim \rho^\pi}[f(s_1, a_1)] + \gamma^2\, \mathbb{E}_{\tau \sim \rho^\pi}[f(s_2, a_2)] + \dots \\
&= \sum_{a_0} \pi(a_0 \mid s_0) f(s_0, a_0) + \gamma \sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1} T(s_1 \mid s_0, a_0) \sum_{a_1} \pi(a_1 \mid s_1) f(s_1, a_1) + \dots \\
&= \sum_{s} p(s_0 = s)\, \mathbb{E}_{a \sim \pi(s)}[f(s, a)] + \gamma \sum_{s} p(s_1 = s)\, \mathbb{E}_{a \sim \pi(s)}[f(s, a)] + \dots \\
&= \sum_{s} \sum_{t=0}^{\infty} \gamma^t\, p(s_t = s)\, \mathbb{E}_{a \sim \pi(s)}[f(s, a)] \\
&= \frac{1}{1 - \gamma} \sum_{s} d^\pi(s)\, \mathbb{E}_{a \sim \pi(s)}[f(s, a)]
 = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\!\left[\mathbb{E}_{a \sim \pi(s)}[f(s, a)]\right].
\end{aligned}
\]




¹ For a finite set X, ∆(X) refers to the set of categorical distributions with support on X or, equivalently, the ∆^{|X|−1} probability simplex.
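As an optional numerical sanity check of the identity just proved (not required by the assignment), the sketch below computes both sides exactly on a small invented tabular MDP; the arrays T, pi, f and the start state are all made up for illustration.

# Illustrative check: both sides of the identity on a tiny made-up tabular MDP.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, s0 = 4, 2, 0.9, 0
T = rng.random((nS, nA, nS)); T /= T.sum(axis=2, keepdims=True)    # T[s, a, s'] = T(s' | s, a)
pi = rng.random((nS, nA));    pi /= pi.sum(axis=1, keepdims=True)  # pi[s, a] = pi(a | s)
f = rng.random((nS, nA))                                           # arbitrary f(s, a)

P = np.einsum("sa,sap->sp", pi, T)        # P[s, s'] = sum_a pi(a|s) T(s'|s, a)
f_pi = (pi * f).sum(axis=1)               # E_{a~pi(s)}[f(s, a)] for each s

# Left-hand side: sum_t gamma^t E[f(s_t, a_t)], propagating p(s_t = .) forward (truncated sum).
lhs, p_t = 0.0, np.eye(nS)[s0]
for t in range(2000):
    lhs += gamma**t * (p_t @ f_pi)
    p_t = p_t @ P

# Right-hand side: (1 / (1 - gamma)) * E_{s~d^pi} E_{a~pi(s)}[f(s, a)],
# with d^pi = (1 - gamma) * (I - gamma * P^T)^{-1} e_{s0}.
d_pi = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P.T, np.eye(nS)[s0])
rhs = (d_pi @ f_pi) / (1 - gamma)

print(lhs, rhs)   # the two values should agree up to truncation error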
