Introduction To Reinforcement Learning - Wnzhang


PART I
Introduction to Reinforcement Learning
Weinan Zhang
Shanghai Jiao Tong University
http://wnzhang.net
Oct 14 2018, SJTU

What is Machine Learning
A more mathematical definition by Tom Mitchell:
- Machine learning is the study of algorithms that
  - improve their performance P
  - at some task T
  - based on experience E
  - with non-explicit programming
- A well-defined learning task is given by P, T, E

Supervised Learning
- Given the training dataset of (data, label) pairs
  D = {(x_i, y_i)}_{i=1,2,...,N}
  let the machine learn a function from data to label
  y_i ≈ f_θ(x_i)
- Learning is referred to as updating the parameter θ
- Learning objective: make the prediction close to the ground truth
  min_θ (1/N) Σ_{i=1}^{N} L(y_i, f_θ(x_i))

Unsupervised Learning
- Given the training dataset
  D = {x_i}_{i=1,2,...,N}
  let the machine learn the data's underlying patterns
- Sometimes build latent variables z → x
- Estimate the probabilistic density function (p.d.f.)
  p(x; θ) = Σ_z p(x|z; θ) p(z; θ)
- Maximize the log-likelihood of the training data
  max_θ (1/N) Σ_{i=1}^{N} log p(x_i; θ)

Two Kinds of Machine Learning
- Prediction
  - Predict the desired output given the data (supervised learning)
  - Generate data instances (unsupervised learning)
  - We mainly covered this category in previous lectures
- Decision Making
  - Take actions based on a particular state in a dynamic environment (reinforcement learning)
    - to transit to new states
    - to receive immediate reward
    - to maximize the cumulative reward over time
  - Learning from interaction

Machine Learning Categories
- Supervised Learning
  - To perform the desired output given the data and labels: p(y|x)
- Unsupervised Learning
  - To analyze and make use of the underlying data patterns/structures: p(x)
- Reinforcement Learning
  - To learn a policy of taking actions in a dynamic environment and acquire rewards: π(a|x)

RL Use Case 1: Interactive Recommendation
- Douban.fm music recommendation and feedback
- The machine needs to make decisions, not just predictions
Xiaoxue Zhao, Weinan Zhang et al. Interactive Collaborative Filtering. CIKM 2013.

RL Use Case 2: Robotics Control
- Stanford Autonomous Helicopter
- http://heli.stanford.edu/

RL Use Case 3: Robotics Control
- Ping pong robot
- https://www.youtube.com/watch?v=tIIJME8-au8

RL Use Case 4: Self-Driving Cars
- Google Self-Driving Cars
- https://www.google.com/selfdrivingcar/

RL Use Case 5: Game Playing
- Take actions given screen pixels
- https://gym.openai.com/envs#atari
Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.

Reinforcement Learning Materials
Our lecture on RL is mainly based on the materials from these masters.
- Prof. Richard Sutton (University of Alberta, Canada)
  - http://incompleteideas.net/sutton/index.html
  - Reinforcement Learning: An Introduction (2nd edition)
- Prof. David Silver (Google DeepMind and UCL, UK)
  - UCL Reinforcement Learning Course
- Prof. Andrew Ng (Stanford University, US)
  - http://www.andrewng.org/
  - Machine Learning (CS229) Lecture Notes 12: RL
  - http://cs229.stanford.edu/materials.html

Content
- Introduction to Reinforcement Learning
- Model-based Reinforcement Learning
  - Markov Decision Process
  - Planning by Dynamic Programming
- Model-free Reinforcement Learning
  - On-policy SARSA
  - Off-policy Q-learning
  - Model-free Prediction and Control

Content
- Introduction to Reinforcement Learning
- Model-based Reinforcement Learning
  - Markov Decision Process
  - Planning by Dynamic Programming
- Model-free Reinforcement Learning
  - On-policy SARSA
  - Off-policy Q-learning
  - Model-free Prediction and Control

Reinforcement Learning
- Learning from interaction
- Given the current situation, what to do next in order to maximize utility?
[Diagram: agent-environment loop with observation, action and reward]

Reinforcement Learning Definition
- A computational approach to learning from interaction in order to achieve a goal
  [Diagram: agent-environment loop with observation, action and reward]
- Three aspects
  - Sensation: sense the state of the environment to some extent
  - Action: able to take actions that affect the state and achieve the goal
  - Goal: maximize the cumulative reward over time

Reinforcement Learning
- At each step t, the agent:
  - Receives observation O_t
  - Receives scalar reward R_t
  - Executes action A_t
- The environment:
  - Receives action A_t
  - Emits observation O_{t+1}
  - Emits scalar reward R_{t+1}
- t increments at each environment step

Elements of RL Systems
- History is the sequence of observations, actions and rewards
  H_t = O_1, R_1, A_1, O_2, R_2, A_2, ..., O_{t-1}, R_{t-1}, A_{t-1}, O_t, R_t
  - i.e. all observable variables up to time t
  - E.g., the sensorimotor stream of a robot or embodied agent
- What happens next depends on the history:
  - The agent selects actions
  - The environment selects observations/rewards
- State is the information used to determine what happens next (actions, observations, rewards)
  - Formally, state is a function of the history
    S_t = f(H_t)

Elements of RL Systems
- Policy is the learning agent's way of behaving at a given time
- It is a map from state to action
  - Deterministic policy: a = π(s)
  - Stochastic policy: π(a|s) = P(A_t = a | S_t = s)

Elements of RL Systems
- Reward
  - A scalar defining the goal in an RL problem
  - For an immediate sense of what is good
- Value function
  - State value is a scalar specifying what is good in the long run
  - Value function is a prediction of the cumulative future reward
  - Used to evaluate the goodness/badness of states (given the current policy)
    v_π(s) = E_π[R_{t+1} + γ R_{t+2} + γ² R_{t+3} + ... | S_t = s]

Elements of RL Systems
- A model of the environment mimics the behavior of the environment
  - Predicts the next state
    P^a_{ss'} = P[S_{t+1} = s' | S_t = s, A_t = a]
  - Predicts the next (immediate) reward
    R^a_s = E[R_{t+1} | S_t = s, A_t = a]

Maze Example
- State: agent's location
- Action: N, E, S, W

Maze Example
- State: agent's location
- Action: N, E, S, W
- State transition: move to the next grid according to the action
  - No move if the action is into the wall

Maze Example
- State: agent's location
- Action: N, E, S, W
- State transition: move to the next grid according to the action
- Reward: -1 per time step

Maze Example
- State: agent's location
- Action: N, E, S, W
- State transition: move to the next grid according to the action
- Reward: -1 per time step
- Given a policy as shown above
  - Arrows represent policy π(s) for each state s

Maze Example
[Figure: maze grid with numbers representing the value v_π(s) of each state s, ranging from -24 far from the goal down to -1 next to it]
- State: agent's location
- Action: N, E, S, W
- State transition: move to the next grid according to the action
- Reward: -1 per time step

Categorizing RL Agents
- Model-based RL
  - Policy and/or value function
  - Model of the environment
  - E.g., the maze game above, game of Go
- Model-free RL
  - Policy and/or value function
  - No model of the environment
  - E.g., general Atari game playing

Atari Example
- Rules of the game are unknown
- Learn from interactive game-play
- Pick actions on the joystick, see pixels and scores

Categorizing RL Agents
- Value based
  - No policy (implicit)
  - Value function
- Policy based
  - Policy
  - No value function
- Actor Critic
  - Policy
  - Value function

Content
- Introduction to Reinforcement Learning
- Model-based Reinforcement Learning
  - Markov Decision Process
  - Planning by Dynamic Programming
- Model-free Reinforcement Learning
  - On-policy SARSA
  - Off-policy Q-learning
  - Model-free Prediction and Control

Markov Decision Process
- Markov decision processes (MDPs) provide a mathematical framework
  - for modeling decision making
  - in situations where outcomes are partly random and partly under the control of a decision maker

Markov Decision Process
- Markov decision processes (MDPs) provide a mathematical framework
  - for modeling decision making
  - in situations where outcomes are partly random and partly under the control of a decision maker
- MDPs formally describe an environment for RL
  - where the environment is FULLY observable
  - i.e. the current state completely characterizes the process (Markov property)

Markov Property
"The future is independent of the past given the present"
- Definition: a state S_t is Markov if and only if
  P[S_{t+1} | S_t] = P[S_{t+1} | S_1, ..., S_t]
- Properties
  - The state captures all relevant information from the history
  - Once the state is known, the history may be thrown away
  - i.e. the state is a sufficient statistic of the future

Markov Decision Process
A Markov decision process is a tuple (S, A, {P_sa}, γ, R)
- S is the set of states
  - E.g., location in a maze, or current screen in an Atari game
- A is the set of actions
  - E.g., move N, E, S, W, or the direction of the joystick and the buttons
- P_sa are the state transition probabilities
  - For each state s ∈ S and action a ∈ A, P_sa is a distribution over the next state in S
- γ ∈ [0,1] is the discount factor for the future reward
- R : S × A → ℝ is the reward function
  - Sometimes the reward is only assigned to the state

Markov Decision Process
The dynamics of an MDP proceed as follows:
- Start in a state s_0
- The agent chooses some action a_0 ∈ A
- The agent gets the reward R(s_0, a_0)
- The MDP randomly transits to some successor state s_1 ∼ P_{s_0 a_0}
- This proceeds iteratively
  s_0 --a_0, R(s_0,a_0)--> s_1 --a_1, R(s_1,a_1)--> s_2 --a_2, R(s_2,a_2)--> s_3 ...
- Until a terminal state s_T, or proceeds with no end
- The total payoff of the agent is
  R(s_0, a_0) + γ R(s_1, a_1) + γ² R(s_2, a_2) + ...

Reward on State Only
- In a large part of cases, the reward is only assigned to the state
  - E.g., in the maze game, the reward is on the location
  - In the game of Go, the reward is only based on the final territory
- The reward function R(s) : S → ℝ
- MDPs proceed as
  s_0 --a_0, R(s_0)--> s_1 --a_1, R(s_1)--> s_2 --a_2, R(s_2)--> s_3 ...
- Cumulative reward (total payoff)
  R(s_0) + γ R(s_1) + γ² R(s_2) + ...

MDP Goal and Policy
- The goal is to choose actions over time to maximize the expected cumulative reward
  E[R(s_0) + γ R(s_1) + γ² R(s_2) + ...]
- γ ∈ [0,1] is the discount factor for the future reward, which makes the agent prefer immediate reward to future reward
  - In the finance case, one dollar today is more valuable than one dollar tomorrow
- Given a particular policy π(s) : S → A
  - i.e. take the action a = π(s) at state s
- Define the value function for π
  V^π(s) = E[R(s_0) + γ R(s_1) + γ² R(s_2) + ... | s_0 = s, π]
  - i.e. the expected cumulative reward given the start state and taking actions according to π

Bellman Equation for Value Function
- Define the value function for π
  V^π(s) = E[R(s_0) + γ R(s_1) + γ² R(s_2) + ... | s_0 = s, π]
         = R(s) + γ Σ_{s'∈S} P_{sπ(s)}(s') V^π(s')
  i.e. (immediate reward) + (time decay) × (state transition) × (value of the next state)
- This recursion is the Bellman equation
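To make the recursion concrete, here is a minimal sketch (not from the slides) of iterative policy evaluation on a tabular MDP. The array layout, function name, and tolerance are illustrative assumptions.

```python
import numpy as np

def policy_evaluation(P, R, policy, gamma=0.9, tol=1e-8):
    """Iteratively apply the Bellman expectation backup
    V(s) = R(s) + gamma * sum_{s'} P[s, policy[s], s'] * V(s')
    until the values stop changing.

    P: array of shape [S, A, S] with transition probabilities P_sa(s')
    R: array of shape [S] with state rewards R(s)
    policy: array of shape [S] with the deterministic action pi(s)
    """
    n_states = len(R)
    V = np.zeros(n_states)
    while True:
        V_new = np.array([
            R[s] + gamma * P[s, policy[s]] @ V for s in range(n_states)
        ])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```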

Optimal Value Function
- The optimal value function for each state s is the best possible sum of discounted rewards that can be attained by any policy
  V*(s) = max_π V^π(s)
- The Bellman equation for the optimal value function
  V*(s) = R(s) + max_{a∈A} γ Σ_{s'∈S} P_{sa}(s') V*(s')
- The optimal policy
  π*(s) = argmax_{a∈A} Σ_{s'∈S} P_{sa}(s') V*(s')
- For every state s and every policy π
  V*(s) = V^{π*}(s) ≥ V^π(s)

Value Iteration & Policy Iteration
- Note that the value function and policy are correlated
  V^π(s) = R(s) + γ Σ_{s'∈S} P_{sπ(s)}(s') V^π(s')
  π(s) = argmax_{a∈A} Σ_{s'∈S} P_{sa}(s') V^π(s')
- It is feasible to perform iterative updates towards the optimal value function and optimal policy
  - Value iteration
  - Policy iteration

Value Iteration
- For an MDP with finite state and action spaces, |S| < ∞, |A| < ∞
- Value iteration is performed as
  1. For each state s, initialize V(s) := 0.
  2. Repeat until convergence {
       For each state, update
       V(s) := R(s) + max_{a∈A} γ Σ_{s'∈S} P_{sa}(s') V(s')
     }
- Note that there is no explicit policy in the above calculation
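A minimal sketch of this loop, assuming the same tabular P and R arrays as in the policy-evaluation sketch above; the tolerance and the greedy read-out at the end are illustrative choices.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Bellman optimality backup:
    V(s) := R(s) + max_a gamma * sum_{s'} P_sa(s') V(s')."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Back up every (state, action) pair, then take the max over actions.
        Q = R[:, None] + gamma * np.einsum('sap,p->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            # No explicit policy is kept during the loop; a greedy one
            # can be read off from Q at the end.
            return V_new, Q.argmax(axis=1)
        V = V_new
```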

Synchronous vs. Asynchronous VI
- Synchronous value iteration stores two copies of the value function
  1. For all s in S
     V_new(s) ← max_{a∈A} ( R(s) + γ Σ_{s'∈S} P_{sa}(s') V_old(s') )
  2. Update V_old(s) ← V_new(s)
- In-place asynchronous value iteration stores one copy of the value function
  1. For all s in S
     V(s) ← max_{a∈A} ( R(s) + γ Σ_{s'∈S} P_{sa}(s') V(s') )

Value Iteration Example: Shortest Path

Policy Iteration
- For an MDP with finite state and action spaces, |S| < ∞, |A| < ∞
- Policy iteration is performed as
  1. Initialize π randomly
  2. Repeat until convergence {
       a) Let V := V^π
       b) For each state, update
          π(s) := argmax_{a∈A} Σ_{s'∈S} P_{sa}(s') V(s')
     }
- The value function update step could be time-consuming
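A corresponding sketch of policy iteration, reusing the hypothetical policy_evaluation helper sketched earlier; step (a) is the exact evaluation the slide warns can be costly.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Alternate (a) exact evaluation of the current policy with
    (b) greedy improvement pi(s) := argmax_a sum_{s'} P_sa(s') V(s')."""
    n_states, n_actions, _ = P.shape
    policy = np.zeros(n_states, dtype=int)  # arbitrary initial policy
    while True:
        V = policy_evaluation(P, R, policy, gamma)                 # step (a)
        new_policy = np.einsum('sap,p->sa', P, V).argmax(axis=1)   # step (b)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```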

Policy Iteration
- Policy evaluation: estimate V^π
  - Iterative policy evaluation
- Policy improvement: generate π' ≥ π
  - Greedy policy improvement

Evaluating a Random Policy in a Small Gridworld
- r = -1 on all transitions
- Undiscounted episodic MDP (γ = 1)
- Nonterminal states 1, ..., 14
- Two terminal states (shaded squares)
- Actions leading out of the grid leave the state unchanged
- Reward is -1 until the terminal state is reached
- Agent follows a uniform random policy
  π(n|·) = π(e|·) = π(s|·) = π(w|·) = 0.25

Evaluating a Random Policy in a Small Gridworld
[Figure: V_k for the random policy at k = 0, 1, 2, alongside the greedy policy w.r.t. V_k and the random policy]

Evaluating a Random Policy in a Small Gridworld
[Figure: V_k for the random policy at k = 3, k = 10 and k = ∞ (V := V^π), alongside the greedy policy w.r.t. V_k, which becomes the optimal policy]

Value Iteration vs. Policy Iteration
Value iteration:
  1. For each state s, initialize V(s) := 0.
  2. Repeat until convergence {
       For each state, update
       V(s) := R(s) + max_{a∈A} γ Σ_{s'∈S} P_{sa}(s') V(s')
     }
Policy iteration:
  1. Initialize π randomly
  2. Repeat until convergence {
       a) Let V := V^π
       b) For each state, update
          π(s) := argmax_{a∈A} Σ_{s'∈S} P_{sa}(s') V(s')
     }
Remarks:
  1. Value iteration is a greedy update strategy
  2. In policy iteration, the value function update by the Bellman equation is costly
  3. For small-space MDPs, policy iteration is often very fast and converges quickly
  4. For large-space MDPs, value iteration is more practical (efficient)
  5. If there is no state-transition loop, it is better to use value iteration
My point of view: value iteration is like SGD and policy iteration is like BGD

Learning an MDP Model
- So far we have been focused on
  - Calculating the optimal value function
  - Learning the optimal policy
  given a known MDP model
  - i.e. the state transition P_sa(s') and reward function R(s) are explicitly given
- In realistic problems, the state transition and reward function are often not explicitly given
- For example, we have only observed some episodes
  Episode 1: s_0^(1) --a_0^(1), R(s_0)^(1)--> s_1^(1) --a_1^(1), R(s_1)^(1)--> s_2^(1) --a_2^(1), R(s_2)^(1)--> s_3^(1) ... s_T^(1)
  Episode 2: s_0^(2) --a_0^(2), R(s_0)^(2)--> s_1^(2) --a_1^(2), R(s_1)^(2)--> s_2^(2) --a_2^(2), R(s_2)^(2)--> s_3^(2) ... s_T^(2)

Learning an MDP Model
  Episode 1: s_0^(1) --a_0^(1), R(s_0)^(1)--> s_1^(1) --a_1^(1), R(s_1)^(1)--> s_2^(1) --a_2^(1), R(s_2)^(1)--> s_3^(1) ... s_T^(1)
  Episode 2: s_0^(2) --a_0^(2), R(s_0)^(2)--> s_1^(2) --a_1^(2), R(s_1)^(2)--> s_2^(2) --a_2^(2), R(s_2)^(2)--> s_3^(2) ... s_T^(2)
- Learn an MDP model from "experience"
  - Learning the state transition probabilities P_sa(s')
    P_sa(s') = (#times we took action a in state s and got to state s') / (#times we took action a in state s)
  - Learning the reward R(s), i.e. the expected immediate reward
    R(s) = average of the observed rewards {R(s)^(i)}
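As a sketch of these counting estimates (function and variable names are illustrative), assuming each episode is recorded as a list of (s, a, r, s') transitions:

```python
from collections import defaultdict

def estimate_mdp(episodes, n_states):
    """Estimate P_sa(s') by visit counts and R(s) by the average observed reward.

    episodes: list of episodes, each a list of (s, a, r, s_next) tuples.
    """
    sa_counts = defaultdict(int)      # (s, a)      -> #times action a taken in s
    sas_counts = defaultdict(int)     # (s, a, s')  -> #times that transition occurred
    reward_sum = defaultdict(float)   # s -> total observed reward at s
    reward_cnt = defaultdict(int)     # s -> #reward observations at s

    for episode in episodes:
        for s, a, r, s_next in episode:
            sa_counts[(s, a)] += 1
            sas_counts[(s, a, s_next)] += 1
            reward_sum[s] += r
            reward_cnt[s] += 1

    def P(s, a, s_next):
        if sa_counts[(s, a)] == 0:
            return 1.0 / n_states     # unseen (s, a): fall back to uniform
        return sas_counts[(s, a, s_next)] / sa_counts[(s, a)]

    def R(s):
        return reward_sum[s] / reward_cnt[s] if reward_cnt[s] else 0.0

    return P, R
```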

Learning Model and Optimizing Policy
Algorithm:
  1. Initialize π randomly.
  2. Repeat until convergence {
       a) Execute π in the MDP for some number of trials
       b) Using the accumulated experience in the MDP, update our estimates for P_sa and R
       c) Apply value iteration with the estimated P_sa and R to get the new estimated value function V
       d) Update π to be the greedy policy w.r.t. V
     }

Learning an MDP Model
- In realistic problems, the state transition and reward function are often not explicitly given
- For example, we have only observed some episodes
  Episode 1: s_0^(1) --a_0^(1), R(s_0)^(1)--> s_1^(1) --a_1^(1), R(s_1)^(1)--> s_2^(1) --a_2^(1), R(s_2)^(1)--> s_3^(1) ... s_T^(1)
  Episode 2: s_0^(2) --a_0^(2), R(s_0)^(2)--> s_1^(2) --a_1^(2), R(s_1)^(2)--> s_2^(2) --a_2^(2), R(s_2)^(2)--> s_3^(2) ... s_T^(2)
- Another branch of solutions is to directly learn the value & policy from experience without building an MDP
  - i.e. Model-free Reinforcement Learning

Content
- Introduction to Reinforcement Learning
- Model-based Reinforcement Learning
  - Markov Decision Process
  - Planning by Dynamic Programming
- Model-free Reinforcement Learning
  - Model-free Prediction
    - Monte-Carlo and Temporal Difference
  - Model-free Control
    - On-policy SARSA and off-policy Q-learning

Content
- Introduction to Reinforcement Learning
- Model-based Reinforcement Learning
  - Markov Decision Process
  - Planning by Dynamic Programming
- Model-free Reinforcement Learning
  - Model-free Prediction
    - Monte-Carlo and Temporal Difference
  - Model-free Control
    - On-policy SARSA and off-policy Q-learning

Model-free Reinforcement Learning
- In realistic problems, the state transition and reward function are often not explicitly given
- For example, we have only observed some episodes
  Episode 1: s_0^(1) --a_0^(1), R(s_0)^(1)--> s_1^(1) --a_1^(1), R(s_1)^(1)--> s_2^(1) --a_2^(1), R(s_2)^(1)--> s_3^(1) ... s_T^(1)
  Episode 2: s_0^(2) --a_0^(2), R(s_0)^(2)--> s_1^(2) --a_1^(2), R(s_1)^(2)--> s_2^(2) --a_2^(2), R(s_2)^(2)--> s_3^(2) ... s_T^(2)
- Model-free RL is to directly learn value & policy from experience without building an MDP
- Key steps: (1) estimate the value function; (2) optimize the policy

Value Function Estimation
- In model-based RL (MDP), the value function is calculated by dynamic programming
  V^π(s) = E[R(s_0) + γ R(s_1) + γ² R(s_2) + ... | s_0 = s, π]
         = R(s) + γ Σ_{s'∈S} P_{sπ(s)}(s') V^π(s')
- Now in model-free RL
  - We cannot directly know P_sa and R
  - But we have a list of experiences to estimate the values
  Episode 1: s_0^(1) --a_0^(1), R(s_0)^(1)--> s_1^(1) --a_1^(1), R(s_1)^(1)--> s_2^(1) --a_2^(1), R(s_2)^(1)--> s_3^(1) ... s_T^(1)
  Episode 2: s_0^(2) --a_0^(2), R(s_0)^(2)--> s_1^(2) --a_1^(2), R(s_1)^(2)--> s_2^(2) --a_2^(2), R(s_2)^(2)--> s_3^(2) ... s_T^(2)

Monte-Carlo Methods
- Monte-Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results
- Example: to calculate a circle's surface
  Circle Surface = Square Surface × (#points in circle / #points in total)

Monte-Carlo Methods
- Go: to estimate the winning rate given the current state
  Win Rate(s) = (#winning simulations started from s) / (#simulations started from s in total)

Monte-Carlo Value Estimation
- Goal: learn V^π from episodes of experience under policy π
  s_0^(i) --a_0^(i)--> R_1^(i), s_1^(i) --a_1^(i)--> R_2^(i), s_2^(i) --a_2^(i)--> R_3^(i), s_3^(i) ... s_T^(i) ∼ π
- Recall that the return is the total discounted reward
  G_t = R_{t+1} + γ R_{t+2} + ... + γ^{T-1} R_T
- Recall that the value function is the expected return
  V^π(s) = E[R(s_0) + γ R(s_1) + γ² R(s_2) + ... | s_0 = s, π]
         = E[G_t | s_t = s, π]
         ≈ (1/N) Σ_{i=1}^{N} G_t^(i)
  - Sample N episodes from state s using policy π
  - Calculate the average of the cumulative rewards
- Monte-Carlo policy evaluation uses the empirical mean return instead of the expected return

Monte-Carlo Value Estimation
- Idea: V(S_t) ≈ (1/N) Σ_{i=1}^{N} G_t^(i)
- Implementation: V(S_t) ← V(S_t) + α (G_t − V(S_t))
- MC methods learn directly from episodes of experience
- MC is model-free: no knowledge of MDP transitions / rewards
- MC learns from complete episodes: no bootstrapping (discussed later)
- MC uses the simplest possible idea: value = mean return
- Caveat: can only apply MC to episodic MDPs
  - All episodes must terminate
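A sketch of the incremental every-visit update above; it assumes an episode is recorded as a list of (state, reward) pairs, with the reward being the R_{t+1} received on leaving each state, and the step size and discount are illustrative.

```python
def mc_update(V, episode, alpha=0.1, gamma=1.0):
    """Every-visit Monte-Carlo: after the episode ends, move each V(S_t)
    towards the actual return G_t = R_{t+1} + gamma*R_{t+2} + ...

    V: dict-like mapping state -> value, e.g. collections.defaultdict(float)
    episode: list of (state, reward) pairs, reward being R_{t+1}
    """
    G = 0.0
    # Walk the episode backwards so the return accumulates in one pass.
    for state, reward in reversed(episode):
        G = reward + gamma * G
        V[state] += alpha * (G - V[state])
    return V
```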

Temporal-Difference Learning
  G_t = R_{t+1} + γ R_{t+2} + γ² R_{t+3} + ... ≈ R_{t+1} + γ V(S_{t+1})
  V(S_t) ← V(S_t) + α (R_{t+1} + γ V(S_{t+1}) − V(S_t))
  (observation R_{t+1} plus a guess of the future γ V(S_{t+1}))
- TD methods learn directly from episodes of experience
- TD is model-free: no knowledge of MDP transitions / rewards
- TD learns from incomplete episodes, by bootstrapping
- TD updates a guess towards a guess

Monte Carlo vs. Temporal Difference
- The same goal: learn V^π from episodes of experience under policy π
- Incremental every-visit Monte-Carlo
  - Update value V(S_t) toward the actual return G_t
    V(S_t) ← V(S_t) + α (G_t − V(S_t))
- Simplest temporal-difference learning algorithm: TD
  - Update value V(S_t) toward the estimated return R_{t+1} + γ V(S_{t+1})
    V(S_t) ← V(S_t) + α (R_{t+1} + γ V(S_{t+1}) − V(S_t))
  - TD target: R_{t+1} + γ V(S_{t+1})
  - TD error: δ_t = R_{t+1} + γ V(S_{t+1}) − V(S_t)
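For contrast with the MC sketch, here is a sketch of the TD update applied online, one transition at a time; the parameter defaults and the done flag convention are illustrative.

```python
def td0_update(V, s, r, s_next, done, alpha=0.1, gamma=1.0):
    """TD(0): move V(s) towards the bootstrapped target r + gamma*V(s').
    When the episode terminates, the target is just the observed reward."""
    target = r if done else r + gamma * V[s_next]
    td_error = target - V[s]
    V[s] += alpha * td_error
    return td_error
```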

Driving Home Example
State              | Elapsed Time (minutes) | Predicted Time to Go | Predicted Total Time
Leaving office     | 0                      | 30                   | 30
Reach car, raining | 5                      | 35                   | 40
Exit highway       | 20                     | 15                   | 35
Behind truck       | 30                     | 10                   | 40
Home street        | 40                     | 3                    | 43
Arrive home        | 43                     | 0                    | 43

Driving Home Example: MC vs. TD

Advantages and Disadvantages of MC vs. TD
- TD can learn before knowing the final outcome
  - TD can learn online after every step
  - MC must wait until the end of the episode before the return is known
- TD can learn without the final outcome
  - TD can learn from incomplete sequences
  - MC can only learn from complete sequences
  - TD works in continuing (non-terminating) environments
  - MC only works for episodic (terminating) environments

Bias/Variance Trade-Off
- The return G_t = R_{t+1} + γ R_{t+2} + ... + γ^{T-1} R_T is an unbiased estimate of V^π(S_t)
- The true TD target R_{t+1} + γ V^π(S_{t+1}) is an unbiased estimate of V^π(S_t)
- The TD target R_{t+1} + γ V(S_{t+1}) is a biased estimate of V^π(S_t), since V(S_{t+1}) is the current estimate
- The TD target is of much lower variance than the return
  - The return depends on many random actions, transitions and rewards
  - The TD target depends on one random action, transition and reward

Advantages and Disadvantages of MC vs. TD (2)
- MC has high variance, zero bias
  - Good convergence properties (even with function approximation)
  - Not very sensitive to initial value
  - Very simple to understand and use
- TD has low variance, some bias
  - Usually more efficient than MC
  - TD converges to V^π(S_t) (but not always with function approximation)
  - More sensitive to initial value than MC

Random Walk Example

Random Walk Example
- MC: V(S_t) ← V(S_t) + α (G_t − V(S_t))
- TD: V(S_t) ← V(S_t) + α (R_{t+1} + γ V(S_{t+1}) − V(S_t))

Monte-Carlo Backup
V(S_t) ← V(S_t) + α (G_t − V(S_t))

Temporal-Difference Backup
V(S_t) ← V(S_t) + α (R_{t+1} + γ V(S_{t+1}) − V(S_t))

Dynamic Programming Backup
V(S_t) ← E[R_{t+1} + γ V(S_{t+1})]

Content
- Introduction to Reinforcement Learning
- Model-based Reinforcement Learning
  - Markov Decision Process
  - Planning by Dynamic Programming
- Model-free Reinforcement Learning
  - Model-free Prediction
    - Monte-Carlo and Temporal Difference
  - Model-free Control
    - On-policy SARSA and off-policy Q-learning

Uses of Model-Free Control
- Some example problems that can be modeled as MDPs
  - Elevator
  - Robocup soccer
  - Parallel parking
  - Atari & StarCraft
  - Ship steering
  - Portfolio management
  - Bioreactor
  - Protein folding
  - Helicopter
  - Robot walking
  - Aeroplane logistics
  - Game of Go
- For most real-world problems, either:
  - the MDP model is unknown, but experience can be sampled, or
  - the MDP model is known, but is too big to use, except by samples
- Model-free control can solve these problems

On- and Off-Policy Learning
- Two categories of model-free RL
  - On-policy learning
    - "Learn on the job"
    - Learn about policy π from experience sampled from π
  - Off-policy learning
    - "Look over someone's shoulder"
    - Learn about policy π from experience sampled from another policy μ

State Value and Action Value
  G_t = R_{t+1} + γ R_{t+2} + ... + γ^{T-1} R_T
- State value
  - The state-value function V^π(s) of an MDP is the expected return starting from state s and then following policy π
    V^π(s) = E_π[G_t | S_t = s]
- Action value
  - The action-value function Q^π(s,a) of an MDP is the expected return starting from state s, taking action a, and then following policy π
    Q^π(s,a) = E_π[G_t | S_t = s, A_t = a]

Bellman Expectation Equation
- The state-value function V^π(s) can be decomposed into the immediate reward plus the discounted value of the successor state
  V^π(s) = E_π[R_{t+1} + γ V^π(S_{t+1}) | S_t = s]
- The action-value function Q^π(s,a) can similarly be decomposed
  Q^π(s,a) = E_π[R_{t+1} + γ Q^π(S_{t+1}, A_{t+1}) | S_t = s, A_t = a]

State Value and Action Value
[Backup diagrams relating V^π(s) and Q^π(s,a)]
  V^π(s) = Σ_{a∈A} π(a|s) Q^π(s,a)
  Q^π(s,a) = R(s,a) + γ Σ_{s'∈S} P_{sa}(s') V^π(s')

Model-Free Policy Iteration
- Given the state-value function V(s) and action-value function Q(s,a), model-free policy iteration shall use the action-value function
- Greedy policy improvement over V(s) requires a model of the MDP
  π_new(s) = argmax_{a∈A} [ R(s,a) + γ Σ_{s'∈S} P_{sa}(s') V^π(s') ]
  - We don't know the transition probability
- Greedy policy improvement over Q(s,a) is model-free
  π_new(s) = argmax_{a∈A} Q(s,a)

Generalized Policy Iteration with Action-Value Function
- Policy evaluation: Monte-Carlo policy evaluation, Q = Q^π
- Policy improvement: greedy policy improvement?

Example of Greedy Action Selection
- Greedy policy improvement over Q(s,a) is model-free
  π_new(s) = argmax_{a∈A} Q(s,a)
- Two doors:
  - Left: 20% reward 0, 80% reward 5
  - Right: 50% reward 1, 50% reward 3
- Given the example on the right
  - What if the first action is to choose the left door and observe reward 0?
  - The policy would be suboptimal if there is no exploration
"Behind one door is tenure – behind the other is flipping burgers at McDonald's."

ɛ-Greedy Policy Exploration
- Simplest idea for ensuring continual exploration
- All m actions are tried with non-zero probability
  - With probability 1-ɛ, choose the greedy action
  - With probability ɛ, choose an action at random
  π(a|s) = ɛ/m + 1 - ɛ   if a = argmax_{a'∈A} Q(s,a')
           ɛ/m           otherwise
- Theorem
  - For any ɛ-greedy policy π, the ɛ-greedy policy π' w.r.t. Q^π is an improvement, i.e. V^{π'}(s) ≥ V^π(s)
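A sketch of ɛ-greedy action selection matching the definition above; the Q-table layout (a dict keyed by (state, action) pairs) is an assumption used throughout the sketches below.

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """With probability epsilon pick a uniformly random action,
    otherwise pick the greedy action argmax_a Q(state, a)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```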

Generalized Policy Iteration with Action-Value Function
- Policy evaluation: Monte-Carlo policy evaluation, Q = Q^π
- Policy improvement: ɛ-greedy policy improvement

Monte-Carlo Control
Every episode:
- Policy evaluation: Monte-Carlo policy evaluation, Q ≈ Q^π
- Policy improvement: ɛ-greedy policy improvement

MC Control vs. TD Control
- Temporal-difference (TD) learning has several advantages over Monte-Carlo (MC)
  - Lower variance
  - Online
  - Incomplete sequences
- Natural idea: use TD instead of MC in our control loop
  - Apply TD to update the action value Q(s,a)
  - Use ɛ-greedy policy improvement
  - Update the action-value function every time-step

SARSA
- For each state-action-reward-state-action tuple generated by the current policy:
  - At state s, take action a
  - Observe reward r
  - Transit to the next state s'
  - At state s', take action a'
- Updating action-value functions with SARSA
  Q(s,a) ← Q(s,a) + α (r + γ Q(s',a') − Q(s,a))

On-Policy Control with SARSA
Every time-step:
- Policy evaluation: SARSA, Q(s,a) ← Q(s,a) + α (r + γ Q(s',a') − Q(s,a))
- Policy improvement: ɛ-greedy policy improvement

SARSA Algorithm
NOTE: on-policy TD control samples actions by the current policy, i.e., the two 'A's in SARSA are both chosen by the current policy
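Putting the pieces together, a sketch of one SARSA training episode. It is not the slides' pseudocode: env is a hypothetical environment whose reset() returns a state and whose step(a) returns (next_state, reward, done), Q is a defaultdict(float) keyed by (state, action), and epsilon_greedy is the helper sketched earlier.

```python
def sarsa_episode(env, Q, actions, alpha=0.1, gamma=1.0, epsilon=0.1):
    """On-policy TD control: both the executed action and the action used
    in the update target come from the current epsilon-greedy policy."""
    s = env.reset()
    a = epsilon_greedy(Q, s, actions, epsilon)
    done = False
    while not done:
        s_next, r, done = env.step(a)
        a_next = epsilon_greedy(Q, s_next, actions, epsilon)
        target = r if done else r + gamma * Q[(s_next, a_next)]
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s, a = s_next, a_next
    return Q
```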

SARSA Example: Windy Gridworld
- Reward = -1 per time-step until reaching the goal
- Undiscounted

SARSA Example: Windy Gridworld
[Figure: an optimal trajectory and the learning curve]
Note: as the training proceeds, the SARSA policy achieves the goal more and more quickly

Off-Policy Learning
- Evaluate the target policy π(a|s) to compute V^π(s) or Q^π(s,a)
- While following the behavior policy μ(a|s)
  {s_1, a_1, r_2, s_2, a_2, ..., s_T} ∼ μ
- Why is off-policy learning important?
  - Learn from observing humans or other agents
  - Re-use experience generated from old policies
  - Learn about the optimal policy while following an exploratory policy
  - Learn about multiple policies while following one policy
- An example of my research in MSR Cambridge
  - Collective Noise Contrastive Estimation for Policy Transfer Learning. AAAI 2016.

Q-Learning
- For off-policy learning of the action value Q(s,a)
- No importance sampling is required (why?)
- The next action is chosen using the behavior policy: a_{t+1} ∼ μ(·|s_t)
- But we consider an alternative successor action a' ∼ π(·|s_t)
- And update Q(s_t, a_t) towards the value of the alternative action
  Q(s_t, a_t) ← Q(s_t, a_t) + α (r_{t+1} + γ Q(s_{t+1}, a') − Q(s_t, a_t))
  (a' is the action from π, not μ)

Off-Policy Control with Q-Learning
- Allow both behavior and target policies to improve
- The target policy π is greedy w.r.t. Q(s,a)
  π(s_{t+1}) = argmax_{a'} Q(s_{t+1}, a')
- The behavior policy μ is, e.g., an ɛ-greedy policy w.r.t. Q(s,a)
- The Q-learning target then simplifies
  r_{t+1} + γ Q(s_{t+1}, a') = r_{t+1} + γ Q(s_{t+1}, argmax_{a'} Q(s_{t+1}, a'))
                             = r_{t+1} + γ max_{a'} Q(s_{t+1}, a')
- Q-learning update
  Q(s_t, a_t) ← Q(s_t, a_t) + α (r_{t+1} + γ max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t))

Q-Learning Control Algorithm
- At state s, take action a
- Observe reward r
- Transit to the next state s'
- At state s', take action argmax_{a'} Q(s', a')
  Q(s_t, a_t) ← Q(s_t, a_t) + α (r_{t+1} + γ max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t))
- Theorem: Q-learning control converges to the optimal action-value function
  Q(s,a) → Q*(s,a)
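And the off-policy counterpart: a Q-learning sketch under the same hypothetical env and Q-table conventions as the SARSA sketch. The behavior action comes from epsilon_greedy, while the update target takes the max over actions, i.e. the greedy target policy.

```python
def q_learning_episode(env, Q, actions, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Off-policy TD control: behave epsilon-greedily, but update towards
    r + gamma * max_a' Q(s', a')."""
    s = env.reset()
    done = False
    while not done:
        a = epsilon_greedy(Q, s, actions, epsilon)   # behavior policy mu
        s_next, r, done = env.step(a)
        best_next = max(Q[(s_next, a_next)] for a_next in actions)
        target = r if done else r + gamma * best_next
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next
    return Q
```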

Q-Learning Control Algorithm
- At state s, take action a
- Observe reward r
- Transit to the next state s'
- At state s', take action argmax_{a'} Q(s', a')
  Q(s_t, a_t) ← Q(s_t, a_t) + α (r_{t+1} + γ max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t))
- Why is Q-learning an off-policy control method?
  - Learning from SARS generated by another policy μ
  - The first action a and the corresponding reward r are from μ
  - The next action a' is picked by the target policy
    π(s_{t+1}) = argmax_{a'} Q(s_{t+1}, a')

SARSA vs. Q-Learning Experiments
- Cliff-walking
  - Undiscounted reward
  - Episodic task
  - Reward = -1 on all transitions
  - Stepping into the cliff area incurs -100 reward and sends the agent back to the start
- Why are the results like this?
[Figure: cliff-walking gridworld and learning curves of SARSA and Q-learning, both with an ɛ-greedy policy, ɛ = 0.1]

PART II
Approximation Methods in Reinforcement Learning
Weinan Zhang
Shanghai Jiao Tong University
http://wnzhang.net
Oct 14 2018, SJTU

Review of What We Have Learned
- Model-based dynamic programming
  - Value iteration: V(s) = R(s) + max_{a∈A} γ Σ_{s'∈S} P_{sa}(s') V(s')
  - Policy iteration: π(s) = argmax_{a∈A} Σ_{s'∈S} P_{sa}(s') V(s')
- Model-free reinforcement learning
  - On-policy MC: V(s_t) ← V(s_t) + α (G_t − V(s_t))
  - On-policy TD: V(s_t) ← V(s_t) + α (r_{t+1} + γ V(s_{t+1}) − V(s_t))
  - On-policy TD SARSA: Q(s_t, a_t) ← Q(s_t, a_t) + α (r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t))
  - Off-policy TD Q-learning: Q(s_t, a_t) ← Q(s_t, a_t) + α (r_{t+1} + γ max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t))

Key Problem to Solve in This Lecture
- In all previous models, we have created a lookup table to maintain a variable V(s) for each state or Q(s,a) for each state-action pair
- What if we have a large MDP, i.e.
  - the state or state-action space is too large, or
  - the state or action space is continuous,
  so that we cannot maintain V(s) for each state or Q(s,a) for each state-action pair?
- For example
  - Game of Go (10^170 states)
  - Helicopter, autonomous car (continuous state space)

- Introduction to Reinforcement Learning
- Model-based Reinforcement Learning
  - Markov Decision Process
  - Planning by Dynamic Programming
- Model-free Reinforcement Learning
  - On-policy SARSA
  - Off-policy Q-learning
