Advanced Deep Learning with Keras
Where:

$V^{*}(s) = \max_{a} Q(s, a)$    (Equation 9.2.2)

In other words, instead of finding the policy that maximizes the value for all states, Equation 9.2.1 looks for the action that maximizes the quality (Q) value for all states. After finding the Q value function, V* and hence π* are determined by Equation 9.2.2 and Equation 9.1.3, respectively.

If, for every action, the reward and the next state can be observed, we can formulate the following iterative, trial-and-error algorithm to learn the Q value:

$Q(s, a) = r + \gamma \max_{a'} Q(s', a')$    (Equation 9.2.3)

For notational simplicity, s' and a' denote the next state and the next action, respectively. Equation 9.2.3 is known as the Bellman equation, which is the core of the Q-Learning algorithm. Q-Learning attempts to approximate the first-order expansion of the return or value (Equation 9.1.2) as a function of both the current state and action.

Starting with zero knowledge of the dynamics of the environment, the agent tries an action a and observes what happens in the form of a reward, r, and the next state, s'. The term $\max_{a'} Q(s', a')$ chooses the next logical action that will give the maximum Q value for the next state. With all terms in Equation 9.2.3 known, the Q value for the current state-action pair is updated. Performing the update iteratively eventually learns the Q value function.

Q-Learning is an off-policy RL algorithm. It learns to improve the policy without directly sampling experiences from that policy. In other words, the Q values are learned independently of the underlying policy being followed by the agent. Only when the Q value function has converged is the optimal policy determined using Equation 9.2.1.

Before giving an example of how to use Q-Learning, we should note that the agent must continually explore its environment while gradually taking advantage of what it has learned so far. This is one of the central issues in RL: finding the right balance between exploration and exploitation. Generally, at the start of learning, the action is random (exploration). As learning progresses, the agent takes advantage of the Q values (exploitation). For example, at the start, 90% of the actions are random and 10% come from the Q value function; this proportion is gradually decreased at the end of each episode until, eventually, 10% of the actions are random and 90% come from the Q value function.
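Putting Equation 9.2.3 and the exploration-exploitation schedule together, the following is a minimal sketch in Python of a tabular Q-Learning update with epsilon-greedy action selection. The state and action counts, the decay rate, and the function names are illustrative assumptions, not the book's actual implementation.

import numpy as np

# A minimal sketch of tabular Q-Learning (Equation 9.2.3) with epsilon-greedy
# exploration. The state/action counts and decay schedule are assumptions.
n_states, n_actions = 6, 4
gamma = 0.9                       # discount factor
epsilon, epsilon_min = 0.9, 0.1   # start 90% random, decay toward 10%
decay = 0.99

q_table = np.zeros((n_states, n_actions))

def choose_action(state):
    """Explore with probability epsilon, otherwise exploit the Q-Table."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)   # exploration: random action
    return int(np.argmax(q_table[state]))     # exploitation: best known action

def update_q(state, action, reward, next_state):
    """Equation 9.2.3: Q(s, a) = r + gamma * max over a' of Q(s', a')."""
    q_table[state, action] = reward + gamma * np.max(q_table[next_state])

# after each episode, shift gradually from exploration to exploitation
epsilon = max(epsilon_min, epsilon * decay)

In a full training loop, choose_action() would pick the move, the environment would return the reward and next state, and update_q() would then refresh the Q-Table entry for that state-action pair.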
Q-Learning example
To illustrate the Q-Learning algorithm, consider the simple deterministic environment shown in the following figure. The environment has six states, and the rewards for the allowed transitions are shown. The reward is non-zero in only two cases: a transition into the Goal (G) state earns a +100 reward, while moving into the Hole (H) state incurs a -100 reward. These two states are terminal and mark the end of an episode that starts from the Start state:
Figure 9.3.1: Rewards in a simple deterministic world
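As a rough sketch of how such a sparse reward structure could be encoded, the non-zero transition rewards can be kept in a small lookup table. The specific (row, column) positions and action names below are assumptions made for illustration; the actual layout is defined by the figure.

# A rough sketch of how the rewards in Figure 9.3.1 could be encoded.
# The (row, col) positions of the Goal and Hole states and the action names
# below are illustrative assumptions; the exact layout is given by the figure.
GOAL_REWARD = 100.0
HOLE_REWARD = -100.0

# non-zero rewards for allowed transitions, keyed by ((row, col), action)
transition_rewards = {
    ((0, 1), "right"): GOAL_REWARD,   # assumed: this move enters the Goal state
    ((1, 1), "right"): HOLE_REWARD,   # assumed: this move enters the Hole state
}

def reward_of(state, action):
    """Every other allowed transition yields zero reward."""
    return transition_rewards.get((state, action), 0.0)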
To formalize the identity of each state, we use a (row, column) identifier, as shown in the following figure. Since the agent has not yet learned anything about its environment, the Q-Table, also shown in the following figure, is initialized to zero. In this example, the discount factor is γ = 0.9. Recall that in the estimate of the current Q value, the discount factor determines the weight of future Q values as a function of the number of steps, k, through the factor γ^k. In Equation 9.2.3, we only consider the immediate future Q value, that is, k = 1:
Figure 9.3.2: States in the simple deterministic environment and the agent's initial Q-Table
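To make the update concrete, here is a minimal sketch that initializes the zero-valued Q-Table of Figure 9.3.2 and applies Equation 9.2.3 once with γ = 0.9. The particular state and action indices are hypothetical and chosen only to illustrate a transition into the Goal state.

import numpy as np

# Minimal sketch: the agent's initial Q-Table (Figure 9.3.2) is all zeros.
# Six states, four actions; the indexing below is an illustrative assumption,
# not the book's exact convention.
n_states, n_actions = 6, 4
gamma = 0.9
q_table = np.zeros((n_states, n_actions))

# Suppose the agent takes the action that moves it into the Goal state and
# receives a reward of +100 (hypothetical state/action indices).
s, a, r, s_next = 1, 2, 100.0, 2

# Equation 9.2.3 with k = 1: only the immediate future Q value is considered.
q_table[s, a] = r + gamma * np.max(q_table[s_next])
print(q_table[s, a])   # 100.0, since all Q values of the next state are still zero

Because every entry in the next state's row is still zero, the updated entry becomes 100 + 0.9 × 0 = 100, which is how the Q-Table starts to propagate the Goal reward backward through the state space.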