Advanced Deep Learning with Keras

Where:

$V^*(s) = \max_{a} Q(s, a)$  (Equation 9.2.2)

In other words, instead of finding the policy that maximizes the value for all states, Equation 9.2.1 looks for the action that maximizes the quality (Q) value for all states. After finding the Q value function, $V^*$ and hence $\pi^*$ are determined by Equations 9.2.2 and 9.1.3 respectively.

If, for every action, the reward and the next state can be observed, we can formulate the following iterative, or trial-and-error, algorithm to learn the Q value:

$Q(s, a) = r + \gamma \max_{a'} Q(s', a')$  (Equation 9.2.3)

For notational simplicity, $s'$ and $a'$ are the next state and the next action respectively. Equation 9.2.3 is known as the Bellman Equation, which is the core of the Q-Learning algorithm. Q-Learning attempts to approximate the first-order expansion of the return or value (Equation 9.1.2) as a function of both the current state and action.

From zero knowledge of the dynamics of the environment, the agent tries an action $a$ and observes what happens in the form of a reward, $r$, and the next state, $s'$. The term $\max_{a'} Q(s', a')$ chooses the next logical action that will give the maximum Q value for the next state. With all terms in Equation 9.2.3 known, the Q value for the current state-action pair is updated. Doing the update iteratively will eventually learn the Q value function.

Q-Learning is an off-policy RL algorithm. It learns to improve the policy by not directly sampling experiences from that policy. In other words, the Q values are learned independently of the underlying policy being used by the agent. Only when the Q value function has converged is the optimal policy determined using Equation 9.2.1.

Before giving an example of how to use Q-Learning, we should note that the agent must continually explore its environment while gradually taking advantage of what it has learned so far. This is one of the issues in RL: finding the right balance between Exploration and Exploitation. Generally, at the start of learning, the actions are random (exploration). As the learning progresses, the agent takes advantage of the Q value (exploitation). For example, at the start, 90% of the actions are random and 10% come from the Q value function. At the end of each episode, this ratio is gradually decreased until, eventually, the actions are 10% random and 90% from the Q value function.
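As a minimal sketch of how Equation 9.2.3 and this exploration-exploitation schedule fit together, the snippet below keeps a tabular Q value function and decays the exploration rate. The number of actions, the decay constants, and the helper names are illustrative assumptions, not the book's example code.

```python
import numpy as np
from collections import defaultdict

NUM_ACTIONS = 4                                   # assumed action count
Q = defaultdict(lambda: np.zeros(NUM_ACTIONS))    # tabular Q(s, a), zero-initialized

def choose_action(state, epsilon):
    """Epsilon-greedy: random action (exploration) with probability epsilon,
    otherwise the action with the highest Q value (exploitation)."""
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_ACTIONS)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state, gamma=0.9):
    """Equation 9.2.3: Q(s, a) = r + gamma * max_a' Q(s', a')."""
    Q[state][action] = reward + gamma * np.max(Q[next_state])

# Exploration decays from roughly 90% random actions to 10% (assumed constants).
epsilon, epsilon_min, decay = 0.9, 0.1, 0.99
```

A training loop would call `choose_action`, apply the chosen action to the environment, observe `reward` and `next_state`, call `q_update`, and then shrink `epsilon` by the `decay` factor at the end of each episode, never letting it fall below `epsilon_min`.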


Q-Learning example

To illustrate the Q-Learning algorithm, we need to consider a simple deterministic environment, as shown in the following figure. The environment has six states. The rewards for the allowed transitions are shown. The reward is non-zero in two cases: transitioning to the Goal (G) state has a +100 reward, while moving into the Hole (H) state has a -100 reward. These two states are terminal states and constitute the end of one episode from the Start state:

Figure 9.3.1: Rewards in a simple deterministic world

To formalize the identity of each state, we need to use a (row, column) identifier, as shown in the following figure. Since the agent has not learned anything yet about its environment, the Q-Table, also shown in the following figure, has zero initial values. In this example, the discount factor is $\gamma = 0.9$. Recall that in the estimate of the current Q value, the discount factor determines the weight of future Q values as a function of the number of steps, $\gamma^k$. In Equation 9.2.3, we only consider the immediate future Q value, $k = 1$:

Figure 9.3.2: States in the simple deterministic environment and the agent's initial Q-Table
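To make the zero-initialized Q-Table and the $\gamma = 0.9$ update concrete, here is a small sketch. The 2 x 3 grid shape, the action encoding, and the sample transition into the Goal state are assumptions made for illustration; they are not taken verbatim from Figures 9.3.1 and 9.3.2.

```python
import numpy as np

N_ROWS, N_COLS, N_ACTIONS = 2, 3, 4   # assumed six-state grid and four moves
gamma = 0.9                           # discount factor used in this example

# Zero-initialized Q-Table: one vector of Q values per (row, column) state.
q_table = np.zeros((N_ROWS, N_COLS, N_ACTIONS))

# Hypothetical transition: from state (0, 1) the agent takes action index 2
# (say, "move right"), lands in the Goal state (0, 2), and receives +100.
state, action, reward, next_state = (0, 1), 2, 100.0, (0, 2)

# Equation 9.2.3 with k = 1: only the immediate next state contributes, and
# all next-state Q values are still zero, so the update stores 100.
q_table[state][action] = reward + gamma * np.max(q_table[next_state])

print(q_table[0, 1])   # [  0.   0. 100.   0.]
```

Repeating this update for every observed transition gradually fills the table: the entries leading directly into the Goal and Hole states are learned first, and earlier states pick up discounted values (for example, 0.9 x 100 = 90) on later visits.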
