
Advanced Deep Learning with Keras


Deep Reinforcement Learning

In summary, the goal of this chapter is to present:

• The principles of RL

• The Q-Learning reinforcement learning technique

• Advanced topics, including the Deep Q-Network (DQN) and Double Q-Learning (DDQN)

• Instructions on how to implement RL in Python and DRL in Keras
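As a preview of the Q-Learning technique listed above, the following is a minimal sketch of the tabular Q-Learning update rule. The array sizes, state/action indices, and learning-rate values here are illustrative assumptions, not taken from the text:

```python
import numpy as np

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.9):
    """One tabular Q-Learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

# Toy usage: 3 states, 2 actions, all Q-values start at zero.
Q = np.zeros((3, 2))
Q = q_learning_update(Q, state=0, action=1, reward=1.0, next_state=2)
print(Q[0, 1])  # 0.1 = 0.1 * (1.0 + 0.9 * 0 - 0)
```

The chapter develops this update rule in detail, including how DQN replaces the table with a neural network.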

Principles of reinforcement learning (RL)

Figure 9.1.1 shows the perception-action-learning loop that is used to describe RL.

The environment is a soda can sitting on the floor. The agent is a mobile robot whose

goal is to pick up the soda can. It observes the environment around it and tracks the

location of the soda can through an onboard camera. The observation is summarized

in the form of a state, which the robot uses to decide which action to take. The actions

it takes may pertain to low-level control such as the rotation angle/speed of each

wheel, rotation angle/speed of each joint of the arm, and whether the gripper is

open or closed.

Alternatively, the actions may be high-level control moves such as moving the robot

forward/backward, steering at a certain angle, and grabbing/releasing. Any action that

moves the gripper away from the soda receives a negative reward. Any action that

closes the gap between the gripper location and the soda receives a positive reward.

When the robot arm successfully picks up the soda can, it receives a big positive

reward. The goal of RL is to learn the optimal policy that helps the robot to decide

which action to take, given a state, to maximize the accumulated discounted reward:

R_t = r_(t+1) + γ r_(t+2) + γ² r_(t+3) + … = Σ_(k=0) γᵏ r_(t+k+1), where γ ∈ [0, 1] is the discount factor.

Figure 9.1.1: The perception-action-learning loop in reinforcement learning
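The accumulated discounted reward described above can be computed directly from a sequence of per-step rewards. The reward values below are a hypothetical episode of the soda-can example (small penalties while approaching, a large positive reward on grasping), not from the text:

```python
def discounted_return(rewards, gamma=0.9):
    """Accumulated discounted reward: R = sum_k gamma**k * r_k."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Hypothetical episode: two steps that drift slightly, one step that
# closes the gap, then the big positive reward for picking up the can.
rewards = [-0.1, -0.1, 0.5, 10.0]
print(discounted_return(rewards))  # -0.1 - 0.09 + 0.405 + 7.29 = 7.505
```

Because γ < 1, rewards received sooner count for more, which encourages the robot to reach the can quickly.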

