
Chapter 11

DQN to play a game of Atari

In the preceding section we used a DQN to learn to balance the CartPole. That was a simple problem, so we could solve it with a perceptron model. But imagine if the environment state were just the CartPole rendered visually, as we humans see it. With raw pixel values as the input state space, our previous DQN would not work. What we need is a convolutional neural network. Next, we build one based on the seminal DQN paper, Playing Atari with Deep Reinforcement Learning.

Most of the code will be similar to the DQN for CartPole, but there will be significant changes in the DQN network itself and in how we preprocess the state that we obtain from the environment.
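
To give a sense of the shape of such a network before we build it, here is a minimal sketch of a convolutional Q-network in tf.keras, roughly following the layer sizes reported in the 2013 paper (two convolutional layers followed by a fully connected layer). The function name and default parameters here are illustrative assumptions, not the exact code we develop next:

import tensorflow as tf

def build_conv_q_network(im_size=84, n_frames=4, n_actions=4):
    # Two convolutional layers and one hidden dense layer, loosely
    # following the sizes in the 2013 DQN paper; the output layer
    # produces one Q-value per action (linear activation).
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 8, strides=4, activation='relu',
                               input_shape=(im_size, im_size, n_frames)),
        tf.keras.layers.Conv2D(32, 4, strides=2, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dense(n_actions)  # Q-value per action
    ])
    return model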

First, let us see the change in the way the state space is processed. In the following screenshot you can see one of the Atari games, Breakout:

Figure 4: A screenshot of the Atari game, Breakout

Now, if you look at the image, not all of it contains relevant information: the top part has redundant information about the score, the bottom part has unnecessary blank space, and the image is colored. To reduce the burden on our model, it is best to remove the unnecessary information, so we crop the image, convert it to grayscale, and make it a square of size 84 × 84 (as in the paper). Here is the code to preprocess the input raw pixels:

def preprocess_state(self, img):
    img_temp = img[31:195]  # Choose the important area of the image
    img_temp = tf.image.rgb_to_grayscale(img_temp)
    img_temp = tf.image.resize(img_temp, [self.IM_SIZE, self.IM_SIZE],
                               method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    img_temp = tf.cast(img_temp, tf.float32)
    return img_temp[:, :, 0]  # Drop the channel dimension: (84, 84)
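
As a quick sanity check on the shapes involved, the same steps can be traced on a stand-in frame. This is only an illustrative sketch: the random frame and the standalone IM_SIZE constant are assumptions, not part of the agent class we build here.

import numpy as np
import tensorflow as tf

IM_SIZE = 84
# Stand-in for a raw Atari frame: 210 x 160 RGB, 8-bit pixels
raw_frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)

cropped = raw_frame[31:195]                    # (164, 160, 3): playing area only
gray = tf.image.rgb_to_grayscale(cropped)      # (164, 160, 1)
resized = tf.image.resize(gray, [IM_SIZE, IM_SIZE],
                          method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
state = tf.cast(resized, tf.float32)[:, :, 0]  # (84, 84) grayscale state
print(raw_frame.shape, '->', state.shape)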

