Advanced Deep Learning with Keras

Chapter 5

This prevents the generator from having enough motivation to improve the quality of the generated fake data. Fake samples that are far from the decision boundary no longer attempt to move closer to the true samples' distribution. With the least squares loss function, the gradients do not vanish as long as the fake samples' distribution is far from the real samples' distribution. The generator will strive to improve its estimate of the real density distribution even if the fake samples are already on the correct side of the decision boundary:
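This vanishing-gradient argument can be checked numerically. The sketch below (not from the book; the logit value and the linear-output assumption for LSGAN are illustrative) compares the generator's gradient under the sigmoid cross-entropy loss and under the least squares loss, for a fake sample that the discriminator already scores confidently as real:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Discriminator logit for a fake sample already well on the
# "real" side of the decision boundary (a >> 0).
a = 6.0

# Standard GAN generator loss -log D(G(z)) with a sigmoid output:
# its gradient w.r.t. the logit is -(1 - sigmoid(a)), which vanishes
# as the sigmoid saturates.
grad_gan = -(1.0 - sigmoid(a))

# LSGAN generator loss (D(G(z)) - 1)^2 with a linear output D = a:
# its gradient is 2 * (a - 1), which grows with the distance from
# the target value of 1 instead of vanishing.
grad_lsgan = 2.0 * (a - 1.0)
```

Here the sigmoid-based gradient is already negligible, while the least squares gradient still pushes the sample toward the real distribution.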

Figure 5.2.1: Both the real and fake sample distributions divided by their respective decision boundaries: sigmoid and least squares

Network  Loss functions                                                   Equation
GAN      L(D) = -E_{x~p_data} log D(x) - E_z log(1 - D(G(z)))             4.1.1
         L(G) = -E_z log D(G(z))                                          4.1.5
LSGAN    L(D) = E_{x~p_data} (D(x) - 1)^2 + E_z (D(G(z)))^2               5.2.1
         L(G) = E_z (D(G(z)) - 1)^2                                       5.2.2

Table 5.2.1: A comparison between the loss functions of GAN and LSGAN
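The LSGAN losses amount to a mean squared error against the target labels 1.0 (real) and 0.0 (fake). A minimal NumPy sketch of Equations 5.2.1 and 5.2.2 (function names are illustrative, not the book's implementation):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Eq. 5.2.1: E[(D(x) - 1)^2] + E[D(G(z))^2]
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    # Eq. 5.2.2: E[(D(G(z)) - 1)^2]
    return np.mean((d_fake - 1.0) ** 2)

# A perfect discriminator (real -> 1.0, fake -> 0.0) incurs zero
# discriminator loss, while the generator loss on the same fake
# scores is at its maximum.
d_real = np.array([1.0, 1.0])
d_fake = np.array([0.0, 0.0])
```

In Keras, the same effect is obtained by compiling both adversarial models with the built-in `mse` loss instead of `binary_crossentropy`.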

