Advanced Deep Learning with Keras

Cross-Domain GANs

Similar to other GANs, the ultimate objective of CycleGAN is for the generator G to learn how to synthesize fake target data, y', that can fool the discriminator, D_y, in the forward cycle. Since the network is symmetric, CycleGAN also wants the generator F to learn how to synthesize fake source data, x', that can fool the discriminator, D_x, in the backward cycle. Inspired by the better perceptual quality of Least Squares GAN (LSGAN) [5], as described in Chapter 5, Improved GANs, CycleGAN also uses MSE for the discriminator and generator losses. Recall that LSGAN differs from the original GAN in its use of an MSE loss instead of a binary cross-entropy loss. CycleGAN expresses the generator-discriminator loss functions as:

\mathcal{L}^{(D)}_{forward-GAN} = \mathbb{E}_{y \sim p_{data}(y)} \left( D_y(y) - 1 \right)^2 + \mathbb{E}_{x \sim p_{data}(x)} \left( D_y(G(x)) \right)^2 (Equation 7.1.7)

\mathcal{L}^{(G)}_{forward-GAN} = \mathbb{E}_{x \sim p_{data}(x)} \left( D_y(G(x)) - 1 \right)^2 (Equation 7.1.8)

\mathcal{L}^{(D)}_{backward-GAN} = \mathbb{E}_{x \sim p_{data}(x)} \left( D_x(x) - 1 \right)^2 + \mathbb{E}_{y \sim p_{data}(y)} \left( D_x(F(y)) \right)^2 (Equation 7.1.9)

\mathcal{L}^{(G)}_{backward-GAN} = \mathbb{E}_{y \sim p_{data}(y)} \left( D_x(F(y)) - 1 \right)^2 (Equation 7.1.10)

\mathcal{L}^{(D)}_{GAN} = \mathcal{L}^{(D)}_{forward-GAN} + \mathcal{L}^{(D)}_{backward-GAN} (Equation 7.1.11)

\mathcal{L}^{(G)}_{GAN} = \mathcal{L}^{(G)}_{forward-GAN} + \mathcal{L}^{(G)}_{backward-GAN} (Equation 7.1.12)
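As a quick sanity check on the MSE-based adversarial losses, the sketch below evaluates the forward-cycle discriminator and generator losses on toy discriminator outputs with NumPy. The helper names and sample values are illustrative assumptions, not code from the book:

```python
import numpy as np

def mse_disc_loss(d_real, d_fake):
    # Discriminator loss: push scores on real samples toward 1 and
    # scores on fake samples toward 0 (Equations 7.1.7 and 7.1.9).
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def mse_gen_loss(d_fake):
    # Generator loss: push the discriminator's score on fake samples
    # toward 1 (Equations 7.1.8 and 7.1.10).
    return np.mean((d_fake - 1.0) ** 2)

# Hypothetical D_y outputs in [0, 1] for a batch of 4 samples.
dy_real = np.array([0.9, 0.8, 0.95, 0.7])   # D_y(y)
dy_fake = np.array([0.2, 0.1, 0.3, 0.25])   # D_y(G(x))

forward_d = mse_disc_loss(dy_real, dy_fake)  # Equation 7.1.7
forward_g = mse_gen_loss(dy_fake)            # Equation 7.1.8
```

The backward-cycle losses (Equations 7.1.9 and 7.1.10) reuse the same two helpers with the roles of the domains swapped, which is why only two functions are needed.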

The total loss of CycleGAN is shown as:

\mathcal{L} = \lambda_1 \mathcal{L}_{GAN} + \lambda_2 \mathcal{L}_{cyc} (Equation 7.1.13)

CycleGAN recommends the following weight values: λ1 = 1.0 and λ2 = 10.0, to give more importance to the cyclic consistency check.
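To make Equation 7.1.13 concrete, the snippet below combines the two loss terms with the recommended weights; the per-batch loss values are hypothetical placeholders, since in practice they come from the model's forward passes:

```python
# Illustrative loss values only; in a real run these are computed from
# the discriminator and generator outputs for the current batch.
loss_gan = 0.42   # adversarial term, L_GAN (Equations 7.1.11/7.1.12)
loss_cyc = 0.10   # cycle-consistency term, L_cyc

# Weights recommended by CycleGAN: the cycle term dominates so that
# reconstructions such as F(G(x)) stay faithful to the original x.
lam1, lam2 = 1.0, 10.0
total_loss = lam1 * loss_gan + lam2 * loss_cyc   # Equation 7.1.13
```

In a Keras implementation this weighting is typically expressed through the `loss_weights` argument of `model.compile()` on the combined adversarial model, rather than by summing scalars manually.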

The training strategy is similar to the vanilla GAN. Algorithm 7.1.1 summarizes the CycleGAN training procedure.


