
The CycleGAN Model

Figure 7.1.3 shows the network model of the CycleGAN. The objective of the CycleGAN is to learn the function:

y' = G(x) (Equation 7.1.1)

which generates fake images, y', in the target domain as a function of the real source image, x. Learning is unsupervised, capitalizing only on the available real images, x, in the source domain and the real images, y, in the target domain.

Unlike regular GANs, CycleGAN imposes the cycle-consistency constraint. The forward cycle-consistency network ensures that the real source data can be reconstructed from the fake target data:

x' = F(G(x)) (Equation 7.1.2)

This is done by minimizing the forward cycle-consistency L1 loss:

$\mathcal{L}_{forward\text{-}cyc} = \mathbb{E}_{x \sim p_{data}(x)}\big[\lVert F(G(x)) - x \rVert_1\big]$ (Equation 7.1.3)
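
As a quick illustration (a sketch of mine, not the book's listing), this loss can be computed directly in TensorFlow; here g and f are assumed to be callables for the two generator networks, G and F:

import tensorflow as tf

def forward_cycle_loss(g, f, x):
    # x: batch of real source-domain images
    # g: generator G (source -> target), f: generator F (target -> source)
    x_reconstructed = f(g(x))  # x' = F(G(x)), Equation 7.1.2
    # L1/MAE between the reconstruction and the real source batch
    return tf.reduce_mean(tf.abs(x_reconstructed - x))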

The network is symmetric. The backward cycle-consistency network also attempts to reconstruct the real target data from the fake source data:

y ' = G(F(y)) (Equation 7.1.4)

This is done by minimizing the backward cycle-consistency L1 loss:

$\mathcal{L}_{backward\text{-}cyc} = \mathbb{E}_{y \sim p_{data}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]$ (Equation 7.1.5)
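
Under the same assumptions as the sketch above, the backward loss simply swaps the roles of the two generators:

def backward_cycle_loss(g, f, y):
    # y: batch of real target-domain images
    y_reconstructed = g(f(y))  # y' = G(F(y)), Equation 7.1.4
    return tf.reduce_mean(tf.abs(y_reconstructed - y))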

The sum of these two losses is known as cycle-consistency loss:

$\mathcal{L}_{cyc} = \mathcal{L}_{forward\text{-}cyc} + \mathcal{L}_{backward\text{-}cyc} = \mathbb{E}_{x \sim p_{data}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{data}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]$ (Equation 7.1.6)

The cycle-consistency loss uses L1 or Mean Absolute Error (MAE) since it generally results in less blurry image reconstruction compared to L2 or Mean Square Error (MSE).
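
To tie these together, here is a minimal sketch (again mine, not the book's code) of Equation 7.1.6 using tf.keras's built-in MAE loss; g and f are the same hypothetical generator models as above:

import tensorflow as tf

mae = tf.keras.losses.MeanAbsoluteError()

def cycle_consistency_loss(g, f, real_x, real_y):
    # Forward cycle: x -> G(x) -> F(G(x)) should recover x
    forward = mae(real_x, f(g(real_x)))
    # Backward cycle: y -> F(y) -> G(F(y)) should recover y
    backward = mae(real_y, g(f(real_y)))
    # Equation 7.1.6: the sum of the forward and backward L1 losses.
    # Swapping in tf.keras.losses.MeanSquaredError() gives the L2
    # variant, which tends to yield blurrier reconstructions.
    return forward + backward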
