
Disentangled Representation GANs

StackedGAN:

$$\mathcal{L}_i^{(D)} = -\mathbb{E}_{f_i \sim p_{\mathrm{data}}} \log \mathcal{D}_i(f_i) - \mathbb{E}_{f_{i+1} \sim p_{\mathrm{data}},\, z_i} \log\bigl(1 - \mathcal{D}_i(\mathcal{G}_i(f_{i+1}, z_i))\bigr) \quad (6.2.1)$$

$$\mathcal{L}_i^{(G)_{adv}} = -\mathbb{E}_{f_{i+1} \sim p_{\mathrm{data}},\, z_i} \log \mathcal{D}_i(\mathcal{G}_i(f_{i+1}, z_i)) \quad (6.2.2)$$

$$\mathcal{L}_i^{(G)_{cond}} = \mathbb{E}_{f_{i+1} \sim p_{\mathrm{data}},\, z_i} \bigl\| \mathcal{E}_i(\mathcal{G}_i(f_{i+1}, z_i)),\, f_{i+1} \bigr\|_2 \quad (6.2.3)$$

$$\mathcal{L}_i^{(G)_{ent}} = \mathbb{E}_{f_{i+1},\, z_i} \bigl\| \mathcal{Q}_i(\mathcal{G}_i(f_{i+1}, z_i)),\, z_i \bigr\|_2 \quad (6.2.4)$$

$$\mathcal{L}_i^{(G)} = \lambda_1 \mathcal{L}_i^{(G)_{adv}} + \lambda_2 \mathcal{L}_i^{(G)_{cond}} + \lambda_3 \mathcal{L}_i^{(G)_{ent}} \quad (6.2.5)$$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are weights and $i$ = Encoder and GAN id

Table 6.2.1: A comparison between the loss functions of GAN and StackedGAN.

$\sim p_{\mathrm{data}}$ means sampling from the corresponding encoder data (input, feature, or output).

Given the Encoder inputs ($x_r$), intermediate features ($f_{1r}$), and labels ($y_r$), each GAN
is trained in the usual discriminator–adversarial manner. The loss functions are
given by Equations 6.2.1 to 6.2.5 in Table 6.2.1. Equations 6.2.1 and 6.2.2 are the usual
loss functions of the generic GAN. StackedGAN has two additional loss functions,
Conditional and Entropy.
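The three generator terms in Equation 6.2.5 map naturally onto a multi-output Keras model whose loss_weights act as the weights $\lambda_1$, $\lambda_2$, and $\lambda_3$. The following is only a minimal sketch, not the book's implementation: the one-layer stand-in models, the feature and noise dimensions, and the loss weight values are illustrative assumptions.

from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import RMSprop

feat_dim, z_dim = 256, 50   # assumed sizes of the features f_i, f_(i+1) and the noise code z_i

# Toy stand-ins for Generator_i, Discriminator_i, Encoder_i, and the Q-network.
f_next_in = Input(shape=(feat_dim,), name="f_i_plus_1")
z_in = Input(shape=(z_dim,), name="z_i")
h = Dense(256, activation="relu")(concatenate([f_next_in, z_in]))
generator_i = Model([f_next_in, z_in], Dense(feat_dim)(h), name="generator_i")

feat_in = Input(shape=(feat_dim,), name="f_i")
discriminator_i = Model(feat_in, Dense(1, activation="sigmoid")(feat_in), name="discriminator_i")
encoder_i = Model(feat_in, Dense(feat_dim)(feat_in), name="encoder_i")
q_net_i = Model(feat_in, Dense(z_dim)(feat_in), name="q_net_i")

# Discriminator side: binary cross-entropy on real vs. fake features (Equation 6.2.1).
discriminator_i.compile(loss="binary_crossentropy", optimizer=RMSprop(learning_rate=2e-4))

# Generator side: one model with three output heads, one per term of Equation 6.2.5.
discriminator_i.trainable = False
fake_f_i = generator_i([f_next_in, z_in])
adversarial_i = Model([f_next_in, z_in],
                      [discriminator_i(fake_f_i),   # adversarial term, Equation 6.2.2
                       encoder_i(fake_f_i),         # conditional term, Equation 6.2.3
                       q_net_i(fake_f_i)],          # entropy term, Equation 6.2.4
                      name="adversarial_i")
adversarial_i.compile(loss=["binary_crossentropy", "mse", "mse"],
                      loss_weights=[1.0, 1.0, 10.0],   # lambda_1, lambda_2, lambda_3 (assumed values)
                      optimizer=RMSprop(learning_rate=2e-4))

During training, this adversarial model is fed ($f_{i+1}$, $z_i$) batches with targets [1, $f_{i+1}$, $z_i$], so the conditional and entropy heads are pushed to reconstruct the generator inputs.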

The conditional loss function, $\mathcal{L}_i^{(G)_{cond}}$ in Equation 6.2.3, ensures that the generator does
not ignore the input, $f_{i+1}$, when synthesizing the output, $f_i$, from the input noise code
$z_i$. The encoder, Encoder_i, must be able to recover the generator input by inverting
the process of the generator, Generator_i. The difference between the generator input
and the input recovered by the encoder is measured by the L2 or Euclidean distance,
that is, the Mean Squared Error (MSE). Figure 6.2.4 shows the network elements involved in
the computation of $\mathcal{L}_i^{(G)_{cond}}$:

[Figure 6.2.4: The network elements involved in the computation of the StackedGAN conditional loss]
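To make the encoder's role in this loss concrete, here is a small hedged sketch (again not the book's code) that evaluates Equation 6.2.3 using the toy generator_i and encoder_i models and the assumed feat_dim and z_dim sizes from the earlier sketch: the encoder output on the synthesized feature is compared against the original generator input $f_{i+1}$ with MSE.

import numpy as np
from tensorflow.keras import losses

def conditional_loss(f_next, z_i, generator_i, encoder_i):
    # f_i = Generator_i(f_(i+1), z_i); Encoder_i tries to invert it back to f_(i+1).
    fake_f_i = generator_i.predict([f_next, z_i], verbose=0)
    f_next_recovered = encoder_i.predict(fake_f_i, verbose=0)
    # MSE between the generator input and its reconstruction (Equation 6.2.3).
    return float(losses.mean_squared_error(f_next, f_next_recovered).numpy().mean())

# Example call with random placeholder batches of the assumed shapes.
f_next = np.random.normal(size=(8, feat_dim)).astype("float32")
z_i = np.random.uniform(-1.0, 1.0, size=(8, z_dim)).astype("float32")
print(conditional_loss(f_next, z_i, generator_i, encoder_i))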

