Advanced Deep Learning with Keras

Disentangled Representation GANs

StackedGAN starts with an Encoder. It could be a trained classifier that predicts the correct labels. The intermediate feature vector, f1r, is made available for GAN training. For MNIST, we can use a CNN-based classifier similar to the one we discussed in Chapter 1, Introducing Advanced Deep Learning with Keras. The following figure shows the Encoder and its network model implementation in Keras:

Figure 6.2.3: The encoder in StackedGAN is a simple CNN-based classifier

Listing 6.2.1 shows the Keras code for the preceding figure. It is similar to the CNN-based classifier in Chapter 1, Introducing Advanced Deep Learning with Keras, except that we use a Dense layer to extract the 256-dim feature. There are two output models, Encoder0 and Encoder1. Both will be used to train the StackedGAN.

The Encoder0 output, f1r, is the 256-dim feature vector that we want Generator1 to learn to synthesize. It is available as an auxiliary output of Encoder0, E0. The overall Encoder is trained to classify MNIST digits, xr. The correct labels, yr, are predicted by Encoder1, E1. In the process, the intermediate set of features, f1r, is learned and made available for Generator0 training. The subscript r is used to emphasize and distinguish real data from fake data when the GAN is trained against this encoder.
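The overall classifier training just described can be sketched as follows. This is a minimal illustration, not the book's exact code: the stand-in sub networks, layer sizes, and compile settings here are assumptions chosen only to show how Encoder1 is stacked on Encoder0 so that training the overall Encoder on (xr, yr) also learns the intermediate feature f1r:

```python
from tensorflow.keras.layers import Input, Dense, Flatten
from tensorflow.keras.models import Model

# Stand-in sub networks (illustrative; the real ones come from Listing 6.2.1)
image = Input(shape=(28, 28, 1))
enc0 = Model(image,
             Dense(256, activation='relu')(Flatten()(image)),
             name="encoder0")
feature1 = Input(shape=(256,))
enc1 = Model(feature1,
             Dense(10, activation='softmax')(feature1),
             name="encoder1")

# Overall Encoder: xr -> f1r -> yr. Training it to classify MNIST digits
# is what makes the intermediate feature f1r = enc0(xr) meaningful.
encoder = Model(image, enc1(enc0(image)), name="encoder")
encoder.compile(loss='categorical_crossentropy',
                optimizer='adam',
                metrics=['accuracy'])
# encoder.fit(x_train, y_train, ...) would then train enc0 and enc1 jointly
```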

Listing 6.2.1, stackedgan-mnist-6.2.1.py shows the encoder implementation in Keras:

def build_encoder(inputs, num_labels=10, feature1_dim=256):
    """Build the Classifier (Encoder) Model sub networks.

    Two sub networks:
    1) Encoder0: Image to feature1 (intermediate latent feature)
    2) Encoder1: feature1 to labels
    """

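The listing above is cut off at the page break. A minimal sketch of how the two sub networks might be completed follows, assuming the Keras functional API; the Conv2D filter count, kernel size, and layer choices here are illustrative assumptions rather than the book's verbatim code:

```python
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Dense, Activation)
from tensorflow.keras.models import Model

def build_encoder(inputs, num_labels=10, feature1_dim=256):
    """Build the Classifier (Encoder) Model sub networks.

    1) Encoder0: image to feature1 (intermediate latent feature)
    2) Encoder1: feature1 to labels
    """
    kernel_size = 3
    filters = 64

    x, feature1 = inputs

    # Encoder0: two Conv2D/MaxPooling2D stages, then Dense to the
    # 256-dim feature1 vector (f1r for real inputs)
    y = Conv2D(filters=filters, kernel_size=kernel_size,
               padding='same', activation='relu')(x)
    y = MaxPooling2D()(y)
    y = Conv2D(filters=filters, kernel_size=kernel_size,
               padding='same', activation='relu')(y)
    y = MaxPooling2D()(y)
    y = Flatten()(y)
    feature1_output = Dense(feature1_dim, activation='relu')(y)
    enc0 = Model(inputs=x, outputs=feature1_output, name="encoder0")

    # Encoder1: feature1 to the num_labels class probabilities
    y = Dense(num_labels)(feature1)
    labels = Activation('softmax')(y)
    enc1 = Model(inputs=feature1, outputs=labels, name="encoder1")

    return enc0, enc1

# Usage: build both sub networks for 28x28x1 MNIST images
image = Input(shape=(28, 28, 1))
feature1 = Input(shape=(256,))
enc0, enc1 = build_encoder([image, feature1])
```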

Hooray! Your file is uploaded and ready to be published.

Saved successfully!

Ooh no, something went wrong!