Advanced Deep Learning with Keras
Chapter 8

\log P_\theta(x \mid c) - D_{KL}\left(Q_\phi(z \mid x, c) \,\|\, P_\theta(z \mid x, c)\right) = \mathbb{E}_{z \sim Q}\left[\log P_\theta(x \mid z, c)\right] - D_{KL}\left(Q_\phi(z \mid x, c) \,\|\, P_\theta(z \mid c)\right) (Equation 8.2.1)

Similar to VAEs, Equation 8.2.1 means that if we want to maximize the output conditioned on c, P_\theta(x \mid c), then the two loss terms must be minimized:

• Reconstruction loss of the decoder, given both the latent vector and the condition.
• KL loss between the encoder, given both the latent vector and the condition, and the prior distribution given the condition. Similar to a VAE, we typically choose P_\theta(z \mid c) = P(z \mid c) = N(0, I).

Listing 8.2.1, cvae-cnn-mnist-8.2.1.py, shows the Keras code of the CVAE using CNN layers. The highlighted portions of the code are the changes made to support the CVAE:

# compute the number of labels
num_labels = len(np.unique(y_train))

# network parameters
input_shape = (image_size, image_size, 1)
label_shape = (num_labels, )
batch_size = 128
kernel_size = 3
filters = 16
latent_dim = 2
epochs = 30

# VAE model = encoder + decoder
# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
y_labels = Input(shape=label_shape, name='class_labels')
x = Dense(image_size * image_size)(y_labels)
x = Reshape((image_size, image_size, 1))(x)
x = keras.layers.concatenate([inputs, x])
for i in range(2):
    filters *= 2
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               activation='relu',
               strides=2,
               padding='same')(x)

# shape info needed to build decoder model
shape = K.int_shape(x)

# generate latent vector Q(z|X)
x = Flatten()(x)
x = Dense(16, activation='relu')(x)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)

# use reparameterization trick to push the sampling out as input
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling,
           output_shape=(latent_dim,),
           name='z')([z_mean, z_log_var])

# instantiate encoder model
encoder = Model([inputs, y_labels],
                [z_mean, z_log_var, z],
                name='encoder')
encoder.summary()
plot_model(encoder, to_file='cvae_cnn_encoder.png', show_shapes=True)

# build decoder model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = keras.layers.concatenate([latent_inputs, y_labels])
x = Dense(shape[1] * shape[2] * shape[3], activation='relu')(x)
x = Reshape((shape[1], shape[2], shape[3]))(x)
for i in range(2):
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        activation='relu',
                        strides=2,
                        padding='same')(x)
    filters //= 2

outputs = Conv2DTranspose(filters=1,
                          kernel_size=kernel_size,
                          activation='sigmoid',
                          padding='same',
                          name='decoder_output')(x)

# instantiate decoder model
decoder = Model([latent_inputs, y_labels], outputs, name='decoder')
decoder.summary()
plot_model(decoder, to_file='cvae_cnn_decoder.png', show_shapes=True)
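The encoder in Listing 8.2.1 passes z_mean and z_log_var through a Lambda layer wrapping a sampling function, which is defined earlier in the book's VAE code and implements the reparameterization trick: z = mu + sigma * epsilon, with epsilon drawn from N(0, I) and sigma = exp(0.5 * log_var). Its logic can be sketched in plain NumPy (a hypothetical stand-alone illustration of the math, not the Keras-backend version the listing actually uses):

```python
import numpy as np

def sample_z(z_mean, z_log_var, rng=np.random.default_rng(0)):
    # reparameterization trick: z = mean + stddev * epsilon,
    # where stddev = exp(0.5 * log_var) and epsilon ~ N(0, I);
    # the randomness lives in epsilon, so gradients can flow
    # through z_mean and z_log_var
    epsilon = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * epsilon

# with mean = 0 and log_var = 0 (unit variance), z is pure
# standard-normal noise of the same shape as the inputs
z = sample_z(np.zeros((4, 2)), np.zeros((4, 2)))
```

In the Keras version, the same computation is expressed with backend ops (K.shape, K.random_normal, K.exp) so it runs on symbolic tensors inside the Lambda layer.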
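The conditioning mechanism in Listing 8.2.1 projects the one-hot label through a Dense layer, reshapes the result into an image-sized plane, and concatenates it with the input image as an extra channel. The shape bookkeeping can be illustrated in NumPy (a hypothetical sketch with a random projection standing in for the learned Dense weights, assuming image_size = 28 and 10 labels as in MNIST):

```python
import numpy as np

image_size, num_labels, batch = 28, 10, 4
rng = np.random.default_rng(0)

# one-hot labels, shape (batch, num_labels)
y = np.eye(num_labels)[rng.integers(0, num_labels, batch)]

# Dense(image_size * image_size): label -> flat vector, reshaped
# into a single-channel "label image" of the same spatial size
w = rng.standard_normal((num_labels, image_size * image_size))
label_plane = (y @ w).reshape(batch, image_size, image_size, 1)

# concatenate with the grayscale input along the channel axis,
# mirroring keras.layers.concatenate([inputs, x])
images = rng.standard_normal((batch, image_size, image_size, 1))
x = np.concatenate([images, label_plane], axis=-1)
# x has shape (batch, 28, 28, 2): the condition rides along as a channel
```

Because the label becomes an ordinary channel, the downstream Conv2D layers need no structural changes to become condition-aware.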
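Both loss terms of Equation 8.2.1 have simple concrete forms with the choices above: a per-pixel reconstruction loss (binary cross-entropy for sigmoid outputs), plus the closed-form KL divergence between a diagonal Gaussian posterior and the N(0, I) prior, -0.5 * sum(1 + log_var - mean^2 - var). A NumPy sketch of this computation (hypothetical helper, not the Keras tensor code the training script uses):

```python
import numpy as np

def cvae_loss(x, x_decoded, z_mean, z_log_var):
    eps = 1e-7
    # reconstruction loss: binary cross-entropy summed over pixels,
    # one value per sample
    rec = -np.sum(x * np.log(x_decoded + eps)
                  + (1 - x) * np.log(1 - x_decoded + eps),
                  axis=(1, 2, 3))
    # closed-form KL(Q(z|x,c) || N(0, I)) for a diagonal Gaussian,
    # summed over the latent dimensions
    kl = -0.5 * np.sum(1 + z_log_var - z_mean**2 - np.exp(z_log_var),
                       axis=-1)
    # total loss averaged over the batch
    return np.mean(rec + kl)

# posterior exactly matching the prior (mean = 0, log_var = 0) makes
# the KL term vanish; uniform 0.5 pixels reconstructed perfectly give
# ln(2) of cross-entropy per pixel
x_demo = np.full((2, 4, 4, 1), 0.5)
loss = cvae_loss(x_demo, x_demo, np.zeros((2, 2)), np.zeros((2, 2)))
```

Minimizing this sum is exactly the maximization of the right-hand side of Equation 8.2.1: the reconstruction term is the (negative) expected log-likelihood and the KL term regularizes the conditional posterior toward the prior.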