Advanced Deep Learning with Keras

Chapter 8

# instantiate vae model
# (encoder, decoder, inputs, y_labels, z_mean, z_log_var, image_size,
#  x_test and y_test are defined earlier in this listing)
outputs = decoder([encoder([inputs, y_labels])[2], y_labels])
cvae = Model([inputs, y_labels], outputs, name='cvae')

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    help_ = "Load h5 model trained weights"
    parser.add_argument("-w", "--weights", help=help_)
    help_ = "Use mse loss instead of binary cross entropy (default)"
    parser.add_argument("-m", "--mse", help=help_, action='store_true')
    help_ = "Specify a specific digit to generate"
    parser.add_argument("-d", "--digit", type=int, help=help_)
    help_ = "Beta in Beta-CVAE. Beta > 1. Default is 1.0 (CVAE)"
    parser.add_argument("-b", "--beta", type=float, help=help_)
    args = parser.parse_args()

    models = (encoder, decoder)
    data = (x_test, y_test)

    if args.beta is None or args.beta < 1.0:
        beta = 1.0
        print("CVAE")
        model_name = "cvae_cnn_mnist"
    else:
        beta = args.beta
        print("Beta-CVAE with beta=", beta)
        model_name = "beta-cvae_cnn_mnist"

    # VAE loss = mse_loss or xent_loss + kl_loss
    if args.mse:
        reconstruction_loss = mse(K.flatten(inputs),
                                  K.flatten(outputs))
    else:
        reconstruction_loss = binary_crossentropy(K.flatten(inputs),
                                                  K.flatten(outputs))

    reconstruction_loss *= image_size * image_size
    kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
    kl_loss = K.sum(kl_loss, axis=-1)
    kl_loss *= -0.5 * beta
    cvae_loss = K.mean(reconstruction_loss + kl_loss)
    cvae.add_loss(cvae_loss)
    cvae.compile(optimizer='rmsprop')
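For reference, the loss assembled above is the beta-CVAE objective: the pixel-wise reconstruction term plus the Kullback-Leibler divergence between the approximate posterior and a standard normal prior, weighted by beta. With z_mean = \mu and z_log_var = \log\sigma^2, the three kl_loss lines implement the closed form

\mathcal{L} = \mathcal{L}_{rec} + \beta \, D_{KL}\big(q(z \mid x, y)\,\|\,\mathcal{N}(0, I)\big), \qquad D_{KL} = -\frac{1}{2} \sum_{j} \big(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\big)

Setting beta = 1 recovers the plain CVAE loss, while beta > 1 trades reconstruction quality for a more disentangled latent code.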

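Once trained, the conditional decoder can generate a chosen digit by sampling the latent prior and supplying the one-hot label. The snippet below is a minimal sketch of that step, not part of the book's listing; it assumes a two-dimensional latent space (latent_dim = 2) and reuses the decoder and image_size objects defined earlier.

import numpy as np

# minimal sampling sketch (latent_dim must match the encoder's latent size)
latent_dim = 2
digit = 7                                            # digit class to generate
z_sample = np.random.normal(size=(1, latent_dim))    # draw z from the N(0, I) prior
y_onehot = np.zeros((1, 10))
y_onehot[0, digit] = 1.0                             # one-hot condition label
x_decoded = decoder.predict([z_sample, y_onehot])
digit_image = x_decoded[0].reshape(image_size, image_size)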
