
Generative Adversarial Networks

Now, using the generator and discriminator defined previously, we construct the CycleGAN:

discriminator_A = Discriminator()
discriminator_B = Discriminator()
generator_AB = Generator()
generator_BA = Generator()

We next define the loss and optimizers:

loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def discriminator_loss(disc_real_output, disc_generated_output):
    real_loss = loss_object(tf.ones_like(disc_real_output),
                            disc_real_output)
    generated_loss = loss_object(tf.zeros_like(disc_generated_output),
                                 disc_generated_output)
    total_disc_loss = real_loss + generated_loss
    return total_disc_loss

optimizer = tf.keras.optimizers.Adam(1e-4, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4, beta_1=0.5)
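To see what discriminator_loss rewards, the following standalone NumPy sketch re-implements the same from-logits binary cross-entropy and loss sum outside of TensorFlow (bce_from_logits and discriminator_loss_np are illustrative names, not part of the book's code):

```python
import numpy as np

def bce_from_logits(labels, logits):
    # Numerically stable binary cross-entropy on raw logits, matching
    # tf.keras.losses.BinaryCrossentropy(from_logits=True).
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

def discriminator_loss_np(disc_real_output, disc_generated_output):
    # Real samples are pushed toward label 1, generated ones toward 0.
    real_loss = bce_from_logits(np.ones_like(disc_real_output),
                                disc_real_output)
    generated_loss = bce_from_logits(np.zeros_like(disc_generated_output),
                                     disc_generated_output)
    return real_loss + generated_loss

# A confident discriminator (large positive logits on real images, large
# negative logits on fakes) incurs near-zero loss; a fooled one pays heavily.
good = discriminator_loss_np(np.full((2, 16, 16, 1), 5.0),
                             np.full((2, 16, 16, 1), -5.0))
bad = discriminator_loss_np(np.full((2, 16, 16, 1), -5.0),
                            np.full((2, 16, 16, 1), 5.0))
```

Minimizing this sum therefore trains the discriminator to separate the two distributions, which is exactly the adversarial signal the generators will later try to defeat.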

We create placeholders for the labels of real and fake images:

valid = np.ones((BATCH_SIZE, 16, 16, 1)).astype('float32')
fake = np.zeros((BATCH_SIZE, 16, 16, 1)).astype('float32')
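The 16×16×1 label shape mirrors the discriminator's patch-level output (a PatchGAN-style design): instead of one real/fake score per image, each of the 256 spatial entries judges one region of the input. A quick shape check, assuming BATCH_SIZE = 1 purely for illustration:

```python
import numpy as np

BATCH_SIZE = 1  # assumed value, for illustration only

valid = np.ones((BATCH_SIZE, 16, 16, 1)).astype('float32')
fake = np.zeros((BATCH_SIZE, 16, 16, 1)).astype('float32')

# One real/fake target per discriminator output patch: 16 * 16 = 256 of them.
patches_per_image = valid[0].size
```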

Now we define the function that trains the generators and discriminators on a batch, one pair of images at a time. Both discriminators and both generators are trained in this function with the help of tf.GradientTape:

@tf.function
def train_batch(imgs_A, imgs_B):
    with tf.GradientTape() as g, tf.GradientTape() as d_tape:
        fake_B = generator_AB(imgs_A, training=True)
        fake_A = generator_BA(imgs_B, training=True)

        logits_real_A = discriminator_A(imgs_A, training=True)
        logits_fake_A = discriminator_A(fake_A, training=True)
        dA_loss = discriminator_loss(logits_real_A, logits_fake_A)

        logits_real_B = discriminator_B(imgs_B, training=True)
        logits_fake_B = discriminator_B(fake_B, training=True)
        dB_loss = discriminator_loss(logits_real_B, logits_fake_B)
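The definition of train_batch continues past this point. Beyond the adversarial losses computed above, a CycleGAN also penalizes cycle consistency: translating A to B and back again, generator_BA(generator_AB(imgs_A)), should reconstruct imgs_A. A toy NumPy illustration of that L1 cycle loss (the real generators are neural networks; these invertible stand-in functions exist only to make the idea concrete):

```python
import numpy as np

# Stand-in "generators": exact inverses of each other, purely illustrative.
gen_AB = lambda x: 1.0 - x   # pretend translation from domain A to domain B
gen_BA = lambda x: 1.0 - x   # pretend translation from domain B back to A

imgs_A = np.random.rand(2, 16, 16, 1).astype('float32')

# L1 cycle-consistency loss: A -> B -> A should land back on imgs_A.
reconstructed_A = gen_BA(gen_AB(imgs_A))
cycle_loss = np.mean(np.abs(reconstructed_A - imgs_A))
```

Because the stand-in mappings invert each other exactly, the cycle loss here is essentially zero; in training, this term is what forces the two generators to remain (approximately) inverses of one another.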
