Chapter 6

The images need to be normalized before we train the network. For better
performance, you can even add jitter to the images:

def normalize(input_image, label):
    input_image = tf.cast(input_image, tf.float32)
    input_image = (input_image / 127.5) - 1
    return input_image

The preceding function, when applied to the images, normalizes them to the
range [-1, 1]. Let us apply it to our train and test datasets and create data
generators that will provide images for training in batches:

train_A = train_A.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
train_B = train_B.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_A = test_A.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_B = test_B.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

In the preceding code, the num_parallel_calls argument lets the input pipeline
take advantage of multiple CPU cores; you should set its value to the number
of CPU cores in your system. If you are not sure, use AUTOTUNE =
tf.data.experimental.AUTOTUNE so that TensorFlow determines the right number
for you dynamically.

Before moving ahead with the model definition, let us look at the images. Each
image is processed before plotting so that its intensities are rescaled from
[-1, 1] back to the [0, 1] range expected for display:

inpA = next(iter(train_A))
inpB = next(iter(train_B))
plt.subplot(121)
plt.title("Train Set A")
plt.imshow(inpA[0]*0.5 + 0.5)
plt.subplot(122)
plt.title("Train Set B")
plt.imshow(inpB[0]*0.5 + 0.5)
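The jitter mentioned above is not shown at this point in the chapter. A
minimal sketch of the standard CycleGAN-style random jitter (upscale, random
crop back to the original size, random horizontal flip) might look as follows;
the 286x286 intermediate size and 256x256 output size are assumptions based on
the usual CycleGAN setup, not values taken from this chapter:

```python
import tensorflow as tf

IMG_HEIGHT, IMG_WIDTH = 256, 256  # assumed image size

def random_jitter(image):
    # Upscale to 286x286 so that a random crop has room to move
    image = tf.image.resize(image, [286, 286],
                            method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    # Randomly crop back to the original 256x256 size
    image = tf.image.random_crop(image, size=[IMG_HEIGHT, IMG_WIDTH, 3])
    # Random horizontal flip for additional augmentation
    image = tf.image.random_flip_left_right(image)
    return image
```

Applied before normalize() in the training pipeline (but not at test time),
this gives the network slightly different views of each image on every epoch.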

Generative Adversarial Networks

To construct the generator and discriminator we will require three
sub-modules: an upsampling layer, which takes in an image and performs a
transposed convolution operation; a downsampling layer, which performs the
conventional convolution operation; and a residual layer, so that we can have
a sufficiently deep model. These layers are defined in the functions
downsample() and upsample(), and in ResnetIdentityBlock, a class based on the
TensorFlow Keras Model API. You can see the finer implementation details of
these functions in the GitHub repo notebook CycleGAN_TF2.ipynb.

Let us now build our generator:

def Generator():
    down_stack = [
        downsample(64, 4, apply_batchnorm=False),
        downsample(128, 4),
        downsample(256, 4),
        downsample(512, 4)
    ]

    up_stack = [
        upsample(256, 4),
        upsample(128, 4),
        upsample(64, 4),
    ]
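The downsample() and upsample() helpers themselves are only in the notebook. A
plausible sketch consistent with the calls above (filters, kernel size, and an
optional apply_batchnorm flag), modeled on the common pix2pix-style building
blocks rather than on the notebook's exact code, could be:

```python
import tensorflow as tf

def downsample(filters, size, apply_batchnorm=True):
    # Strided Conv2D halves the spatial resolution
    initializer = tf.random_normal_initializer(0., 0.02)
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2D(
        filters, size, strides=2, padding='same',
        kernel_initializer=initializer, use_bias=False))
    if apply_batchnorm:
        block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.LeakyReLU())
    return block

def upsample(filters, size, apply_dropout=False):
    # Strided Conv2DTranspose doubles the spatial resolution
    initializer = tf.random_normal_initializer(0., 0.02)
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2DTranspose(
        filters, size, strides=2, padding='same',
        kernel_initializer=initializer, use_bias=False))
    block.add(tf.keras.layers.BatchNormalization())
    if apply_dropout:
        block.add(tf.keras.layers.Dropout(0.5))
    block.add(tf.keras.layers.ReLU())
    return block
```

With these definitions, a 256x256x3 input passed through downsample(64, 4)
comes out as 128x128x64, and upsample() reverses the spatial halving.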

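The residual sub-module, ResnetIdentityBlock, is likewise only in the
notebook. A sketch modeled on the standard Keras-subclassing ResNet identity
block from the TensorFlow documentation (the notebook's exact version may
differ) would be:

```python
import tensorflow as tf

class ResnetIdentityBlock(tf.keras.Model):
    # Residual block: three convolutions plus a skip connection, which
    # lets gradients flow through deep stacks of layers.
    def __init__(self, kernel_size, filters):
        super().__init__(name='')
        filters1, filters2, filters3 = filters
        self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
        self.bn2a = tf.keras.layers.BatchNormalization()
        self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size,
                                             padding='same')
        self.bn2b = tf.keras.layers.BatchNormalization()
        self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
        self.bn2c = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = self.conv2a(input_tensor)
        x = self.bn2a(x, training=training)
        x = tf.nn.relu(x)
        x = self.conv2b(x)
        x = self.bn2b(x, training=training)
        x = tf.nn.relu(x)
        x = self.conv2c(x)
        x = self.bn2c(x, training=training)
        # Skip connection: requires filters3 to match the input channels
        x += input_tensor
        return tf.nn.relu(x)
```

Because the skip connection adds the input back to the output, the last
convolution must produce the same number of channels as the block's input.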
