The images need to be normalized before we train the network. For better performance you can even add jitter to the images:

def normalize(input_image, label):
    input_image = tf.cast(input_image, tf.float32)
    input_image = (input_image / 127.5) - 1
    return input_image

The preceding function, when applied to images, will normalize them to the range [-1, 1]. Let us apply this to our train and test datasets and create a data generator that will provide images for training in batches:

train_A = train_A.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
train_B = train_B.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_A = test_A.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_B = test_B.map(normalize, num_parallel_calls=AUTOTUNE).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

In the preceding code, the num_parallel_calls argument lets tf.data take advantage of multiple CPU cores in the system; ideally you should set its value to the number of CPU cores in your system. If you are not sure, use AUTOTUNE = tf.data.experimental.AUTOTUNE so that TensorFlow determines the right number for you dynamically.

Before moving ahead with the model definition, let us look at the images. Each image is rescaled before plotting so that its pixel intensities lie in the [0, 1] range expected by plt.imshow():

inpA = next(iter(train_A))
inpB = next(iter(train_B))
plt.subplot(121)
plt.title("Train Set A")
plt.imshow(inpA[0]*0.5 + 0.5)
plt.subplot(122)
plt.title("Train Set B")
plt.imshow(inpB[0]*0.5 + 0.5)
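To see why the two scale factors fit together, note that (image / 127.5) - 1 maps pixel values from [0, 255] to [-1, 1], and the x*0.5 + 0.5 used before plotting maps them back to [0, 1]. A minimal plain-Python sketch of the same arithmetic (the helper names here are ours, for illustration only; the actual pipeline uses the TensorFlow normalize() above):

```python
def normalize_pixel(p):
    # Mirrors (image / 127.5) - 1: maps [0, 255] -> [-1, 1]
    return p / 127.5 - 1.0

def denormalize_pixel(q):
    # Mirrors the q * 0.5 + 0.5 used before plotting: maps [-1, 1] -> [0, 1]
    return q * 0.5 + 0.5

print(normalize_pixel(0))                        # -1.0
print(normalize_pixel(255))                      # 1.0
print(denormalize_pixel(normalize_pixel(255)))   # 1.0
```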
Generative Adversarial Networks
To construct the generator and discriminator we will require three submodules: an
upsampling layer, which takes in an image and performs a transpose convolution
operation; a downsampling layer, which performs the conventional convolution
operation; and a residual layer, so that we can have a sufficiently deep model. These
layers are defined in the functions downsample() and upsample(), and in
ResnetIdentityBlock, a class based on the TensorFlow Keras Model API. You can see
the finer implementation details of these functions in the GitHub repo notebook
CycleGAN_TF2.ipynb.
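The key idea behind the residual layer is the identity skip connection: the block adds its input back to the output of its inner layers, y = x + F(x), so the stacked layers only have to learn a residual and gradients can flow through the shortcut in a deep model. A toy numeric sketch of this connection (plain Python; the inner transform here is a made-up stand-in for the convolutional layers inside ResnetIdentityBlock):

```python
def residual_block(x, inner):
    # y = x + F(x): add the input back to the transformed output
    return [xi + fi for xi, fi in zip(x, inner(x))]

# Hypothetical inner transform F, standing in for the conv layers.
double = lambda x: [2.0 * v for v in x]

print(residual_block([1.0, 2.0, 3.0], double))  # [3.0, 6.0, 9.0]
```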
Let us now build our generator:
def Generator():
    down_stack = [
        downsample(64, 4, apply_batchnorm=False),
        downsample(128, 4),
        downsample(256, 4),
        downsample(512, 4)
    ]

    up_stack = [
        upsample(256, 4),
        upsample(128, 4),
        upsample(64, 4),
    ]
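To follow the shapes through this encoder-decoder, assume each downsample() is a stride-2, 'same'-padded convolution (halving the spatial size) and each upsample() is a stride-2 transpose convolution (doubling it), as in the pix2pix-style helpers in the notebook. A quick size walkthrough, assuming 256x256 inputs (a common CycleGAN resolution; the actual values depend on your data):

```python
def down(size):
    # stride-2 convolution with 'same' padding halves the spatial size
    return size // 2

def up(size):
    # stride-2 transpose convolution with 'same' padding doubles it
    return size * 2

size = 256          # assumed input resolution
for _ in range(4):  # down_stack: downsample(64), (128), (256), (512)
    size = down(size)
print(size)         # 16 after the encoder

for _ in range(3):  # up_stack: upsample(256), (128), (64)
    size = up(size)
print(size)         # 128; a final layer (past this excerpt) restores 256
```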