- Pages 2 and 3: Title page, Advanced Deep Learning with Keras
- Pages 6 and 7: Acknowledgments
- Pages 8 to 11: Table of Contents
- Pages 12 to 17: Preface: recent advances in deep learning, per-chapter summaries (e.g., Chapter 5, Improved GANs), and a sample code excerpt beginning def encoder_layer(inputs, filters=16, …
- Pages 18 to 53: Chapter 1, Introducing Advanced Deep Learning with Keras: installing Keras and TensorFlow; the three core networks (MLPs, CNNs, RNNs) demonstrated on MNIST digit classification; relu and softmax activations, l2 weight regularization, Dropout, the RMSprop optimizer, loss functions, and test-accuracy comparisons across layer/optimizer/regularizer configurations.
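The Chapter 1 pages build MLPs with relu hidden layers and a softmax output (the Activation('relu') and Activation('softmax') snippets). As a minimal, framework-free sketch of what those two activations compute, here is a toy forward pass in NumPy; the layer sizes and random weights are illustrative, not the book's model:

```python
import numpy as np

def relu(x):
    # Element-wise rectified linear unit, as in Activation('relu')
    return np.maximum(0, x)

def softmax(x):
    # Numerically stable softmax over the last axis, as in Activation('softmax')
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

# Toy forward pass of a 2-layer MLP on a batch of 4 flattened 28x28 "images"
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 784))
w1, b1 = rng.normal(scale=0.01, size=(784, 256)), np.zeros(256)
w2, b2 = rng.normal(scale=0.01, size=(256, 10)), np.zeros(10)
probs = softmax(relu(x @ w1 + b1) @ w2 + b2)
print(probs.shape)                          # (4, 10)
print(np.allclose(probs.sum(axis=1), 1.0))  # each row is a probability distribution
```

The softmax rows summing to one is what lets the output be read as per-digit class probabilities.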
- Pages 56 to 86: Chapter 2, Deep Neural Networks: the Keras Functional API; ResNet and its residual add operation, including ResNet v2; DenseNet with its Conv2D(4 * growth_rate, …) bottleneck layers; chapter references beginning with Kaiming He and others.
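The "add operation" mentioned on page 70 is ResNet's identity shortcut, y = F(x) + x. A framework-free NumPy sketch of one residual block follows; the dense transforms stand in for the Conv2D-BN-ReLU pairs of the book's Keras implementation, and the weights are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, w1, w2):
    # F(x): two linear transforms with a relu in between,
    # standing in for the Conv2D-BN-ReLU pairs of a ResNet block
    fx = relu(x @ w1) @ w2
    # The "add operation": identity shortcut plus the residual branch
    return relu(fx + x)

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))
w1 = rng.normal(scale=0.1, size=(16, 16))
w2 = rng.normal(scale=0.1, size=(16, 16))
y = residual_block(x, w1, w2)
print(y.shape)  # (8, 16): the shortcut requires matching shapes
```

The shape check illustrates why ResNet needs a projection (e.g. a 1x1 convolution) on the shortcut whenever F(x) changes the feature dimensions.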
- Pages 89 to 114: Chapter 3, Autoencoders: latent representations, building and training an autoencoder on MNIST, visualizing the latent space, denoising autoencoders, and automatic colorization of grayscale images.
- Pages 116 to 141: Chapter 4, Generative Adversarial Networks (GANs): the generator-discriminator training setup and its loss functions; DCGAN built with Conv2DTranspose layers, freezing the discriminator (discriminator.trainable = False) while training the adversarial model; and Conditional GAN (CGAN), which concatenates one-hot class labels (np.eye) with the inputs.
- Pages 142 to 177: Chapter 5, Improved GANs: distance measures between distributions, including the Kullback-Leibler divergence D_KL(p_data ‖ p_g) = ∫ p_data(x) log [p_data(x) / p_g(x)] dx; the WGAN discriminator loss L^(D) = −E_{x∼p_data}[D(x)] + E_z[D(G(z))] with the Lipschitz constraint enforced by weight clipping; WGAN and Least Squares GAN (LSGAN) training code; and the Auxiliary Classifier GAN (ACGAN) with its stack of LeakyReLU-Conv2D discriminator layers and alternating discriminator/adversarial training, with side-by-side comparisons of generated images.
- Pages 178 to 218: Chapter 6, Disentangled Representation GANs: InfoGAN with discrete and continuous latent codes and its mutual-information losses (categorical cross entropy for the discrete code), and StackedGAN with per-stack encoders, generators, and discriminators trained alternately; reference beginning with Xi Chen and others.
- Pages 221 to 252: Chapter 7, Cross-Domain GANs: cross-domain image translation with CycleGAN, including its cycle-consistency loss, the PatchGAN discriminator, and example applications such as grayscale-to-color translation.
- Pages 254 to 286: Chapter 8, Variational Autoencoders (VAEs): the variational lower bound and its KL term (log P_θ(x | c) − D_KL(…)), the reparameterization trick (epsilon = K.random_normal(…)), encoder and decoder models on MNIST, the Conditional VAE (CVAE), and β-VAE for disentanglement; references including I. Higgins, L. Matthey, A. Pal, and others.
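The sampling line on page 262 (epsilon = K.random_normal(...)) is the reparameterization trick, z = mu + sigma * epsilon with sigma = exp(0.5 * log_var), which moves the randomness into epsilon so gradients can flow through the encoder outputs. A NumPy sketch, with illustrative shapes:

```python
import numpy as np

def sample_z(z_mean, z_log_var, rng):
    # Reparameterization trick: draw epsilon ~ N(0, I), then shift and scale,
    # so z_mean and z_log_var stay differentiable parameters
    epsilon = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * epsilon

rng = np.random.default_rng(42)
z_mean = np.zeros((10000, 2))
z_log_var = np.full((10000, 2), np.log(4.0))  # variance 4, i.e. stddev 2
z = sample_z(z_mean, z_log_var, rng)
print(z.shape)  # (10000, 2)
# z.mean() is close to 0 and z.std() close to 2, matching the requested Gaussian
```

Working with log-variance rather than variance keeps the network output unconstrained while guaranteeing a positive standard deviation.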
- Pages 289 to 322: Chapter 9, Deep Reinforcement Learning: returns, Q-Learning on a simple environment, the percentage of exploration versus exploitation, and Deep Q-Networks (DQN) with stored experience replay and target-network updates, implemented in Keras.
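Chapter 9's Q-Learning pages revolve around the tabular update Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)]. A minimal sketch of one such step on a made-up 2-state, 2-action problem (the states and reward here are illustrative, not the book's environment):

```python
import numpy as np

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # One tabular Q-Learning step: move Q(s, a) toward the TD target
    td_target = r + gamma * np.max(q[s_next])
    q[s, a] += alpha * (td_target - q[s, a])
    return q

q = np.zeros((2, 2))  # 2 states x 2 actions, initialized to zero
q = q_update(q, s=0, a=1, r=1.0, s_next=1)
print(q[0, 1])  # 0.1 = alpha * (1.0 + 0.9 * 0 - 0); all other entries stay 0
```

Repeating this update as the agent explores (e.g. epsilon-greedy action selection) makes the table converge toward the optimal action values; DQN replaces the table with a neural network.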
- Pages 324 to 358: Chapter 10, Policy Gradient Methods: the softmax policy π(a | s, θ), gradient updates for REINFORCE, REINFORCE with baseline, the Actor-Critic method, and Advantage Actor-Critic (A2C); a continuous Gaussian policy (mean, stddev via tf.distributions) evaluated on MountainCarContinuous, with performance comparisons of the four methods.
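Page 326's π(a | s, θ) = softmax(...) is the discrete policy whose log-probability REINFORCE differentiates. A NumPy sketch of the score function ∇_θ log π(a | s, θ) for a linear-softmax policy (the linear parameterization is an illustrative assumption, not the book's network):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def grad_log_pi(theta, s, a):
    # For pi(a | s, theta) = softmax(theta.T @ s), the score function is
    # grad_theta log pi(a | s) = outer(s, one_hot(a) - pi)
    pi = softmax(theta.T @ s)
    one_hot = np.eye(len(pi))[a]
    return np.outer(s, one_hot - pi)

rng = np.random.default_rng(7)
s = rng.normal(size=4)    # state features
theta = np.zeros((4, 3))  # 3 actions; zero weights give a uniform policy
g = grad_log_pi(theta, s, a=2)
print(g.shape)  # (4, 3), same shape as theta
# REINFORCE then ascends: theta += learning_rate * return * g
```

Each row of g sums to zero because increasing the chosen action's probability necessarily decreases the others', which is the defining property of the softmax score function.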
- Pages 361 to 367: Other Books You May Enjoy, and the Index.