[Page-preview residue from an online reader of *Advanced Deep Learning with Keras*. Only the book's outline is recoverable from the truncated fragments: Chapter 1, Introducing Advanced Deep Learning with Keras (MLPs, CNNs, RNNs on MNIST); Chapter 2, Deep Neural Networks (ResNet, DenseNet); Chapter 3, Autoencoders; Chapter 4, Generative Adversarial Networks (DCGAN, CGAN); Chapter 5, Improved GANs (WGAN, ACGAN); Chapter 6, Disentangled Representation GANs (InfoGAN, StackedGAN); Chapter 7, Cross-Domain GANs (CycleGAN); Chapter 8, Variational Autoencoders (CVAE); Chapter 9, Deep Reinforcement Learning (Q-Learning, DQN); Chapter 10, Policy Gradient Methods (REINFORCE, Actor-Critic, A2C).]