Deep Learning with TensorFlow 2 and Keras (Packt): Contents

- Preface
- Chapter 1: Neural Network Foundations with TensorFlow 2.0
- Chapter 2: TensorFlow 1.x and 2.x
- Chapter 3: Regression
- Chapter 4: Convolutional Neural Networks
- Chapter 5: Advanced Convolutional Neural Networks
- Chapter 6: Generative Adversarial Networks
- Chapter 7: Word Embeddings
- Chapter 8: Recurrent Neural Networks
- Chapter 9: Autoencoders
- Chapter 10: Unsupervised Learning
- Chapter 11: Reinforcement Learning
- Chapter 12: TensorFlow and Cloud
- Chapter 13: TensorFlow for Mobile and IoT and TensorFlow.js
- Chapter 14: An Introduction to AutoML
- Chapter 15: The Math Behind Deep Learning
- Chapter 16: Tensor Processing Unit
- Other Books You May Enjoy
- Index