Deep Learning with TensorFlow 2 and Keras (Packt)

Contents:
- Preface
- Chapter 1: Neural Network Foundations with TensorFlow 2.0
- Chapter 2: TensorFlow 1.x and 2.x
- Chapter 3: Regression
- Chapter 4: Convolutional Neural Networks
- Chapter 5: Advanced Convolutional Neural Networks
- Chapter 6: Generative Adversarial Networks
- Chapter 7: Word Embeddings
- Chapter 8: Recurrent Neural Networks
- Chapter 9: Autoencoders
- Chapter 10: Unsupervised Learning
- Chapter 11: Reinforcement Learning
- Chapter 12: TensorFlow and Cloud
- Chapter 13: TensorFlow for Mobile and IoT and TensorFlow.js
- Chapter 14: An Introduction to AutoML
- Chapter 15: The Math Behind Deep Learning
- Chapter 16: Tensor Processing Unit
- Other Books You May Enjoy
- Index