[ 583 ]

Chapter 16

The execution is super-fast on a TPU, where each epoch takes only a couple of seconds:

Epoch 1/10
INFO:tensorflow:New input shapes; (re-)compiling: mode=train (# of cores 8), [TensorSpec(shape=(1024,), dtype=tf.int32, name=None), TensorSpec(shape=(1024, 28, 28, 1), dtype=tf.float32, name=None), TensorSpec(shape=(1024, 10), dtype=tf.float32, name=None)]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for input
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 2.567350149154663 secs
INFO:tensorflow:Setting weights on TPU model.
60/60 [==============================] - 8s 126ms/step - loss: 0.9622 - acc: 0.6921
Epoch 2/10
60/60 [==============================] - 2s 41ms/step - loss: 0.2406 - acc: 0.9292
Epoch 3/10
60/60 [==============================] - 3s 42ms/step - loss: 0.1412 - acc: 0.9594
Epoch 4/10
60/60 [==============================] - 3s 42ms/step - loss: 0.1048 - acc: 0.9701
Epoch 5/10
60/60 [==============================] - 3s 42ms/step - loss: 0.0852 - acc: 0.9756
Epoch 6/10
60/60 [==============================] - 3s 42ms/step - loss: 0.0706 - acc: 0.9798
Epoch 7/10
60/60 [==============================] - 3s 42ms/step - loss: 0.0608 - acc: 0.9825
Epoch 8/10
60/60 [==============================] - 3s 42ms/step - loss: 0.0530 - acc: 0.9846
Epoch 9/10
60/60 [==============================] - 3s 42ms/step - loss: 0.0474 - acc: 0.9863
Tensor Processing Unit

Epoch 10/10
60/60 [==============================] - 3s 42ms/step - loss: 0.0418 - acc: 0.9876
<tensorflow.python.keras.callbacks.History at 0x7fbb3819bc50>

As you can see, running a simple MNIST model on TPUs is extremely fast. Each epoch takes about 3 seconds, even though the model is a CNN with three convolutions followed by two dense stages.

Using pretrained TPU models

Google offers a collection of models pretrained with TPUs in the GitHub tensorflow/tpu repo (https://github.com/tensorflow/tpu). The models cover image recognition, object detection, low-resource models, machine translation and language models, speech recognition, and image generation. Whenever possible, my suggestion is to start with a pretrained model [6], and then fine-tune it or apply some form of transfer learning. As of September 2019, the following models are available:

Image Recognition, Segmentation, and more:
- Image Recognition: AmoebaNet-D, ResNet-50/101/152/2000, Inception v2/v3/v4
- Object Detection: RetinaNet, Mask R-CNN
- Image Segmentation: Mask R-CNN, DeepLab, RetinaNet
- Low-Resource Models: MnasNet, MobileNet, SqueezeNet

Machine Translation and Language Models:
- Machine Translation (transformer based)
- Sentiment Analysis (transformer based)
- Question Answer: BERT

Speech Recognition:
- ASR Transformer

Image Generation:
- Image Transformer
- DCGAN, GAN

Table 1: State-of-the-art collection of models pretrained with TPUs available on GitHub

[ 584 ]
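The "start with a pretrained model, then fine-tune it" advice can be sketched as follows. This is an illustration, not the exact workflow of the tensorflow/tpu repo: it uses a Keras Applications ResNet50 backbone rather than the repo's checkpoints, and the input shape, class count, and `weights=None` setting (use `weights="imagenet"` when online) are placeholder assumptions.

```python
import tensorflow as tf

def build_finetune_model(num_classes=10):
    """Hypothetical transfer-learning setup: freeze a pretrained
    backbone and train only a fresh classification head."""
    base = tf.keras.applications.ResNet50(
        include_top=False,
        weights=None,            # assumption: use weights="imagenet" when online
        input_shape=(224, 224, 3),
        pooling="avg")
    base.trainable = False       # keep the pretrained weights fixed
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_finetune_model()
```

With the backbone frozen, only the dense head's weights are updated during training; unfreezing some of the top layers afterwards (with a small learning rate) is the usual second fine-tuning stage.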
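The log above was produced with the older TF 1.x Keras-TPU integration (note the "Setting weights on TPU model" message). As a rough sketch only, the same experiment can be expressed with the current tf.distribute API; the model below (three convolutions followed by two dense stages, as described in the text) and the no-TPU fallback are illustrative assumptions, not the book's exact code.

```python
import tensorflow as tf

def get_strategy():
    """Connect to a TPU if one is reachable; otherwise fall back to the
    default (CPU/GPU) strategy so the sketch also runs without a TPU."""
    try:
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    except Exception:  # no TPU available in this runtime
        return tf.distribute.get_strategy()

strategy = get_strategy()

# Variables created inside the scope are replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=10, batch_size=1024) as in the log
```

On a Colab TPU runtime the global batch of 1,024 is split across the 8 cores (128 examples per core), which is consistent with the `(1024, 28, 28, 1)` TensorSpec shapes in the compilation log above.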