Keras or tf.keras?
Another legitimate question is whether you should use Keras with TensorFlow as a
backend or, instead, use the APIs in tf.keras, which are directly available in
TensorFlow. Note that there is not a 1:1 correspondence between Keras and tf.keras:
many endpoints in tf.keras are not implemented in Keras, and tf.keras does not
support multiple backends as Keras does. So, Keras or tf.keras? My suggestion is
the second option rather than the first one. tf.keras has multiple advantages over
Keras, namely the TensorFlow enhancements discussed in this chapter: eager
execution; native support for distributed training, including training on TPUs; and
support for the TensorFlow SavedModel exchange format. However, the first option
is still the most relevant one if you plan to write highly portable code that can run
on multiple backends, including Google TensorFlow, Microsoft CNTK, Amazon
MXNet, and Theano. Note that Keras is an independent open source project, and its
development is not dependent on TensorFlow; therefore, Keras will continue to be
developed for the foreseeable future. Note that Keras 2.3.0 (released on September
17, 2019) is the first release of multi-backend Keras that supports TensorFlow 2.0. It
maintains compatibility with TensorFlow 1.14 and 1.13, as well as Theano and CNTK.
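To make the trade-off concrete, the following minimal sketch (the model, layer
sizes, and export path are illustrative assumptions, not taken from an example in
this book) uses tf.keras together with two of the tf.keras-only advantages just
mentioned: a distribution strategy for training, and export in the SavedModel
format. Equivalent multi-backend Keras code would import from keras instead and
could not use tf.distribute directly:

import tensorflow as tf

# tf.keras ships with TensorFlow, so TensorFlow-specific features work natively.
# MirroredStrategy replicates the model across the available local devices
# (all local GPUs, or the CPU if no GPU is present).
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse')

# Export in the TensorFlow SavedModel exchange format.
model.save('my_model', save_format='tf')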
Let's conclude the chapter with a new comparison: the primary machine learning
software tools used by the top-5 teams on Kaggle in each competition. This was a
survey run by François Chollet on Twitter at the beginning of April 2019 (thanks,
François, for agreeing to have it included in this book!):
Figure 5: Primary ML software tools used by top-5 teams on Kaggle in 2019
In this section, we have seen the main differences between Keras and tf.keras.

Summary

TensorFlow 2.0 is a rich development ecosystem composed of two main parts:
Training and Serving. Training consists of a set of libraries for dealing with
datasets (tf.data); a set of libraries for building models, including high-level
libraries (tf.keras and Estimators) and low-level libraries (tf.*); and a collection
of pretrained models (tf.Hub), which will be discussed in Chapter 5, Advanced
Convolutional Neural Networks. Training can happen on CPUs, GPUs, and TPUs via
distribution strategies, and the result can be saved using the appropriate libraries.
Serving can happen on multiple platforms, including on-prem, cloud, Android, iOS,
Raspberry Pi, any browser supporting JavaScript, and Node.js. Many language
bindings are supported, including Python, C, C#, Java, Swift, R, and others. The
following diagram summarizes the architecture of TensorFlow 2.0 as discussed in
this chapter:

Figure 6: Summary of TensorFlow 2.0 architecture
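As a final, concrete illustration of this architecture, here is a minimal
end-to-end sketch of the Training and Serving flow, using synthetic data (the
array shapes, layer sizes, and export path are illustrative assumptions, not from
the book): tf.data builds the input pipeline, tf.keras defines and trains the
model, and the result is exported as a SavedModel ready for deployment:

import numpy as np
import tensorflow as tf

# Synthetic toy data; in practice tf.data would read from files or a dataset library.
features = np.random.rand(1000, 10).astype('float32')
labels = np.random.rand(1000, 1).astype('float32')

# Training: tf.data handles the input pipeline...
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(32)

# ...and tf.keras, the high-level model-building library, defines and fits the model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(dataset, epochs=2)

# Serving: the trained model is exported as a SavedModel, the format consumed
# by TensorFlow Serving and by the TensorFlow Lite and TensorFlow.js converters.
tf.saved_model.save(model, 'exported_model')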