grace_hopper = np.array(grace_hopper)/255.0
result = classifier.predict(grace_hopper[np.newaxis, ...])
predicted_class = np.argmax(result[0], axis=-1)
print(predicted_class)

Pretty simple indeed. Just remember to use hub.KerasLayer() for wrapping any Hub layer. In this section, we have discussed how to use TensorFlow Hub. Next, we will focus on other CNN architectures.

Other CNN architectures

In this section we will discuss several other CNN architectures, including AlexNet, residual networks, HighwayNets, DenseNets, and Xception.

AlexNet

One of the first convolutional networks was AlexNet [4], which consisted of only eight layers; the first five were convolutional ones with max-pooling layers, and the last three were fully connected. AlexNet [4] is an article cited more than 35,000 times, and it started the deep learning revolution for computer vision. Since then, networks have become deeper and deeper, and new ideas have been proposed.

Residual networks

Residual networks (ResNets) are based on the interesting idea of allowing earlier layers to be fed directly into deeper layers. These are the so-called skip connections (or fast-forward connections). The key idea is to minimize the risk of vanishing or exploding gradients for deep networks (see Chapter 9, Autoencoders). The building block of a ResNet is called a "residual block" or "identity block," and it includes both forward and fast-forward connections. In this example (Figure 20), the output of an earlier layer is added to the output of a later layer before being sent into a ReLU activation function:

Figure 20: An example of a residual block
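To make the idea concrete, here is a minimal sketch of such a residual block written with tf.keras; the helper name residual_block and the two-convolution layout are illustrative assumptions, not the exact block used in the original ResNet paper:

import tensorflow as tf

def residual_block(x, filters):
    # Forward path: two 3x3 convolutions.
    y = tf.keras.layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding='same')(y)
    # Fast-forward (skip) connection: the earlier output x is added
    # to the later output y before the final ReLU. Note that x must
    # already have `filters` channels for the addition to be valid.
    out = tf.keras.layers.Add()([x, y])
    return tf.keras.layers.Activation('relu')(out)

When the channel counts differ, real implementations typically project x with an extra 1×1 convolution before the addition.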
HighwayNets and DenseNets
An additional weight matrix may be used to learn the skip weights; such models are frequently denoted as HighwayNets. In contrast, models with several parallel skips are known as DenseNets [5]. It has been noted that the human brain might have similar patterns to residual networks, since cortical layer VI neurons receive input from layer I, skipping intermediary layers. In addition, residual networks can be faster to train, since there are fewer layers to propagate through during each iteration (deeper layers get input sooner due to the skip connections). The following is an example of a DenseNet (Figure 21, as shown in http://arxiv.org/abs/1608.06993):
Figure 21: An example of DenseNets
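As a rough illustration of these parallel skips, the following tf.keras sketch concatenates each layer's output with all the feature maps that precede it. The names dense_block, num_layers, and growth_rate are illustrative assumptions; the actual DenseNet block in [5] adds further details such as bottleneck 1×1 convolutions:

import tensorflow as tf

def dense_block(x, num_layers=4, growth_rate=12):
    for _ in range(num_layers):
        # BatchNorm -> ReLU -> 3x3 convolution, producing
        # `growth_rate` new feature maps.
        y = tf.keras.layers.BatchNormalization()(x)
        y = tf.keras.layers.Activation('relu')(y)
        y = tf.keras.layers.Conv2D(growth_rate, 3, padding='same')(y)
        # Parallel skip: every earlier feature map is concatenated
        # with the new one and fed to the next layer.
        x = tf.keras.layers.Concatenate()([x, y])
    return x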
Xception
Xception networks use two basic blocks: a depthwise convolution and a pointwise convolution. A depthwise convolution is a channel-wise n × n spatial convolution: if an image has three channels, we apply three separate n × n convolutions, one per channel. A pointwise convolution is a 1×1 convolution. In Xception – an "extreme" version of an Inception module – we first use a 1×1 convolution to map cross-channel correlations, and then separately map the spatial correlations of every output channel, as shown in Figure 22 (from https://arxiv.org/pdf/1610.02357.pdf):
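The following tf.keras sketch reproduces this "extreme" ordering: a pointwise 1×1 convolution first, followed by a channel-wise depthwise convolution. The helper name extreme_inception_block is an illustrative assumption; note that tf.keras.layers.SeparableConv2D implements the closely related depthwise-then-pointwise variant as a single layer:

import tensorflow as tf

def extreme_inception_block(x, filters, kernel_size=3):
    # Pointwise 1x1 convolution: maps cross-channel correlations
    # into `filters` output channels.
    x = tf.keras.layers.Conv2D(filters, 1, padding='same', activation='relu')(x)
    # Depthwise convolution: one n x n spatial convolution per channel,
    # mapping the spatial correlations of each output channel separately.
    x = tf.keras.layers.DepthwiseConv2D(kernel_size, padding='same', activation='relu')(x)
    return x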