grace_hopper = np.array(grace_hopper)/255.0   # scale pixel values to [0, 1]
result = classifier.predict(grace_hopper[np.newaxis, ...])   # add a batch dimension
predicted_class = np.argmax(result[0], axis=-1)   # index of the highest-scoring class
print(predicted_class)

Pretty simple indeed. Just remember to use hub.KerasLayer() for wrapping any Hub layer. In this section, we have discussed how to use TensorFlow Hub. Next, we will focus on other CNN architectures.
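Before moving on, here is a minimal sketch of that wrapping pattern; the module URL below is only an assumption for illustration, and any image classification model published on tfhub.dev can be used in the same way:

import tensorflow as tf
import tensorflow_hub as hub

# Assumed module handle; swap in any image classification model from tfhub.dev.
module_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"

# hub.KerasLayer() wraps the Hub module so it behaves like any other Keras layer.
classifier = tf.keras.Sequential([
    hub.KerasLayer(module_url, input_shape=(224, 224, 3))
])
classifier.summary()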

Other CNN architectures

In this section, we will discuss several other CNN architectures, including AlexNet, residual networks, HighwayNets, DenseNets, and Xception.

AlexNet

One of the first convolutional networks was AlexNet [4], which consisted of only eight layers; the first five were convolutional with max-pooling, and the last three were fully connected. The AlexNet paper [4] has been cited more than 35,000 times and started the deep learning revolution in computer vision. Networks then started to become deeper and deeper. More recently, a new idea has been proposed.
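To make that layout concrete, the following is a rough Keras sketch of an AlexNet-style network; the filter sizes and counts approximate the original paper and are meant as an illustration, not an exact reproduction:

import tensorflow as tf
from tensorflow.keras import layers

# Illustrative AlexNet-style stack: five convolutional layers (with max-pooling)
# followed by three fully connected layers.
alexnet_like = tf.keras.Sequential([
    layers.Conv2D(96, 11, strides=4, activation='relu', input_shape=(227, 227, 3)),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(256, 5, padding='same', activation='relu'),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(384, 3, padding='same', activation='relu'),
    layers.Conv2D(384, 3, padding='same', activation='relu'),
    layers.Conv2D(256, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(3, strides=2),
    layers.Flatten(),
    layers.Dense(4096, activation='relu'),
    layers.Dense(4096, activation='relu'),
    layers.Dense(1000, activation='softmax')
])
alexnet_like.summary()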

Residual networks

Residual networks (ResNets) are based on the interesting idea of allowing earlier layers to be fed directly into deeper layers. These are the so-called skip connections (or fast-forward connections). The key idea is to minimize the risk of vanishing or exploding gradients for deep networks (see Chapter 9, Autoencoders). The building block of a ResNet is called a "residual block" or "identity block," which includes both forward and fast-forward connections. In this example (Figure 20), the output of an earlier layer is added to the output of a later layer before being sent into a ReLU activation function:

Figure 20: An example of a residual block
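A minimal sketch of such a residual block, written with the Keras functional API (the layer sizes here are illustrative assumptions, not taken from a specific ResNet variant):

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    # Forward path: two convolutions.
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    # Skip (fast-forward) connection: add the earlier output to the later one,
    # then send the sum through a ReLU activation.
    y = layers.Add()([x, y])
    return layers.ReLU()(y)

inputs = tf.keras.Input(shape=(32, 32, 64))
model = tf.keras.Model(inputs, residual_block(inputs))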

HighwayNets and DenseNets

An additional weight matrix may be used to learn the skip weights; these models are frequently denoted as HighwayNets. Models with several parallel skips are instead known as DenseNets [5]. It has been noted that the human brain might have similar patterns to residual networks, since the cortical layer VI neurons get input from layer I, skipping intermediary layers. In addition, residual networks can be faster to train, since there are fewer layers to propagate through during each iteration (deeper layers get their input sooner due to the skip connections). The following is an example of DenseNets (Figure 21, as shown in http://arxiv.org/abs/1608.06993):

Figure 21: An example of DenseNets
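As a sketch of the "several parallel skips" idea, the following hypothetical dense block lets every new layer see the concatenation of all previously produced feature maps; the depth and growth rate are arbitrary illustrative values:

import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=12):
    # Each iteration adds growth_rate new feature maps and concatenates them
    # with everything produced so far (the parallel skip connections).
    for _ in range(num_layers):
        y = layers.Conv2D(growth_rate, 3, padding='same', activation='relu')(x)
        x = layers.Concatenate()([x, y])
    return x

inputs = tf.keras.Input(shape=(32, 32, 16))
model = tf.keras.Model(inputs, dense_block(inputs))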

Xception

Xception networks use two basic blocks: a depthwise convolution and a pointwise convolution. A depthwise convolution is a channel-wise n × n spatial convolution: if an image has three channels, then we have three convolutions of n × n. A pointwise convolution is a 1×1 convolution. In Xception (an "extreme" version of an Inception module) we first use a 1×1 convolution to map cross-channel correlations, and then separately map the spatial correlations of every output channel, as shown in Figure 22 (from https://arxiv.org/pdf/1610.02357.pdf).
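A minimal Keras sketch of the two building blocks, with the 1×1 convolution applied first as described above (the number of filters and the input shape are illustrative assumptions); note that Keras also provides layers.SeparableConv2D, which fuses a depthwise and a pointwise step into a single layer:

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(64, 64, 3))

# Pointwise (1x1) convolution: maps cross-channel correlations.
x = layers.Conv2D(32, 1, activation='relu')(inputs)

# Depthwise convolution: one n x n spatial filter applied per channel.
x = layers.DepthwiseConv2D(3, padding='same', activation='relu')(x)

model = tf.keras.Model(inputs, x)
model.summary()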