
Chapter 4

from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image
import numpy as np

# build an extractor that outputs an intermediate block of VGG16
base_model = VGG16(weights='imagenet')
model = Model(inputs=base_model.input,
              outputs=base_model.get_layer('block4_pool').output)

img_path = 'cat.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

# get the features from this block
features = model.predict(x)
print(features)

You might wonder why we want to extract the features from an intermediate layer in a DCNN. The reasoning is that as the network learns to classify images into categories, each layer learns to identify the features that are necessary to perform the final classification. Lower layers identify lower-order features such as colors and edges, and higher layers compose these lower-order features into higher-order features such as shapes or objects. Hence, an intermediate layer can extract important features from an image, and these features are likely to transfer to different kinds of classification tasks.
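A quick way to see this hierarchy is to compare the output shapes at different depths of VGG16: lower layers keep high spatial resolution with few channels, while higher layers trade resolution for many, more abstract channels. The following sketch (using `weights=None` merely to avoid downloading the pretrained weights; in practice you would pass `weights='imagenet'`) prints the feature-map shapes at three depths:

```python
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Model
import numpy as np

base_model = VGG16(weights=None)  # weights=None only to skip the download in this sketch

# probe one layer per depth: early, middle, and late in the network
for layer_name in ['block1_conv2', 'block3_conv3', 'block5_conv3']:
    extractor = Model(inputs=base_model.input,
                      outputs=base_model.get_layer(layer_name).output)
    out = extractor.predict(np.zeros((1, 224, 224, 3)), verbose=0)
    print(layer_name, out.shape)
```

The spatial resolution shrinks from 224x224 at `block1_conv2` to 14x14 at `block5_conv3`, while the channel count grows from 64 to 512, matching the intuition that deeper layers encode fewer, more abstract locations with richer descriptors.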

This has multiple advantages. First, we can rely on training performed on large, publicly available datasets and transfer that learning to novel domains. Second, we can save the time and cost of expensive large-scale training. Third, we can provide reasonable solutions even when we don't have a large number of training examples for our domain. We also get a good starting network shape for the task at hand, instead of guessing it.
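The typical recipe for exploiting these advantages is to freeze the pretrained convolutional base and train only a small new classification head on the target domain. The sketch below assumes a hypothetical new domain with 5 classes; `weights=None` is used only to avoid downloading the pretrained weights here, whereas in practice you would pass `weights='imagenet'`:

```python
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Flatten

NUM_CLASSES = 5  # hypothetical target domain

# reuse the VGG16 convolutional base, dropping its 1,000-way classifier head
base_model = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False  # keep the (pre)trained features fixed

# attach a small trainable head for the new domain
x = Flatten()(base_model.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(NUM_CLASSES, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```

Only the two `Dense` layers are updated during training, so even a few hundred labeled examples per class can be enough to get a usable classifier.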

With this, we conclude the overview of VGG-16, the last deep learning model defined in this chapter. You will see more examples of CNNs in the next chapter.

Summary

In this chapter we have learned how to use deep convnets to recognize MNIST handwritten digits with high accuracy. We used the CIFAR-10 dataset to build a deep learning classifier with 10 categories, and the ImageNet dataset to build an accurate classifier with 1,000 categories. In addition, we investigated how to use large deep learning networks such as VGG16 and very deep networks such as InceptionV3. We concluded with a discussion on transfer learning; in the next chapter we'll see how to adapt prebuilt models trained on large datasets so that they can work well on a new domain.
