
def calc_loss(layer_activations):
    total_loss = 0
    for act in layer_activations:
        # In gradient ascent, we'll want to maximize this value
        # so our image increasingly "excites" the layer
        loss = tf.math.reduce_mean(act)
        # Normalize by the number of units in the layer
        loss /= np.prod(act.shape)
        total_loss += loss
    return total_loss

Now let's run the gradient ascent:

img = tf.Variable(img)
steps = 400
for step in range(steps):
    with tf.GradientTape() as tape:
        activations = forward(img)
        loss = calc_loss(activations)
    gradients = tape.gradient(loss, img)
    # Normalize the gradients
    gradients /= gradients.numpy().std() + 1e-8
    # Update our image by directly adding the gradients
    img.assign_add(gradients)
    if step % 50 == 0:
        clear_output()
        print("Step %d, loss %f" % (step, loss))
        show(deprocess(img.numpy()))
        plt.show()

# Let's see the result
clear_output()
show(deprocess(img.numpy()))
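The snippet above relies on a few helpers (forward(), deprocess(), show(), and clear_output()) defined earlier. If you are assembling this cell in isolation, the following is a minimal sketch of what they might look like, assuming an InceptionV3 feature extractor and IPython display utilities; the choice of mixed3 and mixed5 layers and the exact preprocessing are assumptions for illustration, not the chapter's verbatim definitions:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from IPython.display import clear_output

# Assumed feature extractor: InceptionV3 truncated at a few
# "mixed" layers whose activations we want to maximize
base_model = tf.keras.applications.InceptionV3(include_top=False,
                                               weights='imagenet')
names = ['mixed3', 'mixed5']  # hypothetical layer choice
layers = [base_model.get_layer(name).output for name in names]
feature_model = tf.keras.Model(inputs=base_model.input, outputs=layers)

def forward(img):
    # Add a batch dimension and return the list of layer activations
    return feature_model(tf.expand_dims(img, axis=0))

def deprocess(img):
    # Undo InceptionV3 preprocessing: map [-1, 1] back to [0, 255]
    img = 255 * (img + 1.0) / 2.0
    return tf.cast(img, tf.uint8).numpy()

def show(img):
    # Display the image without axis ticks
    plt.figure(figsize=(8, 8))
    plt.imshow(img)
    plt.axis('off')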



This transforms the image on the left into the psychedelic image on the right:

Figure 26: Applying the Inception transformation (right) to a normal image (left)

Inspecting what a network has learned

A particularly interesting research effort is devoted to understanding what neural networks actually learn in order to recognize images so well. This is called neural network "interpretability." Activation Atlases is a promising recent result that aims to show feature visualizations of averaged activations. In this way, activation atlases produce a global map of the features seen through the eyes of the network. Let's look at a demo available at https://distill.pub/2019/activation-atlas/:

Figure 27: A screenshot showing an example of an Activation Atlas
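The core of the idea is to record a layer's activations over many input images and average them, so that each point in the atlas summarizes what the network typically "sees" there. The following sketch illustrates only this averaging step; it is not Distill's actual pipeline (which also reduces activations to a 2D layout and renders feature visualizations per grid cell), and the mixed4 layer name is an assumption:

import tensorflow as tf

# Assumed feature extractor: InceptionV3 truncated at one layer
model = tf.keras.applications.InceptionV3(include_top=False,
                                          weights='imagenet')
layer_model = tf.keras.Model(inputs=model.input,
                             outputs=model.get_layer('mixed4').output)

def averaged_activations(images):
    # images: a batch of preprocessed images, shape (N, 299, 299, 3)
    acts = layer_model(images)  # shape (N, h, w, channels)
    # Average over the spatial dimensions to get one activation
    # vector per image, then over the batch for a global summary
    per_image = tf.reduce_mean(acts, axis=[1, 2])  # (N, channels)
    return tf.reduce_mean(per_image, axis=0)       # (channels,)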

