Chapter 5

    for act in layer_activations:
        # In gradient ascent, we'll want to maximize this value
        # so our image increasingly "excites" the layer
        loss = tf.math.reduce_mean(act)
        # Normalize by the number of units in the layer
        loss /= np.prod(act.shape)
        total_loss += loss
    return total_loss

Now let's run the gradient ascent:

    img = tf.Variable(img)
    steps = 400
    for step in range(steps):
        with tf.GradientTape() as tape:
            activations = forward(img)
            loss = calc_loss(activations)
        gradients = tape.gradient(loss, img)
        # Normalize the gradients
        gradients /= gradients.numpy().std() + 1e-8
        # Update our image by directly adding the gradients
        img.assign_add(gradients)
        if step % 50 == 0:
            clear_output()
            print("Step %d, loss %f" % (step, loss))
            show(deprocess(img.numpy()))
            plt.show()

    # Let's see the result
    clear_output()
    show(deprocess(img.numpy()))
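The mechanics of the ascent loop can be reproduced in plain NumPy on a toy "layer" (a minimal sketch: the random linear map, ReLU, and 1-D "image" below are stand-ins invented for illustration, not the pretrained Inception network the chapter uses, and the gradient is computed analytically instead of with tf.GradientTape):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a network layer: a fixed random linear map + ReLU.
W = rng.standard_normal((64, 16))

def forward(img):
    # "Layer activations" for a 1-D toy image
    return np.maximum(img @ W, 0.0)

def calc_loss(act):
    # Mean activation, normalized by the number of units,
    # mirroring the loss in the TensorFlow version
    return act.mean()

img = rng.standard_normal((1, 64)) * 0.01
loss_start = calc_loss(forward(img))

for step in range(200):
    act = forward(img)
    # Analytic gradient of the mean ReLU activation w.r.t. the input
    # (tape.gradient computes this automatically in the real code)
    grad = ((act > 0).astype(float) / act.size) @ W.T
    # Normalize the gradient, as the loop above does
    grad /= grad.std() + 1e-8
    # Gradient *ascent*: add the gradient to the image
    img += grad

loss_end = calc_loss(forward(img))
print(loss_start, "->", loss_end)
```

Because we add the normalized gradient instead of subtracting it, the mean activation climbs with every step, which is exactly why the image increasingly "excites" the chosen layer.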
Advanced Convolutional Neural Networks
This transforms the image on the left into the psychedelic image on the right:
Figure 26: Applying the Inception transformation (right) to a normal image (left)
Inspecting what a network has learned
A particularly interesting line of research is devoted to understanding what
neural networks actually learn that enables them to recognize images so well.
This is called neural network "interpretability." Activation Atlases is a promising
recent technique that shows feature visualizations of averaged activations.
In this way, activation atlases produce a global map of the features the network
has learned, seen through the eyes of the network. Let's look at a demo available
at https://distill.pub/2019/activation-atlas/:
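The averaging step at the heart of an activation atlas can be sketched in plain NumPy (a toy illustration only: the random activations and the bucketing by strongest channel below are invented stand-ins; the real pipeline projects the vectors into 2D, grids them, and renders a feature visualization for each cell):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy activations: a batch of 100 inputs, each producing an 8x8 spatial
# grid of 32-channel ReLU outputs from some hidden layer.
activations = np.maximum(rng.standard_normal((100, 8, 8, 32)), 0.0)

# Step 1: average each input's activations over spatial positions,
# giving one 32-d "activation vector" per input.
avg_per_input = activations.mean(axis=(1, 2))   # shape (100, 32)

# Step 2 (simplified): group similar vectors and average within each
# group. Here we bucket by the strongest channel just to show the idea.
buckets = avg_per_input.argmax(axis=1)
atlas_cells = {c: avg_per_input[buckets == c].mean(axis=0)
               for c in np.unique(buckets)}

print(len(atlas_cells), "atlas cells, each a 32-d activation profile")
```

Each cell's averaged vector is what the atlas then turns into an image, so the final picture summarizes how the whole batch looks through the eyes of that layer.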
Figure 27: A screenshot showing an example of an Activation Atlas