
Autoencoders

We can see that in the standard autoencoder (a), many hidden units have very large weights (brighter), suggesting that they are overworked, while all the hidden units of the sparse autoencoder (b) learn the input representation almost equally, and we see a more even color distribution:

Figure 4: Encoder weight matrix for (a) Standard Autoencoder and (b) Sparse Autoencoder
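A plot like the one above can be produced directly from the encoder's weight matrix. The following is a minimal sketch, assuming a Keras Dense encoder layer; the layer here is untrained, and its sizes (784 inputs, 32 hidden units) are illustrative stand-ins, not values from the text:

import matplotlib.pyplot as plt
import tensorflow as tf

# Illustrative stand-in for a trained encoder: a Dense layer mapping
# 784 inputs (e.g., flattened 28x28 images) to 32 hidden units.
encoder = tf.keras.layers.Dense(32, activation='sigmoid')
encoder.build((None, 784))

# get_weights() returns [kernel, bias]; the kernel has shape (784, 32).
weights = encoder.get_weights()[0]

# One row per hidden unit; brighter pixels correspond to larger weights.
plt.imshow(weights.T, cmap='gray', aspect='auto')
plt.xlabel('Input unit')
plt.ylabel('Hidden unit')
plt.show()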

Denoising autoencoders

The two autoencoders that we have covered in the previous sections are examples of undercomplete autoencoders, because their hidden layer has a lower dimensionality than the input (and output) layer. Denoising autoencoders belong to the class of overcomplete autoencoders, because they work better when the hidden layer has a higher dimensionality than the input layer.
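The distinction is purely architectural, as the following sketch shows (the layer sizes are assumed values chosen for illustration):

import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))  # m = 784 input units

# Undercomplete: the bottleneck is smaller than the input (n_hidden < m).
undercomplete_code = tf.keras.layers.Dense(128, activation='relu')(inputs)

# Overcomplete: the hidden layer is larger than the input (n_hidden > m),
# as in a denoising autoencoder.
overcomplete_code = tf.keras.layers.Dense(1024, activation='relu')(inputs)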

A denoising autoencoder learns from a corrupted (noisy) input; it feeds its encoder network the noisy input, and the reconstructed image from the decoder is compared with the original (noise-free) input. The idea is that this helps the network learn how to denoise an input. The network no longer makes purely pixel-wise comparisons; in order to denoise, it must learn the information of neighboring pixels as well.

A denoising autoencoder has two main differences from other autoencoders: first, n_hidden, the number of hidden units in the bottleneck layer, is greater than the number of units in the input layer, m, that is, n_hidden > m. Second, the input to the encoder is a corrupted version of the input. To achieve this, we add a noise term to both the test and training images:
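A minimal sketch of this corruption step, assuming MNIST-style images scaled to the [0, 1] range; the noise_factor value of 0.5 is an assumed choice, not taken from the text:

import numpy as np
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1] (assumed setup).
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0

noise_factor = 0.5  # assumed corruption strength

# Add zero-mean Gaussian noise to both the training and test images.
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(size=x_test.shape)

# Clip so the corrupted pixel values stay in the valid [0, 1] range.
x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)
x_test_noisy = np.clip(x_test_noisy, 0.0, 1.0)

The autoencoder is then trained with the noisy images as input and the clean images as reconstruction targets (for a hypothetical compiled model, autoencoder.fit(x_train_noisy, x_train, ...)), so the loss is always measured against the uncorrupted originals.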

