

$p(h_i \mid v_0) = \sigma(V^T W + c)_i$

Backward pass: The hidden unit representation ($h_0$) is then passed back to the visible units through the same weights, $W$, but a different bias, $b$, where the visible units reconstruct the input. Again, the input is sampled:

$p(v_i \mid h_0) = \sigma(W h_0 + b)_i$

These two passes are repeated for $k$ steps, or until convergence is reached [4]. Researchers have found that $k = 1$ gives good results, so we will set $k = 1$.
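As a minimal sketch of one such Gibbs sampling step, assuming hypothetical tensors v, W, b, and c (a batch of visible vectors, the weight matrix, and the visible and hidden biases respectively), the two passes could be written in TensorFlow as follows:

def gibbs_step(v, W, b, c):
    # Forward pass: hidden probabilities, then a stochastic binary sample
    p_h = tf.sigmoid(tf.matmul(v, W) + c)
    h = tf.cast(p_h > tf.random.uniform(tf.shape(p_h)), tf.float32)
    # Backward pass: reconstruct the visible units through the same
    # weights W, but with the visible bias b
    p_v = tf.sigmoid(tf.matmul(h, tf.transpose(W)) + b)
    v_recon = tf.cast(p_v > tf.random.uniform(tf.shape(p_v)), tf.float32)
    return h, v_recon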

The joint configuration of the visible vector $V$ and the hidden vector $h$ has an energy given as follows:

$E(V, h) = -b^T V - c^T h - V^T W h$

Also associated with each visible vector $V$ is the free energy: the energy that a single configuration would need to have in order to have the same probability as all of the configurations that contain $V$:

$F(V) = -b^T V - \sum_{j \in \text{hidden}} \log\left(1 + \exp\left(c_j + (V^T W)_j\right)\right)$
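As an illustrative sketch (again using the hypothetical v, W, b, and c tensors from above), the free energy of a batch of visible vectors can be computed with TensorFlow's numerically stable softplus, since $\log(1 + \exp(x)) = \text{softplus}(x)$:

def free_energy(v, W, b, c):
    # -b^T v term, computed per sample in the batch
    vb = tf.reduce_sum(v * b, axis=1)
    # Sum over hidden units of log(1 + exp(c_j + (v^T W)_j))
    hidden_term = tf.reduce_sum(tf.math.softplus(tf.matmul(v, W) + c), axis=1)
    return -vb - hidden_term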

Using the Contrastive Divergence objective function, that is, $\text{Mean}(F(V_{\text{original}})) - \text{Mean}(F(V_{\text{reconstructed}}))$, the change in weights is given by:

$dW = \eta\left[(V^T h)_{\text{input}} - (V^T h)_{\text{reconstructed}}\right]$

Here, $\eta$ is the learning rate. Similar expressions exist for the biases $b$ and $c$.
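A minimal sketch of this CD-1 update, building on the hypothetical gibbs_step helper above (the learning rate eta and all tensor names are assumptions for illustration, not the book's final implementation):

def cd1_update(v, W, b, c, eta=0.01):
    # One Gibbs step: sample the hidden units, then reconstruct the input
    h, v_recon = gibbs_step(v, W, b, c)
    # Hidden probabilities for the reconstructed visible units
    h_recon = tf.sigmoid(tf.matmul(v_recon, W) + c)
    # dW = eta * [(V^T h)_input - (V^T h)_reconstructed]
    dW = eta * (tf.matmul(tf.transpose(v), h) -
                tf.matmul(tf.transpose(v_recon), h_recon))
    # Analogous updates for the visible bias b and the hidden bias c
    db = eta * tf.reduce_sum(v - v_recon, axis=0)
    dc = eta * tf.reduce_sum(h - h_recon, axis=0)
    return W + dW, b + db, c + dc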

Reconstructing images using an RBM

Let us build an RBM in TensorFlow 2.0. The RBM will be designed to reconstruct handwritten digits, just as the autoencoders did in Chapter 9, Autoencoders. We import the TensorFlow, NumPy, and Matplotlib libraries:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

