Unsupervised Learning

Let us try stacking our RBM class. To build the DBN we need to define one more method in the RBM class: the output of one RBM's hidden layer must be fed as the input to the next RBM:

#Create expected output for our DBN
def rbm_output(self, X):
    out = tf.nn.sigmoid(tf.matmul(X, self.w) + self.hb)
    return out
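
The rbm_output() method relies on the weight matrix self.w and the hidden bias self.hb that the RBM class learns during training. For readers without the earlier code on hand, here is a minimal sketch of what the rest of such a class can look like, with rbm_output() above as its final method. This is a simplified CD-1 trainer with illustrative hyperparameter defaults, not necessarily the chapter's exact implementation:

import tensorflow as tf

class RBM:
    def __init__(self, input_size, output_size, lr=1.0, batch_size=100, epochs=5):
        #Zero-initialized parameters: w is the visible-to-hidden weight
        #matrix, hb and vb are the hidden and visible biases
        self.w = tf.zeros([input_size, output_size], tf.float32)
        self.hb = tf.zeros([output_size], tf.float32)
        self.vb = tf.zeros([input_size], tf.float32)
        self.lr = lr
        self.batch_size = batch_size
        self.epochs = epochs

    def prob_h_given_v(self, v):
        #P(h=1 | v)
        return tf.nn.sigmoid(tf.matmul(v, self.w) + self.hb)

    def prob_v_given_h(self, h):
        #P(v=1 | h)
        return tf.nn.sigmoid(tf.matmul(h, tf.transpose(self.w)) + self.vb)

    def sample(self, probs):
        #Draw binary samples from Bernoulli probabilities
        return tf.nn.relu(tf.sign(probs - tf.random.uniform(tf.shape(probs))))

    def train(self, X):
        #Single-step contrastive divergence (CD-1) over mini-batches
        for _ in range(self.epochs):
            for start in range(0, len(X), self.batch_size):
                batch = X[start:start + self.batch_size]
                n = tf.cast(tf.shape(batch)[0], tf.float32)
                h0 = self.sample(self.prob_h_given_v(batch))
                v1 = self.sample(self.prob_v_given_h(h0))
                h1 = self.prob_h_given_v(v1)
                #Positive phase minus negative phase statistics
                self.w += self.lr * (tf.matmul(tf.transpose(batch), h0) -
                                     tf.matmul(tf.transpose(v1), h1)) / n
                self.vb += self.lr * tf.reduce_mean(batch - v1, 0)
                self.hb += self.lr * tf.reduce_mean(h0 - h1, 0)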

Now we can just use the RBM class to create a stacked RBM structure. In the following code we create a stack of three RBMs: the first RBM has 500 hidden units, the second 200 hidden units, and the third 50 hidden units:

RBM_hidden_sizes = [500, 200, 50] #create 3 layers of RBM with sizes 500, 200 and 50

#Since we are training, set input as training data
inpX = train_data

#Create list to hold our RBMs
rbm_list = []

#Size of inputs is the number of features in each training example
input_size = train_data.shape[1]

#For each RBM we want to generate
for i, size in enumerate(RBM_hidden_sizes):
    print('RBM: ', i, ' ', input_size, '->', size)
    rbm_list.append(RBM(input_size, size))
    input_size = size

---------------------------------------------------------------------
RBM: 0 784 -> 500
RBM: 1 500 -> 200
RBM: 2 200 -> 50

For the first RBM, the MNIST data (784 features per flattened 28x28 image) is the input. The output of the first RBM is then fed as the input to the second RBM, and so on through the consecutive RBM layers:

#For each RBM in our list
for rbm in rbm_list:
    print('New RBM:')
    #Train a new one
    rbm.train(tf.cast(inpX, tf.float32))
    #Return the output layer and use it as input to the next RBM
    inpX = rbm.rbm_output(tf.cast(inpX, tf.float32))
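
Once the loop completes, inpX holds the 50-dimensional representation of the training data, and each RBM in rbm_list has been trained greedily, layer by layer. To encode new data through the trained stack you can simply chain the rbm_output() calls. A small sketch follows; the helper name dbn_features and the test_data variable are our own, not from the chapter:

#Hypothetical helper (not from the chapter): pass data through the
#trained RBM stack to obtain the top-level 50-dimensional features
def dbn_features(X, rbm_list):
    out = tf.cast(X, tf.float32)
    for rbm in rbm_list:
        out = rbm.rbm_output(out)   #784 -> 500 -> 200 -> 50
    return out

test_features = dbn_features(test_data, rbm_list)
print(test_features.shape)   #(number of test examples, 50)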
