
Autoencoders

def train_loop(model, opt, loss, dataset, epochs):
    for epoch in range(epochs):
        epoch_loss = 0
        for step, batch_features in enumerate(dataset):
            loss_values = train(loss, model, opt, batch_features)
            epoch_loss += loss_values
        model.loss.append(epoch_loss)
        print('Epoch {}/{}. Loss: {}'.format(epoch + 1, epochs, epoch_loss.numpy()))

Let us now train our autoencoder:

train_loop(autoencoder, opt, loss, training_dataset, epochs=max_epochs)
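To make the mechanics of this loop concrete, here is a minimal pure-NumPy analogue (not the book's Keras model): a hypothetical linear autoencoder x_hat = x @ W @ V, with a stand-in `train` function that takes one gradient-descent step and returns the batch reconstruction MSE, driven by the same epoch/mini-batch structure as above:

```python
import numpy as np

def train(model, lr, batch):
    """One gradient-descent step; returns the batch reconstruction MSE."""
    W, V = model['W'], model['V']
    x_hat = batch @ W @ V                  # encode, then decode
    err = x_hat - batch
    loss = np.mean(err ** 2)
    n = batch.size
    # gradients of the mean squared error w.r.t. W and V
    model['W'] = W - lr * (2.0 / n) * batch.T @ err @ V.T
    model['V'] = V - lr * (2.0 / n) * W.T @ batch.T @ err
    return loss

rng = np.random.default_rng(42)
data = rng.normal(size=(64, 8))            # 64 samples, 8 features
model = {'W': 0.5 * rng.normal(size=(8, 4)),   # encoder: 8 -> 4
         'V': 0.5 * rng.normal(size=(4, 8))}   # decoder: 4 -> 8

losses = []
for epoch in range(50):
    epoch_loss = 0.0
    for start in range(0, len(data), 16):  # mini-batches of 16
        epoch_loss += train(model, 0.05, data[start:start + 16])
    losses.append(epoch_loss)

print(f'first epoch loss {losses[0]:.3f}, last epoch loss {losses[-1]:.3f}')
```

The per-epoch loss is accumulated across mini-batches and appended to a list, exactly as `train_loop` appends to `model.loss` for plotting later.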

The training graph is shown as follows. We can see that the loss/cost decreases as the network learns, and after about 50 epochs it is almost constant, flattening into a line. This means that further increasing the number of epochs will not be useful. If we want to improve the training further, we should change hyperparameters such as the learning rate and batch_size:
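The effect of the learning rate can be seen even on a toy problem. The following sketch (a hypothetical illustration, not the book's code) minimizes the one-dimensional quadratic f(w) = (w - 3)^2 with two learning rates; the larger, still-stable rate reaches the plateau in far fewer epochs:

```python
def descend(lr, epochs, w=0.0):
    """Plain gradient descent on f(w) = (w - 3)^2; returns the loss per epoch."""
    losses = []
    for _ in range(epochs):
        grad = 2 * (w - 3)      # f'(w)
        w -= lr * grad
        losses.append((w - 3) ** 2)
    return losses

slow = descend(lr=0.01, epochs=50)
fast = descend(lr=0.20, epochs=50)
print(f'loss after 50 epochs: lr=0.01 -> {slow[-1]:.4f}, lr=0.20 -> {fast[-1]:.6f}')
```

With lr=0.01 the loss is still visibly decreasing after 50 epochs, while with lr=0.20 it has long since flattened; too large a rate, of course, would overshoot and diverge instead.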

plt.plot(range(max_epochs), autoencoder.loss)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()

In the following figure, you can see the original (top) and reconstructed (bottom)

images; they are slightly blurred, but accurate:

number = 10  # how many digits we will display
plt.figure(figsize=(20, 4))
for index in range(number):
    # display original
    ax = plt.subplot(2, number, index + 1)
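"Slightly blurred, but accurate" can also be quantified. A minimal sketch, with hypothetical `originals`/`reconstructions` arrays standing in for the test images and the autoencoder's output, computes the per-image reconstruction MSE:

```python
import numpy as np

# Hypothetical stand-ins for a batch of 28x28 images and their slightly
# noisy reconstructions; in the book's setting these would be the test
# images and the autoencoder's output on them.
rng = np.random.default_rng(0)
originals = rng.uniform(size=(10, 28, 28))
reconstructions = originals + 0.05 * rng.normal(size=originals.shape)

# mean squared error per image, averaged over all pixels
mse = np.mean((originals - reconstructions) ** 2, axis=(1, 2))
print('per-image MSE:', np.round(mse, 4))
print('average MSE:', float(np.mean(mse)))
```

A small, roughly uniform MSE across the batch matches the visual impression: the reconstructions lose some high-frequency detail but preserve the digits.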

