
Figure 25: Accuracy for different models and optimizers

Controlling the optimizer learning rate

There is another approach we can take that involves changing the learning rate of our optimizer. As you can see in Figure 26, the best value reached by our three experiments [lr=0.1, lr=0.01, lr=0.001] is 0.1. Note that this is larger than Adam's default learning rate of 0.001 in tf.keras, so it can pay to tune this hyperparameter explicitly rather than relying on the default:

Figure 26: Accuracy for different learning rates
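As a minimal sketch of this experiment (the layer sizes, epochs, and batch size are illustrative assumptions, not the chapter's exact script, and X_train/Y_train are assumed to be the flattened, one-hot-encoded MNIST data used earlier in the chapter), the learning rate can be passed explicitly to the optimizer at compile time:

import tensorflow as tf

# X_train and Y_train are assumed to come from the chapter's MNIST example.
for lr in [0.1, 0.01, 0.001]:
    # Rebuild the model each run so every learning rate starts from fresh weights.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(128, input_shape=(784,), activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(X_train, Y_train, batch_size=128, epochs=20,
              validation_split=0.2)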

Increasing the number of internal hidden neurons

Yet another approach involves changing the number of internal hidden neurons. We report the results of experiments with an increasing number of hidden neurons. We see that increasing the complexity of the model significantly increases the runtime, because there are more and more parameters to optimize. However, the gains we get by increasing the size of the network diminish as the network grows (see Figures 27, 28, and 29). Note that increasing the number of hidden neurons beyond a certain value can reduce accuracy, because the network may lose its ability to generalize (as shown in Figure 29):

Figure 27: Number of parameters for increasing values of internal hidden neurons

Figure 28: Seconds of computation time for increasing values of internal hidden neurons
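A sketch of this experiment follows; again the hidden-layer sizes and training settings are illustrative assumptions, and X_train/Y_train are assumed to be the chapter's MNIST data. The parameter count in Figure 27 can be read off directly with model.count_params(), and the runtime in Figure 28 measured with a simple timer:

import time
import tensorflow as tf

# X_train and Y_train are assumed to come from the chapter's MNIST example.
for n_hidden in [64, 128, 256, 512, 1024]:
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(n_hidden, input_shape=(784,), activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    start = time.time()
    model.fit(X_train, Y_train, batch_size=128, epochs=20,
              validation_split=0.2, verbose=0)
    print(f'{n_hidden} hidden neurons: {model.count_params():,} parameters, '
          f'{time.time() - start:.1f} seconds')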

