
Neural Network Foundations with TensorFlow 2.0

Each neuron can be initialized with specific weights via the kernel_initializer parameter. There are a few choices, the most common of which are listed as follows (a short sketch appears after the list):

• random_uniform: Weights are initialized to uniformly random small values in the range -0.05 to 0.05.

• random_normal: Weights are initialized according to a Gaussian distribution with zero mean and a small standard deviation of 0.05. For those of you who are not familiar with the Gaussian distribution, think of a symmetric "bell curve" shape.

• zeros: All weights are initialized to zero.

A full list is available online at https://www.tensorflow.org/api_docs/python/tf/keras/initializers.
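
As a quick sketch (the layer width of 64 is an illustrative assumption, not a value from the text), an initializer can be passed to a Dense layer either as a string alias or as an initializer object:

import tensorflow as tf

# Equivalent ways to choose the weight initializer for a Dense layer.
# Passing a string alias uses the initializer's default arguments.
layer_uniform = tf.keras.layers.Dense(64, kernel_initializer='random_uniform')

# Passing an object lets you set the arguments explicitly.
layer_normal = tf.keras.layers.Dense(
    64,
    kernel_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.05))

layer_zeros = tf.keras.layers.Dense(64, kernel_initializer='zeros')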

Multi-layer perceptron – our first example of a network

In this chapter, we present our first example of a network with multiple dense layers. Historically, "perceptron" was the name given to a model with a single linear layer; as a consequence, a model with multiple layers is called a multi-layer perceptron (MLP). Note that the input and output layers are visible from outside, while all the other layers in the middle are hidden – hence the name hidden layers. In this context, a single layer is simply a linear function, and the MLP is therefore obtained by stacking multiple single layers one after the other:

Figure 4: An example of a multi-layer perceptron
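
As a hedged sketch of this stacking (the input size of 784, the hidden widths of 128, and the 10-class output are assumptions for illustration, not the figure's exact dimensions), such an MLP can be built with tf.keras:

import tensorflow as tf

# Two hidden layers plus an output layer; the nonlinear activations between
# the linear layers are what make the stack more expressive than a single
# linear layer.
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(784,)),                     # visible input layer
    tf.keras.layers.Dense(128, activation='relu'),    # hidden layer 1
    tf.keras.layers.Dense(128, activation='relu'),    # hidden layer 2
    tf.keras.layers.Dense(10, activation='softmax'),  # visible output layer
])
model.summary()  # prints the layer stack and parameter counts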

