



Here's how the code is written in TensorFlow 2.0 to achieve the same results:

import tensorflow as tf

W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")

@tf.function
def model(x):
    return W * x + b

out_a = model([1,0])
print(out_a)
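For reference, running the snippet prints something close to the following (the exact formatting can vary slightly between TensorFlow versions). Since W is a 2x2 matrix of ones and b is a vector of zeros, W * x simply broadcasts the input [1, 0] across both rows:

tf.Tensor(
[[1. 0.]
 [1. 0.]], shape=(2, 2), dtype=float32)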

In this case, we have eight lines in total and the code looks cleaner and more readable. Indeed, the key idea of TensorFlow 2.0 is to make TensorFlow easier to learn and to apply. If you have started with TensorFlow 2.0 and have never seen TensorFlow 1.x, then you are lucky. If you are already familiar with 1.x, then it is important to understand the differences, and you need to be ready to rewrite your code with some help from automatic migration tools, as discussed in Chapter 2, TensorFlow 1.x and 2.x; a minimal example of one such compatibility path is sketched below.
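As a rough sketch of that compatibility path (assuming you simply want existing 1.x-style code to keep running under TensorFlow 2.x while you migrate piece by piece; TensorFlow 2.x also ships an automatic conversion script, tf_upgrade_v2):

import tensorflow.compat.v1 as tf  # the 1.x-compatible API bundled with TensorFlow 2.x
tf.disable_v2_behavior()           # restore 1.x graph-and-session semantics

# Legacy 1.x-style code can then keep working unchanged, for example:
x = tf.placeholder(tf.float32, shape=(2,))
y = x * 2.0
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 3.0]}))  # [2. 6.]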

Before that, let's start by introducing neural networks, one of the most powerful learning paradigms supported by TensorFlow.

Introduction to neural networks

Artificial neural networks (briefly, "nets" or ANNs) represent a class of machine learning models loosely inspired by studies of the central nervous systems of mammals. Each ANN is made up of several interconnected "neurons," organized in "layers." Neurons in one layer pass messages to neurons in the next layer (they "fire," in jargon terms), and this is how the network computes things. Initial studies began in the early 1950s with the introduction of the "perceptron" [1], a two-layer network used for simple operations, and were further expanded in the late 1960s with the introduction of the "back-propagation" algorithm used for efficient multi-layer network training (according to [2], [3]).

Some studies argue that these techniques have roots dating further back than normally cited [4].
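To make the idea of neurons organized into layers a bit more concrete, here is a minimal sketch using the Keras API bundled with TensorFlow 2.0 (the layer sizes and activations are arbitrary, chosen purely for illustration):

import tensorflow as tf

# A toy network: 4 input features -> 8 hidden neurons -> 1 output neuron.
# Each Dense layer is a "layer" of neurons passing messages to the next one.
net = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid")
])
net.summary()  # prints the layer structure and parameter counts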

Neural networks were a topic of intensive academic study until the 1980s, at which point other, simpler approaches became more relevant. However, there has been a resurgence of interest starting in the mid-2000s, mainly thanks to three factors: a breakthrough fast learning algorithm proposed by G. Hinton [3], [5], [6]; the adoption of GPUs for massive numeric computation, starting around 2011; and the availability of large collections of data for training.

These improvements opened the route for modern "deep learning," a class of neural networks characterized by a significant number of layers of neurons that are able to learn rather sophisticated models based on progressive levels of abstraction. A few years ago, networks were already called "deep" when they used 3-5 layers; now, networks with more than 200 layers are commonplace!

This learning via progressive abstraction resembles vision models that have evolved over millions of years within the human brain. Indeed, the human visual system is organized into different layers. First, our eyes are connected to an area of the brain named the visual cortex (V1), which is located in the lower posterior part of our brain. This area is common to many mammals and has the role of discriminating basic properties like small changes in visual orientation, spatial frequencies, and colors.

It has been estimated that V1 consists of about 140 million neurons, with tens of billions of connections between them. V1 is then connected to other areas (V2, V3, V4, V5, and V6) that do progressively more complex image processing and recognize more sophisticated concepts, such as shapes, faces, animals, and many more. It has been estimated that there are ~16 billion human cortical neurons, and about 10-25% of the human cortex is devoted to vision [7]. Deep learning has taken some inspiration from this layer-based organization of the human visual system: early artificial neuron layers learn basic properties of images, while deeper layers learn more sophisticated concepts.

This book covers several major aspects of neural networks by providing working nets in TensorFlow 2.0. So, let's start!

Perceptron

The "perceptron" is a simple algorithm that, given an input vector x of m values (x_1, x_2, ..., x_m), often called input features or simply features, outputs either a 1 ("yes") or a 0 ("no"). Mathematically, we define a function:

f(x) = \begin{cases} 1 & \text{if } wx + b > 0 \\ 0 & \text{otherwise} \end{cases}

where w is a vector of weights, wx is the dot product \sum_{j=1}^{m} w_j x_j, and b is the bias. If you remember elementary geometry, wx + b defines a boundary hyperplane that changes position according to the values assigned to w and b.
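This decision rule is easy to express in a few lines of code. The following is a minimal sketch (plain NumPy rather than TensorFlow, with arbitrary example weights chosen purely for illustration; it shows only the forward computation, not how the weights are learned):

import numpy as np

def perceptron(x, w, b):
    # Fire (output 1) only if the weighted sum plus bias exceeds the threshold 0.
    return 1 if np.dot(w, x) + b > 0 else 0

# Hypothetical weights and bias for a 3-feature input.
w = np.array([0.5, -0.2, 0.1])
b = -0.05
print(perceptron(np.array([1.0, 0.0, 1.0]), w, b))  # 0.5 + 0.1 - 0.05 > 0, so this prints 1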
