Chapter 1
Here's how the code is written in TensorFlow 2.0 to achieve the same results:
import tensorflow as tf
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")
@tf.function
def model(x):
    return W * x + b
out_a = model([1,0])
print(out_a)
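If you run this snippet, the printed result should look roughly like the following (the exact formatting can vary slightly across TensorFlow 2.x releases): since W is a 2×2 matrix of ones and b is a zero vector, the input [1,0] is simply broadcast across the rows of W:
tf.Tensor(
[[1. 0.]
 [1. 0.]], shape=(2, 2), dtype=float32)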
In this case, we have eight lines in total and the code looks cleaner and nicer. Indeed,
the key idea of TensorFlow 2.0 is to make TensorFlow easier to learn and to apply.
If you have started with TensorFlow 2.0 and have never seen TensorFlow 1.x, then
you are lucky. If you are already familiar with 1.x, then it is important to understand
the differences and you need to be ready to rewrite your code with some help from
automatic tools for migration, as discussed in Chapter 2, TensorFlow 1.x and 2.x. Before
that, let's start by introducing neural networks, one of the most powerful learning
paradigms supported by TensorFlow.
Introduction to neural networks
Artificial neural networks (briefly, "nets" or ANNs) represent a class of machine
learning models loosely inspired by studies about the central nervous systems of
mammals. Each ANN is made up of several interconnected "neurons," organized in
"layers." Neurons in one layer pass messages to neurons in the next layer (they "fire,"
in jargon terms) and this is how the network computes things. Initial studies were
started in the early 50's with the introduction of the "perceptron" [1], a two-layer
network used for simple operations, and further expanded in the late 60's with the
introduction of the "back-propagation" algorithm used for efficient multi-layer
network training (according to [2], [3]).
Some studies argue that these techniques have roots dating further back than normally cited [4].
Neural networks were a topic of intensive academic studies up until the 80's, at
which point other, simpler approaches became more relevant. However, there
has been a resurgence of interest starting in the mid 2000's, mainly thanks to three
factors: a breakthrough fast learning algorithm proposed by G. Hinton [3], [5], [6];
the introduction of GPUs around 2011 for massive numeric computation; and the
availability of big collections of data for training.
These improvements opened the route for modern "deep learning," a class of neural networks characterized by a significant number of layers of neurons that are able to learn rather sophisticated models based on progressive levels of abstraction. People began referring to it as "deep" when it started utilizing 3-5 layers a few years ago. Now, networks with more than 200 layers are commonplace!
This learning via progressive abstraction resembles vision models that have evolved over millions of years within the human brain. Indeed, the human visual system is organized into different layers. First, our eyes are connected to an area of the brain named the visual cortex (V1), which is located in the lower posterior part of our brain. This area is common to many mammals and has the role of discriminating basic properties like small changes in visual orientation, spatial frequencies, and colors.
It has been estimated that V1 consists of about 140 million neurons, with tens of billions of connections between them. V1 is then connected to other areas (V2, V3, V4, V5, and V6) doing progressively more complex image processing and recognizing more sophisticated concepts, such as shapes, faces, animals, and many more. It has been estimated that there are ~16 billion human cortical neurons and about 10-25% of the human cortex is devoted to vision [7]. Deep learning has taken some inspiration from this layer-based organization of the human visual system: early artificial neuron layers learn basic properties of images while deeper layers learn more sophisticated concepts.
This book covers several major aspects of neural networks by providing working nets in TensorFlow 2.0. So, let's start!
Perceptron
The "perceptron" is a simple algorithm that, given an input vector x of m values ($x_1, x_2, ..., x_m$), often called input features or simply features, outputs either a 1 ("yes") or a 0 ("no"). Mathematically, we define a function:
$$f(x) = \begin{cases} 1 & \text{if } wx + b > 0 \\ 0 & \text{otherwise} \end{cases}$$
where w is a vector of weights, wx is the dot product $\sum_{j=1}^{m} w_j x_j$, and b is the bias. If you remember elementary geometry, wx + b defines a boundary hyperplane that changes position according to the values assigned to w and b.
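To make the definition concrete, here is a minimal, illustrative sketch of this decision rule in plain NumPy (the helper name perceptron_output and the particular weights, which happen to implement a logical AND, are arbitrary choices for the example):
import numpy as np

def perceptron_output(x, w, b):
    # Fire (output 1) only when the weighted sum plus bias is positive
    return 1 if np.dot(w, x) + b > 0 else 0

# Weights and bias chosen so that only the input (1, 1) fires
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron_output(np.array(x), w, b))
Only the input (1, 1) gives a positive weighted sum with these values, so only that case produces a 1; changing w and b moves the separating hyperplane and therefore changes which inputs fire.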