TensorFlow 1.x and 2.x

An example to start with

We'll consider a simple example of adding two vectors. The graph we want to
build is:

[Figure: the computational graph, with the two vector nodes feeding the add
operation]

The corresponding code to define the computational graph is:

    v_1 = tf.constant([1, 2, 3, 4])
    v_2 = tf.constant([2, 1, 5, 3])
    v_add = tf.add(v_1, v_2)  # You can also write v_1 + v_2 instead

Next, we execute the graph in the session:

    with tf.Session() as sess:
        print(sess.run(v_add))

or

    sess = tf.Session()
    print(sess.run(v_add))
    sess.close()

This results in printing the sum of the two vectors:

    [3 3 8 7]

Remember, a session created without a with block needs to be explicitly
closed using close(); the with block closes it automatically on exit.

Building a computational graph is very simple – you go on adding variables
and operations and passing tensors through them (flowing the tensors). In
this way you build your neural network layer by layer. TensorFlow also
allows you to assign specific devices (CPU/GPU) to different objects of the
computational graph using tf.device(). In our example, the computational
graph consists of three nodes: v_1 and v_2, representing the two vectors,
and v_add, the operation to be performed on them. Now, to bring this graph
to life we first need to define a session object using tf.Session(). We
named our session object sess. Next, we run it using the run method defined
in the Session class:

    run(fetches, feed_dict=None, options=None, run_metadata=None)
This evaluates the tensor in the fetches parameter. Our example has the
tensor v_add in fetches. The run method will execute every tensor and every
operation in the graph that leads to v_add. If instead of v_add you have v_1
in fetches, the result will be the value of the vector v_1:

    [1 2 3 4]
fetches can be a single tensor or operation object, or a list of them.
For example, if fetches contains [v_1, v_2, v_add], the output is:
[array([1, 2, 3, 4]), array([2, 1, 5, 3]), array([3, 3, 8, 7])]
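The fetch behavior described above can be sketched as a runnable example. This is written against the 1.x-style graph API, which TensorFlow 2.x still exposes as tensorflow.compat.v1; under TensorFlow 1.x itself you would simply import tensorflow as tf:

```python
import tensorflow.compat.v1 as tf  # 1.x-style API under TensorFlow 2.x

tf.disable_eager_execution()  # graph mode, as in the 1.x examples above

v_1 = tf.constant([1, 2, 3, 4])
v_2 = tf.constant([2, 1, 5, 3])
v_add = tf.add(v_1, v_2)

with tf.Session() as sess:
    # A single tensor in fetches returns a single array.
    print(sess.run(v_1))                # [1 2 3 4]
    # A list in fetches returns a list of arrays, one per fetched tensor.
    print(sess.run([v_1, v_2, v_add]))  # three arrays: [1 2 3 4], [2 1 5 3], [3 3 8 7]
```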
We can have many session objects within the same program code. In this section, we
have seen an example of TensorFlow 1.x computational graph program structure.
The next section will give more insights into TensorFlow 1.x programming
constructs.
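One construct mentioned above, tf.device(), can be sketched as follows. "/cpu:0" always exists; on a machine with a GPU you would use "/gpu:0" instead (again using the 1.x-style API, reachable as tensorflow.compat.v1 under TensorFlow 2.x):

```python
import tensorflow.compat.v1 as tf  # 1.x-style API under TensorFlow 2.x

tf.disable_eager_execution()

# Pin the graph nodes for this example to a specific device.
with tf.device("/cpu:0"):
    v_1 = tf.constant([1, 2, 3, 4])
    v_2 = tf.constant([2, 1, 5, 3])
    v_add = tf.add(v_1, v_2)

with tf.Session() as sess:
    print(sess.run(v_add))  # [3 3 8 7]
```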
Working with constants, variables, and placeholders
TensorFlow, in simplest terms, provides a library to define and perform different
mathematical operations with tensors. A tensor is basically an n-dimensional array.
All types of data – that is, scalars, vectors, and matrices – are special types of tensors:
Types of Data    Tensor        Shape
Scalar           0-D Tensor    []
Vector           1-D Tensor    [D0]
Matrix           2-D Tensor    [D0, D1]
Tensors          N-D Tensor    [D0, D1, ..., Dn-1]
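The shapes in the table can be checked directly: tf.constant infers the rank from the nesting of the Python value it is given (here D0 = 3 for the vector and D0 = D1 = 2 for the matrix):

```python
import tensorflow as tf

scalar = tf.constant(3)                 # 0-D tensor, shape []
vector = tf.constant([1, 2, 3])         # 1-D tensor, shape [3]
matrix = tf.constant([[1, 2], [3, 4]])  # 2-D tensor, shape [2, 2]

print(scalar.shape.as_list())  # []
print(vector.shape.as_list())  # [3]
print(matrix.shape.as_list())  # [2, 2]
```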
TensorFlow supports three types of tensors:
1. Constants: Constants are tensors whose values cannot be changed.
2. Variables: We use variable tensors when values require updating within a
session. For example, in the case of neural networks, the weights need to be
updated during training; this is achieved by declaring the weights as
variables. Variables need to be explicitly initialized before use. Another
important thing to note is that constants are stored in the computational
graph definition and are loaded every time the graph is loaded, so they are
memory-intensive. Variables, on the other hand, are stored separately and
can exist on parameter servers.
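A minimal sketch contrasting the two tensor types described above (1.x-style API, available as tensorflow.compat.v1 under TensorFlow 2.x; the variable name and values are illustrative only):

```python
import tensorflow.compat.v1 as tf  # 1.x-style API under TensorFlow 2.x

tf.disable_eager_execution()

bias = tf.constant([0.5, 0.5])                  # fixed value, baked into the graph definition
weights = tf.Variable(tf.zeros([2]), name="w")  # updatable value, stored separately
update = weights.assign_add([1.0, 2.0])         # an op that changes the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be explicitly initialized
    print(sess.run(weights))  # [0. 0.]
    sess.run(update)
    print(sess.run(weights))  # [1. 2.]
    print(sess.run(bias))     # [0.5 0.5]
```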