
An example to start with

We'll consider a simple example of adding two vectors. The graph we want to build is:

[Figure: the computational graph – two input nodes, v_1 and v_2, feeding into the v_add operation]

The corresponding code to define the computational graph is:

v_1 = tf.constant([1,2,3,4])
v_2 = tf.constant([2,1,5,3])
v_add = tf.add(v_1,v_2)  # You can also write v_1 + v_2 instead

Next, we execute the graph in the session:

with tf.Session() as sess:
    print(sess.run(v_add))

or

sess = tf.Session()
print(sess.run(v_add))
sess.close()

This results in printing the sum of the two vectors:

[3 3 8 7]

Remember, each session needs to be explicitly closed using close().

Building a computational graph is very simple – you go on adding variables and operations, passing the tensors through them (letting the tensors flow). In this way you build your neural network layer by layer. TensorFlow also allows you to use specific devices (CPU/GPU) with different objects of the computational graph using tf.device(). In our example, the computational graph consists of three nodes: v_1 and v_2, representing the two vectors, and v_add, the operation to be performed on them. Now, to bring this graph to life, we first need to define a session object using tf.Session(). We named our session object sess. Next, we run it using the run method defined in the Session class:

run(fetches, feed_dict=None, options=None, run_metadata=None)
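Since tf.device() is mentioned above but not shown, here is a minimal sketch that pins the three nodes of our example to the CPU; the device string "/cpu:0" is a standard TensorFlow 1.x identifier, and you could swap in "/gpu:0" if a GPU is available:

# Pin the graph nodes from the example to a specific device.
with tf.device("/cpu:0"):
    v_1 = tf.constant([1,2,3,4])
    v_2 = tf.constant([2,1,5,3])
    v_add = tf.add(v_1, v_2)

with tf.Session() as sess:
    print(sess.run(v_add))  # [3 3 8 7]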


This evaluates the tensor in the fetches parameter. Our example has tensor v_add in fetches. The run method will execute every tensor and every operation in the graph that leads to v_add. If, instead of v_add, you have v_1 in fetches, the result will be the value of vector v_1:

[1 2 3 4]

fetches can be a single tensor or operation object, or it can be more than one. For example, if fetches contains [v_1, v_2, v_add], the output is:

[array([1, 2, 3, 4]), array([2, 1, 5, 3]), array([3, 3, 8, 7])]
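As a quick sketch, the same output can be reproduced by passing the list of fetches directly to run() inside a session:

with tf.Session() as sess:
    # Fetching several objects in one run call returns a list of
    # NumPy arrays in the same order as the fetches list.
    print(sess.run([v_1, v_2, v_add]))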

We can have many session objects within the same program code. In this section, we have seen an example of the TensorFlow 1.x computational graph program structure. The next section will give more insight into TensorFlow 1.x programming constructs.

Working with constants, variables, and placeholders

TensorFlow, in the simplest terms, provides a library to define and perform different mathematical operations with tensors. A tensor is basically an n-dimensional array. All types of data – that is, scalars, vectors, and matrices – are special types of tensors:

Type of Data    Tensor        Shape
Scalar          0-D Tensor    []
Vector          1-D Tensor    [D_0]
Matrix          2-D Tensor    [D_0, D_1]
Tensors         N-D Tensor    [D_0, D_1, ..., D_{n-1}]
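As a small illustration of the shapes in the table (the literal values here are arbitrary examples), the following constants are 0-D, 1-D, and 2-D tensors respectively:

t_scalar = tf.constant(4)                  # shape []
t_vector = tf.constant([1, 2, 3])          # shape [3]
t_matrix = tf.constant([[1, 2], [3, 4]])   # shape [2, 2]
print(t_scalar.shape, t_vector.shape, t_matrix.shape)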

TensorFlow supports three types of tensors:

1. Constants: Constants are tensors whose values cannot be changed.
2. Variables: We use variable tensors when values require updating within a session. For example, in the case of neural networks, the weights need to be updated during the training session; this is achieved by declaring the weights as variables. Variables need to be explicitly initialized before use (see the sketch after this list). Another important thing to note is that constants are stored in the computational graph definition and are loaded every time the graph is loaded, so they are memory-intensive. Variables, on the other hand, are stored separately; they can exist on parameter servers.
3. Placeholders: These are used to feed values into a TensorFlow graph at run time, typically to supply training data.
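The following is a minimal sketch of declaring and initializing a variable in TensorFlow 1.x; the weight shape, initializer, and name are illustrative assumptions, not values from the text:

# A weight matrix declared as a variable (shape and initializer are illustrative).
W = tf.Variable(tf.random_normal([3, 2], stddev=0.1), name="weights")

# Variables must be explicitly initialized before use.
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)    # run the initializer for all declared variables
    print(sess.run(W))   # fetch the current value of the weight matrix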

