To know the value, we need to create a session and explicitly use the run command with the desired tensor, as done in the following code:

print(sess.run(t_1)) # Will print the value of t_1 defined in step 1
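
If you are reading this section in isolation, the following is a minimal self-contained sketch of the same pattern. The definition of t_1 as a constant is an assumption standing in for the tensor created in step 1 of the earlier example:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Assumed definition of t_1; the book creates it in step 1 of an earlier example.
t_1 = tf.constant([1, 2, 3])

with tf.Session() as sess:
    # Nothing is computed until run() is invoked on the session.
    print(sess.run(t_1))  # prints [1 2 3]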

An example of TensorFlow 1.x in TensorFlow 2.x

We can see that the TensorFlow 1.x APIs offer flexible ways to create and manipulate computational graphs representing (deep) neural networks and many other types of machine learning programs. TensorFlow 2.x, on the other hand, offers higher-level APIs that abstract away more of the low-level details. To conclude, let's go back to an example of the TensorFlow 1.x program that we encountered in the previous chapter. Here, we also add a line that writes the computational graph to disk so that we can visualize it:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Placeholder for the input vector; its value is fed in at run time.
in_a = tf.placeholder(dtype=tf.float32, shape=(2))

def model(x):
    with tf.variable_scope("matmul"):
        W = tf.get_variable("W", initializer=tf.ones(shape=(2,2)))
        b = tf.get_variable("b", initializer=tf.zeros(shape=(2)))
        return x * W + b

out_a = model(in_a)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    outs = sess.run([out_a],
                    feed_dict={in_a: [1, 0]})
    # Write the graph so that TensorBoard can visualize it.
    writer = tf.summary.FileWriter("./logs/example", sess.graph)
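
As a quick sanity check (not part of the original listing), we can print outs right after the session block. Note that x * W is an element-wise product in which x is broadcast against the rows of W, so with W all ones and b all zeros the result should be a 2x2 array:

print(outs)
# Expected to print something like:
# [array([[1., 0.],
#         [1., 0.]], dtype=float32)]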

Please note that the expression x * W + b is just the linear perceptron defined in the previous chapter. Now let's start a visualization application called TensorBoard to show the computational graph:

tensorboard --logdir=./logs/example/

And let's open a browser pointing to http://localhost:6006/#graphs&run=.

You should see something similar to the following graph:

Figure 1: An example of a computational graph

This section provided an overview of TensorFlow 1.x programming paradigms. Now, let's turn our attention to what is new in TensorFlow 2.x.

Understanding TensorFlow 2.x

As discussed, TensorFlow 2.x recommends using a high-level API such as tf.keras, but keeps the low-level APIs typical of TensorFlow 1.x for when there is a need for more control over internal details. tf.keras and TensorFlow 2.x come with some great benefits. Let's review them.

Eager execution

TensorFlow 1.x defines static computational graphs. This type of declarative programming might be confusing for many people. However, Python is typically more dynamic. So, following the Python spirit, PyTorch, another popular deep learning package, defines things in a more imperative and dynamic way: you still have a graph, but you can define, change, and execute nodes on the fly, with no special session interfaces or placeholders. This is what is called eager execution, meaning that the model definitions are dynamic and execution is immediate. Graphs and sessions should be considered implementation details.
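
To make the contrast concrete, here is a minimal sketch (not from the original text) of the same two-input model written eagerly in TensorFlow 2.x; the names W, b, and model simply mirror the earlier example:

import tensorflow as tf

# Tensors are created directly; no variable_scope or get_variable needed.
W = tf.ones(shape=(2, 2))
b = tf.zeros(shape=(2))

def model(x):
    return x * W + b  # the same element-wise perceptron as before

# No placeholder, no session: the call executes immediately.
out_a = model(tf.constant([1.0, 0.0]))
print(out_a)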
