TensorFlow 1.x and 2.x

The Dataset API uses TFRecord, a representation of the data (in any format) that can be easily ported across multiple systems and is independent of the particular model used for training. In short, the Dataset API allows more flexibility than what we had in TensorFlow 1.0 with feed_dict.
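For instance, a minimal sketch of reading TFRecord data with the tf.data API might look like the following (the file name data.tfrecord and the single float feature x are hypothetical placeholders, not taken from the book):

import tensorflow as tf

# Hypothetical TFRecord file and feature specification.
raw_dataset = tf.data.TFRecordDataset(["data.tfrecord"])
feature_spec = {"x": tf.io.FixedLenFeature([], tf.float32)}

def parse_example(serialized):
    # Decode one serialized tf.train.Example into a dict of tensors.
    return tf.io.parse_single_example(serialized, feature_spec)

# Build an input pipeline: parse, shuffle, and batch the records.
dataset = raw_dataset.map(parse_example).shuffle(buffer_size=100).batch(32)

The resulting dataset can then be fed directly to a model, with no feed_dict involved.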

tf.keras or Estimators?

In addition to the direct graph computation and the tf.keras higher-level APIs, TensorFlow 1.x and 2.x have an additional set of higher-level APIs called Estimators. With Estimators, you do not need to worry about creating computational graphs or handling sessions, since Estimators deal with this on your behalf, in a similar way to tf.keras.

But what are Estimators? Put simply, they are another way to build, or to use pre-built, building blocks. A longer answer is that they are highly efficient learning models for large-scale, production-ready environments, which can be trained on single machines or on distributed multi-server environments, and they can run on CPUs, GPUs, or TPUs without recoding your model. These models include Linear Classifiers, Deep Learning Classifiers, Gradient Boosted Trees, and many more, which will be discussed in the upcoming chapters.

Let's see an example of an Estimator used for building a classifier with 2 dense hidden layers, each with 10 neurons, and with 3 output classes:

# Build a DNN with 2 hidden layers and 10 nodes in each hidden layer.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    # Two hidden layers of 10 nodes each.
    hidden_units=[10, 10],
    # The model must choose between 3 classes.
    n_classes=3)

The feature_columns argument (here, my_feature_columns) is a list of feature columns, each describing a single input feature you want the model to use. For example, a typical use would be something like:

# Fetch the data
(train_x, train_y), (test_x, test_y) = load_data()

# Feature columns describe how to use the input.
my_feature_columns = []
for key in train_x.keys():
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))
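Once the feature columns are defined, training and evaluation typically go through input functions built on tf.data. The sketch below follows the standard premade-Estimator pattern; the input functions, batch size, and number of steps are illustrative, not taken from the book:

def train_input_fn(features, labels, batch_size):
    # Convert the inputs to a Dataset, then shuffle, repeat, and batch.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    return dataset.shuffle(1000).repeat().batch(batch_size)

def eval_input_fn(features, labels, batch_size):
    # No shuffling or repeating is needed for evaluation.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    return dataset.batch(batch_size)

# Train the classifier for a fixed number of steps.
classifier.train(
    input_fn=lambda: train_input_fn(train_x, train_y, batch_size=32),
    steps=1000)

# Evaluate on the held-out test data.
eval_result = classifier.evaluate(
    input_fn=lambda: eval_input_fn(test_x, test_y, batch_size=32))
print(eval_result)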

