
Chapter 13

}

model.add(tf.layers.dense({
  units: NUM_OUTPUT_CLASSES,
  kernelInitializer: 'varianceScaling',
  activation: 'softmax'
}));

const optimizer = tf.train.adam();

model.compile({
  optimizer: optimizer,
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy'],
});

return model;
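The output layer above pairs a softmax activation with the categorical cross-entropy loss. As a quick illustration of the math these two settings stand for, here is a plain-JavaScript sketch (the function names are our own, not part of TensorFlow.js):

```javascript
// Softmax turns raw logits into a probability distribution.
function softmax(logits) {
  // Subtract the max logit for numerical stability before exponentiating.
  const max = Math.max(...logits);
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Categorical cross-entropy: -sum(y_i * log(p_i)).
// With a one-hot label, only the true-class term is non-zero.
function categoricalCrossentropy(oneHot, probs) {
  return -oneHot.reduce((acc, y, i) => acc + y * Math.log(probs[i]), 0);
}

const probs = softmax([2.0, 1.0, 0.1]); // ~[0.659, 0.242, 0.099]
const loss = categoricalCrossentropy([1, 0, 0], probs); // ~0.417
```

The loss shrinks toward zero as the softmax probability assigned to the true class approaches 1, which is exactly what the optimizer minimizes during training.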

The model is then trained for 10 epochs with batches drawn from the training dataset and validated inline using batches from the test dataset. Best practice would be to carve a separate validation dataset out of the training set. However, to keep the focus on the more important aspect, showing how to use TensorFlow.js to build an end-to-end DL pipeline, we use the external data.js file provided by Google, which only exposes functions returning a training batch and a test batch. In our example, we therefore use the test dataset both for validation and for evaluation later. This is likely to yield better accuracies than we would achieve with a test set that was unseen during training, but that is unimportant for an illustrative example such as this one:

async function train(model, data) {
  const metrics = ['loss', 'val_loss', 'acc', 'val_acc'];
  const container = {
    name: 'Model Training', styles: { height: '1000px' }
  };
  const fitCallbacks = tfvis.show.fitCallbacks(container, metrics);

  const BATCH_SIZE = 512;
  const TRAIN_DATA_SIZE = 5500;
  const TEST_DATA_SIZE = 1000;

  const [trainXs, trainYs] = tf.tidy(() => {
    const d = data.nextTrainBatch(TRAIN_DATA_SIZE);
    return [
      d.xs.reshape([TRAIN_DATA_SIZE, 28, 28, 1]),
      d.labels
    ];
  });
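As noted above, the best practice this example skips is to hold out part of the training data for validation instead of reusing the test set. A minimal sketch of such a split on plain arrays (the helper name is our own, not part of data.js):

```javascript
// Split feature/label arrays into a training part and a held-out
// validation part. valFraction is the share reserved for validation.
function splitTrainVal(xs, ys, valFraction) {
  const nVal = Math.floor(xs.length * valFraction);
  const nTrain = xs.length - nVal;
  return {
    trainXs: xs.slice(0, nTrain),
    trainYs: ys.slice(0, nTrain),
    valXs: xs.slice(nTrain),
    valYs: ys.slice(nTrain),
  };
}

const xs = Array.from({ length: 10 }, (_, i) => i);
const ys = xs.map((x) => x % 2);
const split = splitTrainVal(xs, ys, 0.2); // 8 training / 2 validation items
```

In the TensorFlow.js code, the resulting validation pair would then be passed to model.fit via its validationData option in place of the test batch.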

