Chapter 13

Figure 2: TensorFlow Lite internal architecture

A generic example of an application

In this section we are going to see how to convert a model to TensorFlow Lite and then run it. Note that training can still be performed with TensorFlow in the environment that best fits your needs; inference, however, runs on the mobile device. Let's see how with the following code fragment in Python:

import tensorflow as tf

# saved_model_dir is the directory containing the trained SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

The code is self-explanatory. A standard TensorFlow 2.x SavedModel is loaded and converted by using tf.lite.TFLiteConverter.from_saved_model(saved_model_dir). Pretty simple! Note that no specific installation is required: we simply use the tf.lite API (https://www.tensorflow.org/api_docs/python/tf/lite).
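
Although a converted model is normally executed by the TensorFlow Lite runtime on the device itself, it is convenient to sanity-check it from Python with the tf.lite.Interpreter class before deployment. The following is a minimal sketch that feeds random placeholder data, assuming a single float32 input; the input shape is read from the model itself:

import numpy as np
import tensorflow as tf

# Load the converted model and allocate memory for its tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input with the shape the model expects (random data here).
input_data = np.random.rand(*input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run inference and read back the result.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)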

It is also possible to apply a number of optimizations. For instance, post-training quantization can be enabled via the default set of optimizations:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# Enable the default optimizations, which quantize the model post-training.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_quant_model)
