TensorFlow for Mobile and IoT and TensorFlow.js

Supported platforms

On Android, TensorFlow Lite inference can be performed using either Java or C++. On iOS, TensorFlow Lite inference can run in Swift and Objective-C. On Linux platforms (such as the Raspberry Pi), inference runs in C++ and Python. TensorFlow Lite for Microcontrollers is an experimental port of TensorFlow Lite designed to run machine learning models on microcontrollers based on Arm Cortex-M Series processors (https://developer.arm.com/ip-products/processors/cortex-m), including the Arduino Nano 33 BLE Sense (https://store.arduino.cc/usa/nano-33-ble-sense-with-headers), the SparkFun Edge (https://www.sparkfun.com/products/15170), and the STM32F746 Discovery kit (https://www.st.com/en/evaluation-tools/32f746gdiscovery.html). These microcontrollers are frequently used for IoT applications.

Architecture

The architecture of TensorFlow Lite is described in Figure 2 (from https://www.tensorflow.org/lite/convert/index). As you can see, both tf.keras (that is, TensorFlow 2.x) and low-level APIs are supported. A standard TensorFlow 2.x model is converted with the TFLite Converter and saved in the TFLite FlatBuffer format (a .tflite file), which is then executed by the TFLite interpreter on the available devices (GPUs, CPUs) and via native device APIs. The concrete function in Figure 2 defines a graph that can be converted to a TensorFlow Lite model or exported to a SavedModel.
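
As a minimal sketch of this conversion path (assuming a TensorFlow 2.x installation; the tiny Sequential model here is purely illustrative), a tf.keras model can be turned into a .tflite FlatBuffer as follows:

import tensorflow as tf

# Build (or load) a standard tf.keras model; this toy model is illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])

# Convert the model to the TFLite FlatBuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the serialized FlatBuffer; by convention the file ends in .tflite.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

The converter also provides from_saved_model() and from_concrete_functions() constructors for the other entry points shown in Figure 2.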

Using TensorFlow Lite

Using TensorFlow Lite involves the following steps:

1. Model selection: A standard TensorFlow 2.x model is selected for solving a specific task. This can be either a custom-built model or a pretrained model.
2. Model conversion: The selected model is converted with the TensorFlow Lite converter, generally invoked with a few lines of Python code.
3. Model deployment: The converted model is deployed on the chosen device, either a phone or an IoT device, and is then run using the TensorFlow Lite interpreter. As discussed, APIs are available for multiple languages.
4. Model optimization: The model can optionally be optimized using the TensorFlow Lite optimization framework (a sketch covering steps 3 and 4 follows this list).
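
As a rough sketch of steps 3 and 4 (assuming the model.tflite file produced by the earlier conversion example; the random input below is just a stand-in for real data), the converted model can be run with the Python interpreter API, and optimization can be requested at conversion time:

import numpy as np
import tensorflow as tf

# Step 3: load the converted model into the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed an input with the expected shape and dtype (random data as a placeholder).
input_data = np.random.rand(*input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run inference and read the result.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)

# Step 4 (optional): optimization is requested on the converter before
# converting, for example post-training quantization via the default strategy:
# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# tflite_quantized_model = converter.convert()

On a device such as the Raspberry Pi, the same Interpreter class is also available from the much smaller tflite_runtime package, so a full TensorFlow installation is not required on the target.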
