
Chapter 13

In this section, we discussed how to use TensorFlow.js with both vanilla JavaScript and Node.js, with sample applications for the browser and for backend computation.
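By way of a minimal sketch (not the chapter's full example; it assumes the tensorflowjs pip package is installed and uses a tiny stand-in model), a trained tf.keras model can be exported for TensorFlow.js from Python as follows:

import tensorflow as tf
import tensorflowjs as tfjs  # requires: pip install tensorflowjs

# A tiny stand-in model; in practice this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(8,)),
])

# Export in the TensorFlow.js layers format (model.json plus weight shards).
tfjs.converters.save_keras_model(model, "./tfjs_model")

The exported directory can then be loaded in the browser or in Node.js with tf.loadLayersModel().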

Summary

In this chapter we discussed how to use TensorFlow Lite for mobile and IoT devices, and we deployed real applications on Android devices.
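As a minimal sketch of that workflow (using a tiny stand-in model and the standard TF 2.x converter API, not the chapter's full Android example), the conversion step looks like this:

import tensorflow as tf

# A tiny stand-in model; in practice this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert the Keras model to the TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional post-training quantization shrinks the model for mobile/IoT targets.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Ship this file with the Android (or IoT) application.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)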

We then talked about Federated Learning for distributed learning across thousands (or even millions) of mobile devices, taking privacy concerns into account.
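The server-side step of Federated Averaging can be sketched in a few lines of NumPy (this illustrates the algorithm only, not the TensorFlow Federated API; the clients, sizes, and helper below are hypothetical): each client trains locally on its own data, and the server aggregates only the resulting weights, weighted by how many examples each client holds.

import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model weights.

    client_weights: one list of per-layer NumPy arrays per client.
    client_sizes: number of local training examples per client.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Hypothetical round with three clients holding different amounts of data.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [10, 30, 60]
new_global = federated_average(clients, sizes)
print(new_global[0])  # every entry is 1*0.1 + 2*0.3 + 3*0.6 = 2.5

Only the weight updates leave the device; the raw training examples never do, which is the key privacy property.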

The last section of the chapter was devoted to TensorFlow.js, which lets you use TensorFlow with vanilla JavaScript or with Node.js.

The next chapter is about AutoML, a set of techniques that enable domain experts who are unfamiliar with machine learning to use ML techniques easily.

References

1. Quantization-aware training: https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize

2. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference, Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, Dmitry Kalenichenko (Submitted on 15 Dec 2017): https://arxiv.org/abs/1712.05877

3. MobileNetV2: Inverted Residuals and Linear Bottlenecks, Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen (Submitted on 13 Jan 2018 (v1), last revised 21 Mar 2019 (v4)): https://arxiv.org/abs/1801.04381

4. MnasNet: Platform-Aware Neural Architecture Search for Mobile, Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, Quoc V. Le: https://arxiv.org/abs/1807.11626

5. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, May 2017: https://arxiv.org/pdf/1606.00915.pdf

6. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (Submitted on 11 Oct 2018 (v1), last revised 24 May 2019 (v2)): https://arxiv.org/abs/1810.04805

