Tutorial: Deploying TensorFlow models at the edge with NVIDIA Jetson Nano and K3s


Janakiram MSV put together this tutorial on running TensorFlow models as microservices at the edge. The Jetson Nano, a powerful edge computing device, runs the K3s distribution from Rancher Labs. It can act as a single-node K3s cluster or join an existing K3s cluster as an agent.

The Jetson platform from NVIDIA runs L4T (Linux for Tegra), a Linux distribution based on Ubuntu 18.04. The OS, along with the CUDA-X drivers and SDKs, is packaged into JetPack, a comprehensive software stack for the Jetson family of products such as the Jetson Nano and Jetson Xavier.

Accessing the GPU from a K3s cluster through a custom Docker runtime is a powerful mechanism for running AI at the edge in a cloud native environment. With TensorFlow running within Kubernetes at the edge, you can deploy deep learning models as microservices.
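On a Jetson device, the custom runtime step usually means telling Docker to use NVIDIA's container runtime by default, so every container K3s schedules can see the GPU. A minimal sketch of what `/etc/docker/daemon.json` might look like (JetPack ships the `nvidia-container-runtime` binary; verify the exact name and path on your device):

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

After editing the file, restart Docker (for example with `sudo systemctl restart docker`) so the new default runtime takes effect.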

The tutorial is split into two parts:

  • Configure Docker runtime
  • Install K3s on Jetson Nano
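The second step can point K3s at the Docker runtime configured in the first, instead of K3s's bundled containerd. A hedged sketch using the official K3s install script (`--docker`, `K3S_URL`, and `K3S_TOKEN` are standard K3s options; the server address and token below are placeholders you must replace with your own):

```shell
# Single-node K3s server on the Jetson Nano, using Docker
# (and therefore the NVIDIA default runtime) instead of
# the embedded containerd:
curl -sfL https://get.k3s.io | sh -s - --docker

# Or join an existing K3s cluster as an agent only
# (replace the URL and token with your server's values):
curl -sfL https://get.k3s.io | \
  K3S_URL="https://<server-ip>:6443" \
  K3S_TOKEN="<node-token>" \
  sh -s - --docker
```

On the K3s server, the join token is typically found at `/var/lib/rancher/k3s/server/node-token`.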

You will also get code examples and screenshots to guide you through the tutorial. Well done!

[Read More]

Tags containers data-science kubernetes devops docker