Raspberry Pi (On Device)

Deploy your Roboflow Train models to Raspberry Pi.
Our Hosted API is suitable for most use cases; it uses battle-tested infrastructure and seamlessly autoscales up and down to handle even the most demanding workloads. But because it is hosted remotely, there are some scenarios where it is not ideal. Our Raspberry Pi deployment option runs directly on your devices for situations where you need to run your model without a reliable Internet connection.

Prerequisites

You'll need a Raspberry Pi 4 (or Raspberry Pi 400) running the 64-bit version of Ubuntu. To verify that you're running a compatible system, type arch into your Raspberry Pi's command line and verify that it outputs aarch64.
Then, open the terminal on the Raspberry Pi and install Docker using the convenience script:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Inference Server

The inference API is available as a Docker container optimized and configured for the Raspberry Pi. To install, simply pull the container:
sudo docker pull roboflow/inference-server:cpu
This will automatically detect your Pi's CPU and pull down the correct container.
Then run it:
sudo docker run --net=host roboflow/inference-server:cpu
You can now use your Pi as a drop-in replacement for the Hosted Inference API (see those docs for example code snippets in several programming languages).
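You can also query the local server over plain HTTP. The sketch below assumes the local server accepts the same request format as the Hosted Inference API (a base64-encoded image POSTed to the model's endpoint), with the hostname swapped for localhost:9001; consult the Hosted API docs for the authoritative request format.
## minimal sketch: POST a base64-encoded image to the local inference
## server, assuming it mirrors the Hosted Inference API endpoint format
import base64
import requests

with open("YOUR_IMAGE.jpg", "rb") as f:
    img_str = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:9001/YOUR_PROJECT/VERSION_NUMBER",
    params={"api_key": "YOUR_PRIVATE_API_KEY"},
    data=img_str,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(resp.json())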

Examples

To begin, install the Roboflow Python package with pip install roboflow.
The project in the example URL is in a public workspace; you can view it on Roboflow Universe. Replace the placeholders in the code below as follows (a filled-in sketch appears just after this list):
  • YOUR_PRIVATE_API_KEY (string): your Private API Key (see: Locating your Private API Key). Do not share your Private API Key; revoke it and generate a new one if it is unintentionally exposed.
  • YOUR_WORKSPACE (string): the workspace_id; roboflow-universe-projects in the example URL.
  • YOUR_PROJECT (string): the project_id; construction-site-safety in the example URL.
  • VERSION_NUMBER (integer): the version number; 25 in the example URL.
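For instance, filled in with the example Universe project above (your own Private API Key is still required), the placeholders map like this:
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("roboflow-universe-projects").project("construction-site-safety")
model = project.version(25, local="http://localhost:9001/").model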
Note: The first call to the model will take a few seconds to download and initialize your model weights; subsequent predictions will be much quicker.
Run inference on a single image:
from roboflow import Roboflow
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model
prediction = model.predict("YOUR_IMAGE.jpg", confidence=40, overlap=30)
## get predictions on hosted images
#prediction = model.predict("YOUR_IMAGE.jpg", hosted=True)
print(prediction.json())
To save the result image with detections drawn on it, append the following code after the line print(prediction.json()):
## save the annotated inference result image
prediction.save("result.jpg")
## plot the prediction result in an on-screen window
#prediction.plot()
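If you'd rather work with individual detections than the raw JSON, you can iterate over the response. This sketch assumes the standard object-detection response shape (a predictions list whose entries carry a class label, a confidence score, and a center-based bounding box):
## iterate over individual detections in the response
results = prediction.json()
for p in results["predictions"]:
    # each entry holds the class label, confidence score, and
    # a center-based bounding box (x, y, width, height)
    print(p["class"], p["confidence"], p["x"], p["y"], p["width"], p["height"])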
Run inference on multiple images and save them to a folder:
from roboflow import Roboflow
import os
import glob
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model
# update save_path if you'd like to save results to another location
save_path = os.curdir  # saves results to the current working directory/folder
# create the save directory if it doesn't already exist
os.makedirs(save_path, exist_ok=True)
# path where the images are stored
raw_data_location = os.curdir
for raw_data_extension in ['.jpg', '.jpeg', '.png']:
    globbed_files = glob.glob(raw_data_location + '/*' + raw_data_extension)
    for img_path in globbed_files:
        prediction = model.predict(img_path, confidence=40, overlap=30)
        print(prediction.json())
        # saving result-images to the specified path in save_path
        prediction.save(f"{save_path}/result_{img_path.split('/')[-1]}")
You can also run in a client-server context and send images to the Pi for inference from another machine on your network; simply replace localhost above with the Pi's local IP address.
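For example, if your Pi's address on the network were 192.168.1.50 (a hypothetical address; substitute your Pi's actual IP), the only change from the earlier examples would be the local parameter:
## run on another machine on the same network; 192.168.1.50 is a
## hypothetical address -- replace it with your Pi's actual local IP
model = project.version(VERSION_NUMBER, local="http://192.168.1.50:9001/").model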

Performance Expectations

We saw about 1.3 frames per second on the Raspberry Pi 400. These results were obtained while operating in a client-server context (so there is some minor network latency involved) with a 416x416 model.
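To gauge throughput on your own hardware, you can time repeated predictions. This is a rough sketch (not the benchmark used above) that averages the latency of the predict call, continuing from the single-image example where model is already constructed:
## rough throughput check: average latency over repeated predictions
## (model comes from the single-image example above)
import time

n = 20
start = time.time()
for _ in range(n):
    model.predict("YOUR_IMAGE.jpg", confidence=40, overlap=30)
elapsed = time.time() - start
print(f"{n / elapsed:.2f} frames per second ({elapsed / n:.3f} s per image)")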
If you need faster speeds, you might want to try using the Luxonis OAK AI Cameras with your Raspberry Pi to accelerate your models.

Offline Mode

The weights for your model are downloaded each time the container runs. Full offline mode support (for autonomous and air-gapped devices) is available for enterprise deployments.