Raspberry Pi (On Device)
Deploy your Roboflow Train models to Raspberry Pi.
Our Hosted API is suitable for most use-cases; it uses battle-tested infrastructure and seamlessly autoscales up and down to handle even the most intense use-cases. But, because it is hosted remotely, there are some scenarios where it's not ideal. Our Raspberry Pi deployment option runs directly on your devices in situations where you need to run your model without a reliable Internet connection.
How to Deploy Computer Vision Models to Raspberry Pi with Docker
You'll need a Raspberry Pi 4 (or Raspberry Pi 400) running the 64-bit version of Ubuntu. To verify that you're running a compatible system, type arch into your Raspberry Pi's command line and verify that it outputs aarch64.

Install Docker
First, install Docker using the official convenience script:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

The inference API is available as a Docker container optimized and configured for the Raspberry Pi. To install it, pull the container:
sudo docker pull roboflow/inference-server:cpu
This will automatically detect your Pi's CPU and pull down the correct container.
Then run it:
sudo docker run --net=host roboflow/inference-server:cpu
You can now use your Pi as a drop-in replacement for the Hosted Inference API (see those docs for example code snippets in several programming languages).

Inference Server Installation: Success
Next, install the Roboflow Python package (pip install roboflow) on the machine that will run the example code below.
![Installing the Roboflow Python Package and Activating the [Virtual] Environment (Visual Studio Code terminal)](https://2486075003-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-M6S9nPJhEX9FYH6clfW%2Fuploads%2F7dp0761bckkuKMN6LeGw%2Finstall_roboflow_python.png?alt=media&token=e99e10f6-d3a0-4365-b166-69dc9f991564)
Installing the Roboflow Python Package and Activating the [Virtual] Environment (Visual Studio Code terminal)
Example Project URL: https://app.roboflow.com/roboflow-universe-projects/construction-site-safety/25

- YOUR_PRIVATE_API_KEY (type: string): your Private API Key (see Locating your Private API Key). Do not share your Private API Key; revoke it and generate a new one if it is unintentionally exposed.
- YOUR_WORKSPACE (type: string): roboflow-universe-projects is the workspace_id in the example URL
- YOUR_PROJECT (type: string): construction-site-safety is the project_id in the example URL
- VERSION_NUMBER (type: integer): 25 is the version number in the example URL
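The placeholders map directly onto segments of the project URL. As a quick sketch, you can pull them out of the example URL above:

```python
# Split the example project URL into workspace_id, project_id, and version.
url = "https://app.roboflow.com/roboflow-universe-projects/construction-site-safety/25"
workspace_id, project_id, version = url.split("/")[-3:]
print(workspace_id, project_id, int(version))
# → roboflow-universe-projects construction-site-safety 25
```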
Note: The first call to the model will take a few seconds to download and initialize your model weights; subsequent predictions will be much quicker.
Run inference on a single image:

First Inference Call: Download and Cache Model Weights
from roboflow import Roboflow
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model
prediction = model.predict("YOUR_IMAGE.jpg", confidence=40, overlap=30)
## to run inference on an image hosted online, pass its URL with hosted=True
# prediction = model.predict("https://YOUR_IMAGE_URL.jpg", hosted=True)
print(prediction.json())

Inference Result: One Image (Visual Studio Code terminal)
To save results with detections drawn on the image, append the following code after the line print(prediction.json()):
## save the annotated result image
prediction.save("result.jpg")
## plot the prediction result in an on-screen window
#prediction.plot()
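The dictionary returned by prediction.json() can also be post-processed directly. A minimal sketch of iterating over detections, using a hard-coded stand-in for the response (the field names follow the standard Roboflow detection response; the values here are illustrative, not real results):

```python
# 'result' mimics the shape of prediction.json(); in real use you would call
# result = prediction.json() instead of hard-coding it.
result = {
    "predictions": [
        {"class": "Hardhat", "confidence": 0.91,
         "x": 120, "y": 80, "width": 64, "height": 60},
    ]
}

# Print one line per detected object: label, confidence, and box center.
for det in result["predictions"]:
    print(f'{det["class"]} {det["confidence"]:.0%} at ({det["x"]}, {det["y"]})')
```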
Run inference on multiple images and save them to a folder:
from roboflow import Roboflow
import os
import glob

rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model

# update save_path if you'd like to save results to another location
save_path = os.curdir  # saves results to the current working directory/folder
# path where the images are stored
raw_data_location = os.curdir

# create the output folder if it doesn't already exist
if not os.path.exists(save_path):
    os.mkdir(save_path)

for raw_data_extension in ['.jpg', '.jpeg', '.png']:
    globbed_files = glob.glob(raw_data_location + '/*' + raw_data_extension)
    for img_path in globbed_files:
        prediction = model.predict(img_path, confidence=40, overlap=30)
        print(prediction.json())
        # save the result image to save_path
        prediction.save(f"{save_path}/result_{os.path.basename(img_path)}")

Inference Result: Multiple Images
You can also run in a client-server context and send images to the Pi for inference from another machine on your network; simply replace localhost above with the Pi's local IP address. In our testing, we saw about 1.3 frames per second on the Raspberry Pi 400. These results were obtained while operating in a client-server context (so there is some minor network latency involved) with a 416x416 model.
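At roughly 1.3 frames per second, each frame takes a bit under a second end to end, which is worth keeping in mind when sizing a pipeline. A quick back-of-envelope sketch (the 1.3 fps figure comes from the test above; the rest is arithmetic):

```python
# Back-of-envelope throughput math for the ~1.3 fps figure measured above.
fps = 1.3
latency_ms = 1000 / fps        # milliseconds per frame, end to end
frames_per_minute = fps * 60   # frames processed per minute

print(round(latency_ms))        # → 769 (ms per frame)
print(round(frames_per_minute)) # → 78 (frames per minute)
```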
If you need faster speeds you might want to try using the Luxonis OAK AI Cameras with your Raspberry Pi to accelerate your models.
The weights for your model are downloaded each time the container runs. Full offline mode support (for autonomous and air-gapped devices) is available for enterprise deployments.