Raspberry Pi (Legacy)
Deploy your Roboflow Train models to Raspberry Pi.
The information previously on this page is no longer available. You can find current Roboflow deployment documentation in the Roboflow Inference documentation.
Our Raspberry Pi deployment option runs directly on your devices in situations where you need to run your model without a reliable Internet connection.
The following task types are supported by the Raspberry Pi deployment:
Object Detection
Classification
Instance Segmentation
Semantic Segmentation
You will need a Raspberry Pi 4 (or Raspberry Pi 400). To verify that you're running a compatible system, type `arch` into your Raspberry Pi's command line and verify that it outputs `aarch64`.
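As a sketch, the compatibility check above can be scripted so an incompatible 32-bit OS is flagged immediately (the fallback to `uname -m` is an addition for systems where `arch` is unavailable):

```shell
# The ARM inference container requires a 64-bit OS (aarch64).
ARCH="$(arch 2>/dev/null || uname -m)"
if [ "$ARCH" = "aarch64" ]; then
  echo "compatible: $ARCH"
else
  echo "incompatible: $ARCH (re-image with a 64-bit Raspberry Pi OS)"
fi
```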
Then, open the terminal on the Raspberry Pi and install Docker:
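The install instructions originally linked here are no longer present; Docker's documented convenience script is one way to install it on Raspberry Pi OS (the `pi` username below is an assumption, substitute your own user):

```shell
# Install Docker via Docker's official convenience script (needs network access).
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: allow your user to run docker without sudo (takes effect on next login).
sudo usermod -aG docker pi
```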
The inference API is available as a Docker container optimized and configured for the Raspberry Pi. You can install and run the inference server using the following command:
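The command block that followed this sentence is no longer present on the page. A sketch of the legacy invocation, assuming the ARM CPU image name the legacy server used (`roboflow/roboflow-inference-server-arm-cpu`); check your project's Deploy tab for the current image and command:

```shell
# Pull and run the inference server container on the Pi.
# --net=host exposes the server's default port (9001) on the Pi's LAN address.
sudo docker run --net=host roboflow/roboflow-inference-server-arm-cpu
```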
Next, install the Roboflow Python package with `pip install roboflow`.
Here is an example result of our inference on a model:
You can also run in a client-server context and send images to the Pi for inference from another machine on your network. Replace `localhost` in the `local=` parameter with the Pi's local IP address.
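For illustration, here is a small helper (hypothetical, not part of the Roboflow package) that rewrites the server URL you would pass via the `local=` parameter, swapping `localhost` for a placeholder LAN address:

```python
from urllib.parse import urlsplit, urlunsplit

def point_at_pi(local_url: str, pi_ip: str) -> str:
    """Replace the localhost host in an inference-server URL with the
    Pi's LAN address, keeping the scheme, port, and path intact."""
    parts = urlsplit(local_url)
    port = f":{parts.port}" if parts.port else ""
    return urlunsplit((parts.scheme, f"{pi_ip}{port}",
                       parts.path, parts.query, parts.fragment))

# 192.168.1.42 is a placeholder for your Pi's actual local IP address.
print(point_at_pi("http://localhost:9001/", "192.168.1.42"))
```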
We observed about 1.3 frames per second on the Raspberry Pi 400. These results were obtained while operating in a client-server context (so there is some minor network latency involved) with a 416x416 model.
You can now use your Pi as a drop-in replacement for the Hosted Inference API (see those docs for example code snippets in several programming languages).
To run inference on your model, run the following code, substituting your API key, workspace and project IDs, project version, and image name as relevant. You can learn how to find your API key and your workspace and project IDs in our documentation.
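The code block that followed this paragraph is no longer present. A minimal sketch using the Roboflow Python package, assuming placeholder workspace and project IDs and an object detection model (the `confidence` argument applies to detection):

```python
from roboflow import Roboflow

# All values below are placeholders -- substitute your own API key,
# workspace ID, project ID, version number, and image path.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
model = project.version(1).model

# To send inference to the Pi instead of the hosted API, load the version
# with local="http://<your-pi-ip>:9001/" before accessing .model.
prediction = model.predict("your_image.jpg", confidence=40)
print(prediction.json())
```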