Deploy your Roboflow Train models to Raspberry Pi.
Our Raspberry Pi deployment option runs directly on your device, for situations where you need to run your model without a reliable Internet connection.
The following task types are supported by the inference server:
You will need a Raspberry Pi 4 (or Raspberry Pi 400) running the 64-bit version of Ubuntu. To verify that you're running a compatible system, type `arch` into your Raspberry Pi's command line and verify that it outputs `aarch64`.
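If you are provisioning several devices, the same check can be scripted; this is a minimal sketch, and the `check_arch` helper is our own name, not something from the docs:

```shell
# Hypothetical helper: succeeds only when the reported architecture is 64-bit ARM
check_arch() {
    [ "$1" = "aarch64" ]
}

if check_arch "$(arch)"; then
    echo "Compatible: 64-bit ARM"
else
    echo "Incompatible architecture: $(arch)"
fi
```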
First, install Docker on your Raspberry Pi:

```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
The inference API is available as a Docker container optimized and configured for the Raspberry Pi. You can install and run the inference server using the following command:
```shell
sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu
```
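If you want the inference server to come back up automatically after a crash or reboot, you can run the same image under Docker Compose with a restart policy. This is a minimal sketch under assumptions of our own (the service name and file layout are not from the original docs):

```yaml
# docker-compose.yml (hypothetical layout)
services:
  inference:
    image: roboflow/roboflow-inference-server-arm-cpu
    ports:
      - "9001:9001"
    restart: unless-stopped
```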
Then, use the Roboflow Python SDK from any machine to run inference against the local server:

```python
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model

# get predictions on a local image
prediction = model.predict("YOUR_IMAGE.jpg", confidence=40, overlap=30)

## get predictions on hosted images
# prediction = model.predict("YOUR_IMAGE.jpg", hosted=True)
```
Here is an example result of running inference on a model:

*Inference Result: One Image (Visual Studio Code terminal)*
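Each detection in the result uses Roboflow's box format, where `x` and `y` are the center of the box. If you need corner coordinates instead (e.g. for cropping), a small helper can convert them; `to_corners` is a hypothetical name of our own, not part of the SDK:

```python
def to_corners(pred):
    """Convert a Roboflow-style prediction box (center x/y plus width/height)
    to (x0, y0, x1, y1) corner coordinates. Hypothetical helper, not in the SDK."""
    x0 = pred["x"] - pred["width"] / 2
    y0 = pred["y"] - pred["height"] / 2
    x1 = pred["x"] + pred["width"] / 2
    y1 = pred["y"] + pred["height"] / 2
    return x0, y0, x1, y1

# Example: a detection centered at (50, 40) that is 20x10 pixels
# to_corners({"x": 50, "y": 40, "width": 20, "height": 10})  ->  (40.0, 35.0, 60.0, 45.0)
```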
You can also run in a client-server context and send images to the Pi for inference from another machine on your network. To do so, replace `localhost` in the `local=` parameter with the Pi's local IP address (for example, `local="http://192.168.1.50:9001/"`).
We saw about 1.3 frames per second on the Raspberry Pi 400. These results were obtained while operating in a client-server context (so there is some minor network latency involved) with a 416x416 model.
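To get a comparable throughput number on your own hardware, you can time repeated calls to `model.predict`. The sketch below assumes a `measure_fps` helper of our own (not part of the Roboflow SDK) and a list of pre-loaded image paths:

```python
import time

def measure_fps(predict_fn, frames, warmup=1):
    """Frames per second for predict_fn over frames, skipping warmup calls.

    Hypothetical helper -- not part of the Roboflow SDK.
    """
    for frame in frames[:warmup]:
        predict_fn(frame)  # warm-up: the first call may include connection setup
    start = time.perf_counter()
    for frame in frames[warmup:]:
        predict_fn(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Example usage against the local inference server (model from the snippet above):
# fps = measure_fps(lambda f: model.predict(f, confidence=40), ["YOUR_IMAGE.jpg"] * 20)
# print(f"{fps:.2f} FPS")
```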