
Raspberry Pi (Legacy)

Deploy your Roboflow Train models to Raspberry Pi.

Last updated 1 month ago


The information previously on this page is no longer available. You can find Roboflow deployment documentation at:

  • Roboflow Inference Documentation at https://inference.roboflow.com

Prefer to learn using video? Check out our Raspberry Pi video guide.

Our Raspberry Pi deployment option runs directly on your devices in situations where you need to run your model without a reliable Internet connection.

Task Support

The following task types are supported by the hosted API:

Task Type | Supported by Hosted API
--- | ---
Object Detection | ✅
Classification | ✅
Instance Segmentation | ✅
Semantic Segmentation | ✅

Deploy a Model to a Raspberry Pi

You will need a Raspberry Pi 4 (or Raspberry Pi 400) running the 64-bit version of Ubuntu. To verify that you're running a compatible system, type arch into your Raspberry Pi's command line and verify that it outputs aarch64.

Then, open the terminal on the Raspberry Pi and install Docker using the convenience script:

 curl -fsSL https://get.docker.com -o get-docker.sh
 sudo sh get-docker.sh

Step 1: Install the Inference Server

The inference API is available as a Docker container optimized and configured for the Raspberry Pi. You can install and run the inference server using the following command:

sudo docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu

Step 2: Install the Roboflow pip Package

Next, install the Roboflow python package with pip install roboflow.

Step 3: Run Inference

from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model

prediction = model.predict("YOUR_IMAGE.jpg", confidence=40, overlap=30)
# get predictions on hosted images
# prediction = model.predict("YOUR_IMAGE.jpg", hosted=True)
print(prediction.json())
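The JSON returned by predict() follows Roboflow's detection response shape: a predictions list of center-based boxes with class and confidence. As a sketch (the sample values below are illustrative, not real output), you can convert each box to corner coordinates for drawing:

```python
# Sample response in the shape returned by prediction.json() for an
# object detection model (values are illustrative, not real output).
result = {
    "predictions": [
        {"x": 190.5, "y": 242.0, "width": 85.0, "height": 110.0,
         "class": "helmet", "confidence": 0.91},
        {"x": 420.0, "y": 250.0, "width": 60.0, "height": 72.0,
         "class": "helmet", "confidence": 0.78},
    ]
}

# Convert each center-based box (x, y = box center) to corner coordinates.
boxes = []
for p in result["predictions"]:
    x0 = p["x"] - p["width"] / 2
    y0 = p["y"] - p["height"] / 2
    x1 = p["x"] + p["width"] / 2
    y1 = p["y"] + p["height"] / 2
    boxes.append((p["class"], p["confidence"], (x0, y0, x1, y1)))

for cls, conf, box in boxes:
    print(f"{cls} ({conf:.0%}) at {box}")
```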

Here is an example result of our inference on a model:

[Image: Inference Result: One Image (Visual Studio Code terminal)]

You can also run in a client-server context and send images to the Pi for inference from another machine on your network. Replace localhost in the local= parameter with the Pi's local IP address.
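For example, the only change on the client machine is the local= URL. A small helper (hypothetical, not part of the roboflow package; the IP address below is a placeholder for your Pi's actual LAN address) makes the substitution explicit:

```python
def pi_server_url(ip: str, port: int = 9001) -> str:
    """Build the local= URL for an inference server running on your LAN."""
    return f"http://{ip}:{port}/"

# On the client machine, pass this instead of the localhost URL, e.g.:
#   model = project.version(VERSION_NUMBER, local=pi_server_url("192.168.1.42")).model
print(pi_server_url("192.168.1.42"))
```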

Performance Expectations

We observed about 1.3 frames per second on the Raspberry Pi 400. These results were obtained while operating in a client-server context (so some minor network latency is involved) with a 416x416 model.
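As a quick back-of-the-envelope check, ~1.3 frames per second corresponds to roughly 770 ms per frame end to end:

```python
# Rough per-frame latency implied by the observed throughput.
fps = 1.3
ms_per_frame = 1000 / fps
print(f"~{ms_per_frame:.0f} ms per frame")
```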

See Also

You can now use your Pi as a drop-in replacement for the Hosted Inference API (see those docs for example code snippets in several programming languages).

To run inference on your model, run the code in Step 3 above, substituting your API key, workspace and project IDs, project version, and image name as relevant. You can learn how to find your API key in our API docs and how to find your workspace and project ID.
