NVIDIA Jetson (Legacy)

Deploy your Roboflow model on the edge to the NVIDIA Jetson

This is the legacy (outdated) version of this page. See the updated page here.

Prefer to learn using video? Check out our NVIDIA Jetson deployment guide video.

The Roboflow Inference Server is a drop-in replacement for the Hosted Inference API that can be deployed on your own hardware. We have optimized it to get maximum performance from the NVIDIA Jetson line of edge-AI devices by tailoring the drivers, libraries, and binaries specifically to its CPU and GPU architectures.

Task Support

The following task types are supported by the NVIDIA Jetson inference server:

Task Type | Supported by NVIDIA Jetson
Object Detection | ✅
Classification | ✅
Instance Segmentation | ✅
Semantic Segmentation | ✅

Installation

You can take the edge-accelerated version of your model to the NVIDIA Jetson for cases where you need real-time speeds with limited hardware resources.

Step #1: Flash Jetson Device

Ensure that your Jetson is flashed with Jetpack 4.5, 4.6, or 5.1. You can check your existing Jetpack version with this repository from Jetson Hacks:

git clone https://github.com/jetsonhacks/jetsonUtilities.git
cd jetsonUtilities
python jetsonInfo.py

Step #2: Run Docker Container

Next, run the Roboflow Inference Server using the accompanying Docker container:

sudo docker run --privileged --net=host --runtime=nvidia --mount source=roboflow,target=/tmp/cache -e NUM_WORKERS=1 roboflow/roboflow-inference-server-jetson-4.5.0:latest
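
A quick note on the flags above: --runtime=nvidia gives the container access to the Jetson GPU, --net=host exposes the server on port 9001 of the host (as used in the curl example below), and --mount source=roboflow,target=/tmp/cache keeps the server's model cache in a named Docker volume so downloaded weights persist across container restarts.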

The Docker image you need depends on which Jetpack version you are using:

  • Jetpack 4.5: roboflow/roboflow-inference-server-jetson-4.5.0

  • Jetpack 4.6: roboflow/roboflow-inference-server-jetson-4.6.1

  • Jetpack 5.1: roboflow/roboflow-inference-server-jetson-5.1.1

The Jetson images default to using the CUDA execution provider. To use TensorRT, set the environment variable ONNXRUNTIME_EXECUTION_PROVIDERS=TensorrtExecutionProvider (for example, by adding -e ONNXRUNTIME_EXECUTION_PROVIDERS=TensorrtExecutionProvider to the docker run command above). Note that while TensorRT can increase performance, it also incurs an additional startup compilation cost.

Step #3: Use the Server

You can now use the server to run inference on any of your models. The following command shows the syntax for making a request to the inference API via curl:

base64 your_img.jpg | curl -d @- "http://localhost:9001/[YOUR MODEL]/[YOUR VERSION]?api_key=[YOUR API KEY]"

When you send a request for the first time, your model will compile on your Jetson device for 5-10 minutes.
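
If you prefer Python, the same request can be made with the requests library. Below is a minimal sketch that mirrors the curl command above; the model name, version, and API key are placeholders you should replace with your own values.

import base64
import requests

# Base64-encode the image, mirroring `base64 your_img.jpg` above
with open("your_img.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

# Replace the placeholders with your model ID, version number, and API key
resp = requests.post(
    "http://localhost:9001/your-model/1",
    params={"api_key": "YOUR_API_KEY"},
    data=img_b64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(resp.json())

For an object detection model, the JSON response includes a predictions array with one entry per detected object.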

Expected Performance

There are many factors that affect the performance of a particular inference pipeline including model size, input image size, model input size, confidence threshold, etc. For those looking for a rough estimate of performance, we provide the benchmarks below:

Config:

  • Model Type: Roboflow 3.0 Fast
  • Model Input Resolution: 640 x 640
  • Input Image Size: 1024 x 1024
  • Hardware: Jetson Orin Nano running Jetpack 5.1.1

Performance:

  • Python Script via pip install inference: 30 FPS
  • HTTP Requests to roboflow/roboflow-inference-server-jetson-5.1.1:0.9.1: 15 FPS

More benchmarks for varying configurations coming soon!
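
For a rough illustration of the direct (non-HTTP) path in the first row above, the sketch below loads a model through the inference Python package and runs it on a single frame. It assumes a recent version of the package; the import path, model ID, and API key are placeholders that may need adapting to your project and inference version.

# pip install inference
import cv2
from inference import get_model  # import path may differ in older versions of the package

# Hypothetical model ID and API key -- replace with your own
model = get_model(model_id="your-model/1", api_key="YOUR_API_KEY")

frame = cv2.imread("your_img.jpg")
predictions = model.infer(frame)  # runs in-process on the Jetson, no HTTP round trip
print(predictions)

This in-process path corresponds to the "Python Script via pip install inference" row above; the HTTP row reflects requests made to the Docker container instead.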
