Deploy a Workflow

You can deploy a Workflow in four ways:

  1. Send images to the Roboflow API for processing using your Workflow.

  2. Create a Roboflow Dedicated Deployment on infrastructure provisioned exclusively for your use.

  3. Run your Workflow on your own hardware using Roboflow Inference.

  4. Schedule a batch job in Roboflow Cloud to automate the processing of large amounts of data without coding.

If you run your Workflow on your own hardware, you can run it on both images and video files (including streams from regular webcams and professional CCTV cameras).

By choosing on-premises deployment, you can run Workflows on any system where you can deploy Inference. This includes:

  • NVIDIA Jetson

  • AWS EC2, GCP Compute Engine, and Azure Virtual Machines

  • Raspberry Pi

Roboflow Enterprise customers have access to additional video stream options, such as running inference on Basler cameras. To learn more about our offerings, contact the Roboflow sales team.

Deploy a Workflow

To deploy a workflow, click the "Deploy" button in the top left corner of the Workflows editor. All deployment options are documented on this page.

The code snippets in the Workflows editor are pre-filled with your Workflow URL and API key.

To learn more about usage limits for Workflows, refer to the Roboflow pricing page.

Process Images

You can run your Workflow on single images using the Roboflow API or a local Inference server. If you want to run locally, first follow the official Docker installation instructions to install Docker on your machine so you can start an Inference server.

First, install the Roboflow Inference SDK and CLI. If you plan to run locally, also start an Inference server:

pip install inference-sdk inference-cli
inference server start  # only needed when running against a local Inference server

Then, create a new Python file and add the following code:

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",  # or "http://127.0.0.1:9001" for local deployment
    api_key="API_KEY"
)

result = client.run_workflow(
    workspace_name="workspace-name",
    workflow_id="workflow-id",
    images={
        "image": "YOUR_IMAGE.jpg"
    }
)

Above, replace API_KEY with your Roboflow API key, and replace workspace-name and workflow-id with your Roboflow workspace name and Workflow ID.

To find these values, open your Roboflow Workflow and click "Deploy Workflow". Then, copy your workspace name and workflow ID from the code snippet that appears on the page.

Local execution works on CPU and NVIDIA CUDA GPU devices. For the best performance, deploy on a GPU-enabled device such as an NVIDIA Jetson or a cloud server with an NVIDIA GPU.
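
The object returned by run_workflow mirrors the outputs you defined in the Workflow editor, so the exact keys depend on your Workflow. As a minimal sketch (the output names mentioned in the comments are placeholders, not part of the SDK), you can continue from the snippet above and inspect what came back:

# Continuing from the snippet above: inspect what the Workflow returned.
# run_workflow typically returns one entry per input image; each entry maps
# the output names defined in your Workflow editor to their values.
entries = result if isinstance(result, list) else [result]

for entry in entries:
    for output_name, value in entry.items():
        # e.g. "predictions" or "output_image", depending on your Workflow
        print(output_name, type(value))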

Process Video Stream (RTSP, Webcam)

You can deploy your Workflow on frames from a video stream. This can be a webcam or an RTSP stream. You can also run your Workflow on video files.

First, install Inference:

pip install inference  # or inference-gpu for GPU machines

It may take a few minutes for Inference to install.

Then, create a new Python file and add the following code:

# Import the InferencePipeline object
from inference import InferencePipeline

def my_sink(result, video_frame):
    print(result)  # do something with the predictions for each frame


# initialize a pipeline object
pipeline = InferencePipeline.init_with_workflow(
    api_key="API_KEY",
    workspace_name="workspace-name",
    workflow_id="workflow-id",
    video_reference=0,  # path to a video file, device id (int, usually 0 for built-in webcams), or RTSP stream URL
    on_prediction=my_sink
)
pipeline.start()  # start the pipeline
pipeline.join()  # wait for the pipeline thread to finish

Above, replace API_KEY with your Roboflow API key, and replace workspace-name and workflow-id with your Roboflow workspace name and Workflow ID.

To find these values, open your Roboflow Workflow and click "Deploy Workflow". Then, copy your workspace name and workflow ID from the code snippet that appears on the page.

When you run the code above, your Workflow will run on your video or video stream.
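
If your Workflow includes a visualization block, you can extend the sink to display annotated frames as they are processed. The sketch below is an illustrative alternative sink, not the only supported approach: it assumes your Workflow exposes an image output named output_image (rename the key to match your own outputs) and that OpenCV (opencv-python) is installed.

import cv2

from inference import InferencePipeline

def my_sink(result, video_frame):
    # "output_image" is a placeholder for whatever image output your Workflow defines;
    # image outputs are wrapped objects that expose the raw frame via .numpy_image.
    annotated = result.get("output_image")
    if annotated is not None:
        cv2.imshow("Workflow output", annotated.numpy_image)
        cv2.waitKey(1)
    else:
        print(result)  # fall back to printing the raw result

pipeline = InferencePipeline.init_with_workflow(
    api_key="API_KEY",
    workspace_name="workspace-name",
    workflow_id="workflow-id",
    video_reference=0,
    on_prediction=my_sink
)
pipeline.start()
pipeline.join()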

Process Batches of Data

You can efficiently process entire batches of data—directories of images and video files—using the Roboflow Batch Processing service. This fully managed solution requires no coding or local computation. Simply select your data and Workflow, and let Roboflow handle the rest.

To run the processing, install Inference CLI:

pip install inference-cli

Then you can ingest your data:

inference rf-cloud data-staging create-batch-of-images \
    --images-dir <your-images-dir-path> \
    --batch-id <your-batch-id>

Once your data is ingested, start the processing job:

inference rf-cloud batch-processing process-images-with-workflow \
    --workflow-id <workflow-id> \
    --batch-id <batch-id>

You can check the progress of the job with:

inference rf-cloud batch-processing show-job-details \
    --job-id <your-job-id>  # job-id will be displayed when you create a job

When the job is done, export the results:

inference rf-cloud data-staging export-batch \
    --target-dir <dir-to-export-result> \
    --batch-id <output-batch-of-a-job>
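
Once the export completes, the results are written to the directory you passed as --target-dir. The exact file layout and formats depend on your Workflow's outputs, so before writing any parsing code it can help to list what the export actually contains. A minimal sketch (the directory name is a placeholder):

from pathlib import Path

# Placeholder path: use the directory you passed to export-batch via --target-dir.
export_dir = Path("exported-results")

# Walk the export and list every file with its size, so you can see what the
# job produced before deciding how to parse it.
for path in sorted(export_dir.rglob("*")):
    if path.is_file():
        print(path.relative_to(export_dir), f"({path.stat().st_size} bytes)")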

Batch Processing can be driven from the UI, the CLI, or the REST API; the commands above use the CLI. See the Batch Processing documentation for all available options.
