Deploy a Workflow
You can deploy a Workflow in four ways:
Send images to the Roboflow API for processing using your Workflow.
Create a Dedicated Deployment on infrastructure provisioned exclusively for your use.
Run your Workflow on your own hardware using Inference.
Schedule a Batch Processing job to automate the processing of large amounts of data without coding.
If you run your Workflow on your own hardware, you can run it on both images and video files (including streams from regular webcams and professional CCTV cameras).
By choosing on-premises deployment, you can run Workflows on any system where you can deploy Inference. This includes:
NVIDIA Jetson
AWS EC2, GCP Compute Engine, and Azure Virtual Machines
Raspberry Pi
To deploy a workflow, click the "Deploy" button in the top left corner of the Workflows editor. All deployment options are documented on this page.
The code snippets in the Workflows editor are pre-filled with your Workflow URL and API key.
You can run your Workflow on single images using the Roboflow API or local Inference server.
First, install the Roboflow Inference SDK:
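If you are working outside the editor, the SDK is available on PyPI:

```shell
pip install inference-sdk
```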
Then, create a new Python file and add the following code:
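A minimal sketch of such a file, using the Inference SDK's `InferenceHTTPClient`. The API URL, workspace name, Workflow ID, input name (`image`), and image path are placeholders — copy the exact values from the snippet in your editor:

```python
from inference_sdk import InferenceHTTPClient

# Point the client at the hosted Roboflow API, or at http://localhost:9001
# if you are running a local Inference server.
client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key="API_KEY",  # replace with your Roboflow API key
)

result = client.run_workflow(
    workspace_name="workspace-name",  # replace with your workspace name
    workflow_id="workflow-id",        # replace with your Workflow ID
    images={"image": "path/to/image.jpg"},  # input name -> local path or URL
)

print(result)
```

To target a local Inference server instead of the hosted API, only the `api_url` changes; the rest of the call is identical.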
Above, replace `API_KEY` with your Roboflow API key. Replace `workspace-name` and `workflow-id` with your Roboflow workspace name and Workflow ID.
To find these values, open your Roboflow Workflow and click "Deploy Workflow". Then, copy your workspace name and workflow ID from the code snippet that appears on the page.
Local execution works on CPU and NVIDIA CUDA GPU devices. For the best performance, deploy on a GPU-enabled device such as an NVIDIA Jetson or a cloud server with an NVIDIA GPU.
You can deploy your Workflow on frames from a video stream. This can be a webcam or an RTSP stream. You can also run your Workflow on video files.
First, install Inference:
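Inference is distributed on PyPI:

```shell
pip install inference
```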
It may take a few minutes for Inference to install.
Then, create a new Python file and add the following code:
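A sketch of such a file, assuming Inference's `InferencePipeline` with its `init_with_workflow` constructor; the workspace name, Workflow ID, and video source are placeholders to copy from your editor:

```python
from inference import InferencePipeline

def on_prediction(result, video_frame):
    # Called for every processed frame; result holds the Workflow outputs.
    print(result)

pipeline = InferencePipeline.init_with_workflow(
    api_key="API_KEY",                # replace with your Roboflow API key
    workspace_name="workspace-name",  # replace with your workspace name
    workflow_id="workflow-id",        # replace with your Workflow ID
    video_reference=0,  # webcam index; use an RTSP URL or a file path instead
    on_prediction=on_prediction,
)

pipeline.start()
pipeline.join()
```

The `video_reference` argument accepts a webcam index, an RTSP URL, or a path to a video file, which is what lets the same script cover streams and files.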
Above, replace `API_KEY` with your Roboflow API key. Replace `workspace-name` and `workflow-id` with your Roboflow workspace name and Workflow ID.
To find these values, open your Roboflow Workflow and click "Deploy Workflow". Then, copy your workspace name and workflow ID from the code snippet that appears on the page.
When you run the code above, your Workflow will run on your video or video stream.
You can efficiently process entire batches of data—directories of images and video files—using the Roboflow Batch Processing service. This fully managed solution requires no coding or local computation. Simply select your data and Workflow, and let Roboflow handle the rest.
To run the processing, install Inference CLI:
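The CLI is distributed on PyPI:

```shell
pip install inference-cli
```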
Then you can ingest your data:
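A sketch of the ingestion step, assuming the CLI's `rf-cloud data-staging` command group; the directory and batch ID are placeholders (check `inference rf-cloud data-staging --help` for the exact flags on your version):

```shell
inference rf-cloud data-staging create-batch-of-images \
  --images-dir ./my-images \
  --batch-id my-batch
```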
Once your data is ingested, start the processing job:
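A sketch of starting the job, assuming the CLI's `rf-cloud batch-processing` command group; the Workflow ID and batch ID are placeholders:

```shell
inference rf-cloud batch-processing process-images-with-workflow \
  --workflow-id my-workflow \
  --batch-id my-batch
```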
Progress of the job can be displayed using:
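For example, assuming the job ID printed when the job was created (a placeholder here):

```shell
inference rf-cloud batch-processing show-job-details --job-id my-job
```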
And when the job is done, export the results:
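A sketch of the export step; the output batch ID comes from the job details, and the target directory is a placeholder:

```shell
inference rf-cloud data-staging export-batch \
  --target-dir ./output \
  --batch-id my-output-batch
```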
If you run locally, follow the Inference installation instructions to install Docker on your machine and start an Inference server:
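With Docker installed, the Inference CLI can launch the server for you:

```shell
pip install inference-cli
inference server start  # pulls and runs the Inference server Docker image
```

Once the server is up, point your code at `http://localhost:9001` instead of the hosted API URL.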
We support UI, CLI, and REST API interactions with Batch Processing. Below, we present the CLI commands.