Deploy a Workflow

You can deploy a Workflow in three ways:

  1. Send images to the Roboflow API for processing using your Workflow.

  2. Create a Roboflow Dedicated Deployment on infrastructure provisioned exclusively for your use.

  3. Run your Workflow on your own hardware using Roboflow Inference.

If you run your Workflow on your own hardware, you can run it on images, video files, and video streams. The video sources supported for on-device deployment are:

  • Webcams

  • RTSP streams

  • Video files
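When deploying on device, each of these sources is selected through the video_reference argument of InferencePipeline, shown later on this page. The specific values below are placeholders:

```python
# Webcam: an integer device index (0 is usually the built-in camera)
webcam_reference = 0

# RTSP stream: the stream URL (host, credentials, and path are placeholders)
rtsp_reference = "rtsp://192.168.1.10:554/stream"

# Video file: a path on disk
file_reference = "videos/example.mp4"
```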

You can deploy Workflows on any system on which you can deploy Inference. This includes:

  • NVIDIA Jetson

  • AWS EC2, GCP Cloud Engine, and Azure Virtual Machines

  • Raspberry Pi

Roboflow Enterprise customers have access to additional video stream options, such as running inference on Basler cameras. To learn more about our offerings, contact the Roboflow sales team.

Deploy a Workflow

To deploy a workflow, click the "Run Workflow" button in the top left corner of the Workflows editor. All deployment options are documented on this page.

The code snippets in your Workflows editor will be pre-filled with your Workflow URL and API key.

To learn more about usage limits for Workflows, refer to the Roboflow pricing page.

Deploy On an Image (Roboflow Cloud)

You can run your Workflow on single images using the Roboflow API.

First, install the Roboflow Inference SDK:

pip install inference-sdk

Then, create a new Python file and add the following code:

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="API_KEY"
)

result = client.run_workflow(
    workspace_name="workspace-name",
    workflow_id="workflow-id",
    images={
        "image": "YOUR_IMAGE.jpg"
    }
)

Above, replace API_KEY with your Roboflow API key, and replace workspace-name and workflow-id with your Roboflow workspace name and Workflow ID.

To find these values, open your Roboflow Workflow and click "Deploy Workflow". Then, copy your workspace name and workflow ID from the code snippet that appears on the page.
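To process a whole folder of images, you can loop over the files and call run_workflow once per image. A minimal sketch, where the list_images helper and the my_images folder are illustrative and not part of the SDK:

```python
from pathlib import Path

def list_images(folder, exts=(".jpg", ".jpeg", ".png")):
    """Collect image paths in a folder, sorted for reproducible ordering."""
    return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() in exts)

# for path in list_images("my_images"):
#     result = client.run_workflow(
#         workspace_name="workspace-name",
#         workflow_id="workflow-id",
#         images={"image": str(path)},
#     )
#     print(path.name, result)
```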

Deploy On an Image (On Device)

You can run your Workflow on single images on your own device.

This works on CPU and NVIDIA CUDA GPU devices. For the best performance, deploy on a GPU-enabled device such as an NVIDIA Jetson or a cloud server with an NVIDIA GPU.

First, install the Roboflow Inference CLI:

pip install inference inference-cli 

The installation process may take a few minutes.

Next, install Docker. Follow the official Docker installation instructions to install Docker on your machine.

Then, start an Inference server:

inference server start
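Once the command returns, the server listens on http://127.0.0.1:9001 by default. Before sending requests, you could confirm it is reachable with a stdlib-only check like this sketch (the helper is illustrative, not part of the Inference CLI):

```python
import urllib.error
import urllib.request

def server_ready(url="http://127.0.0.1:9001", timeout=2.0):
    """Return True if an HTTP server answers at the given URL."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, even if with an error status, so it is running.
        return True
    except (urllib.error.URLError, OSError):
        return False
```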

Then, create a new Python file and add the following code:

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001", # use local inference server
    api_key="API_KEY"
)

result = client.run_workflow(
    workspace_name="workspace-name",
    workflow_id="workflow-id",
    images={
        "image": "YOUR_IMAGE.jpg"
    }
)

Above, replace API_KEY with your Roboflow API key, and replace workspace-name and workflow-id with your Roboflow workspace name and Workflow ID.

To find these values, open your Roboflow Workflow and click "Deploy Workflow". Then, copy your workspace name and workflow ID from the code snippet that appears on the page.
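The keys of the images dictionary correspond to the image input names defined in your Workflow (here, image). If your Workflow has several image inputs, a tiny helper like this keeps the mapping explicit; the helper and input names are illustrative, not part of the SDK:

```python
def build_images(**named_paths):
    """Map Workflow image-input names to local file paths."""
    return {name: str(path) for name, path in named_paths.items()}

# result = client.run_workflow(
#     workspace_name="workspace-name",
#     workflow_id="workflow-id",
#     images=build_images(image="YOUR_IMAGE.jpg"),
# )
```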

Deploy with a Video Stream (RTSP, Webcam)

You can deploy your Workflow on frames from a video stream. This can be a webcam or an RTSP stream. You can also run your Workflow on video files.

First, install Inference:

pip install inference

It may take a few minutes for Inference to install.

Then, create a new Python file and add the following code:

# Import the InferencePipeline object
from inference import InferencePipeline

def my_sink(result, video_frame):
    print(result)  # do something with the predictions of each frame

# initialize a pipeline object
pipeline = InferencePipeline.init_with_workflow(
    api_key="API_KEY",
    workspace_name="workspace-name",
    workflow_id="workflow-id",
    video_reference=0, # device id (int, usually 0 for built-in webcams), a path to a video file, or an RTSP stream URL
    on_prediction=my_sink
)
pipeline.start()  # start the pipeline
pipeline.join()  # wait for the pipeline thread to finish

Above, replace API_KEY with your Roboflow API key, and replace workspace-name and workflow-id with your Roboflow workspace name and Workflow ID.

To find these values, open your Roboflow Workflow and click "Deploy Workflow". Then, copy your workspace name and workflow ID from the code snippet that appears on the page.

When you run the code above, your Workflow will run on your video or video stream.
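The my_sink callback above prints each frame's result; you can swap in any callable that accepts the result and the video frame. For example, a sketch that keeps a rolling log of recent results instead (the maxlen of 100 is arbitrary):

```python
from collections import deque

recent = deque(maxlen=100)  # keep only the last 100 frame results

def logging_sink(result, video_frame):
    """Store each frame's Workflow output for later inspection."""
    recent.append(result)

# pass on_prediction=logging_sink to InferencePipeline.init_with_workflow
```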
