Build a Workflow

PreviousCreate a WorkflowNextTest a Workflow

Last updated 2 months ago

Was this helpful?

A workflow is made up of blocks, which perform specific tasks such as running model inference, performing logic, or interfacing with external services.

For a deeper dive into the full list of available blocks, view our block documentation.

Overview

This guide walks through creating a four-block workflow that runs an object detection model, counts predictions, and visualizes the model results. Here’s the final workflow template to follow along.

Block Connections

Before we start building, it's important to understand how block connections work.

A block can only be added in a position where it can use a previous block as an input. For example, in the Detect, Count, and Visualize workflow shown above, the Property Definition block comes after the Object Detection block because it uses the model block as an input. The Bounding Box Visualization block sits to the right because it doesn't use the output of the Property Definition block, but does reference the model output.

The example Model Comparison workflow above has four distinct pathways: because no branch relies on blocks in another branch as inputs, each branch executes in parallel at runtime.
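
Under the hood, these connections are expressed as selectors in the Workflow's JSON definition (see the JSON Editor): each block names the upstream outputs it consumes. Below is a minimal sketch of the detect-count-and-visualize pipeline built in this guide, written as a Python dict. The block type identifiers and the "yolov8n-640" model ID are illustrative; check the JSON Editor for the exact schema it generates.

```python
# A minimal Workflow definition sketch -- block type names are illustrative.
workflow_definition = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # The model block consumes the Workflow's input image.
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            # Branch 1: connects to the model block via a selector on its output.
            "type": "roboflow_core/property_definition@v1",
            "name": "property_definition",
            "data": "$steps.model.predictions",
            "operations": [{"type": "SequenceLength"}],
        },
        {
            # Branch 2: also references the model output, not Branch 1,
            # so the two branches can execute in parallel at runtime.
            "type": "roboflow_core/bounding_box_visualization@v1",
            "name": "bounding_box_visualization",
            "image": "$inputs.image",
            "predictions": "$steps.model.predictions",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "count", "selector": "$steps.property_definition.output"},
        {"type": "JsonField", "name": "visualization", "selector": "$steps.bounding_box_visualization.image"},
    ],
}
```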

Building a Workflow

Object Detection Model

First, add an Object Detection Model block. You can choose between a public pre-trained model, such as YOLOv8n trained on COCO, or a fine-tuned model in your workspace. We’ll go ahead with the pre-trained yolov8n model to detect people and vehicles.

The object detection block has a required image parameter that determines what the model is inferring on. There are also several optional parameters; the core ones are described below, followed by a configuration sketch.

  • Class Filter: a list of classes the model will return. Note: the model only ever returns classes it was trained on; this filter lets you exclude unneeded classes.

  • Confidence: the minimum confidence threshold; predictions below this confidence will not be returned.

  • IoU Threshold: controls how much overlap is allowed between predictions; a higher threshold returns more overlapping predictions. A value of 0.9 keeps objects with up to 90% overlap, while 0.1 excludes objects with more than 10% overlap.

  • Max Detections: the maximum number of objects the model will return.

  • Class Agnostic NMS: whether overlap filtering should compare and exclude only objects of the same class, or objects across all classes.
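
Here is a sketch of the Object Detection Model step with those optional parameters filled in. The field names mirror the Workflows JSON schema but should be treated as assumptions; confirm them in the JSON Editor.

```python
# Object Detection Model step with the optional parameters above filled in.
detection_step = {
    "type": "roboflow_core/roboflow_object_detection_model@v1",
    "name": "model",
    "image": "$inputs.image",
    "model_id": "yolov8n-640",           # assumed public COCO checkpoint
    "class_filter": ["person", "car"],   # drop every other trained class
    "confidence": 0.4,                   # discard predictions below 40% confidence
    "iou_threshold": 0.3,                # suppress predictions overlapping by more than 30%
    "max_detections": 300,               # upper bound on returned objects
    "class_agnostic_nms": False,         # compare overlaps within the same class only
}
```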

Property Definition

The property definition block allows you to extract relevant information from your data, such as the image size, predicted classes, or number of detected objects. For this example, we’ll be counting the number of objects found by the object detection model.

For the Data property, reference the model predictions. For the Operations, select Count Items. This configuration will return the number of predictions made by the object detection model.
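
In the JSON definition this configuration looks roughly like the sketch below. "Count Items" in the UI maps to an operation entry in the JSON; the operation type shown is an assumption, so copy the exact value the JSON Editor generates.

```python
# Property Definition step counting the model's predictions.
count_step = {
    "type": "roboflow_core/property_definition@v1",
    "name": "property_definition",
    "data": "$steps.model.predictions",          # Data: the model predictions
    "operations": [{"type": "SequenceLength"}],  # Operations: Count Items (assumed name)
}
# Downstream blocks or Workflow outputs can read the count via the selector
# "$steps.property_definition.output".
```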

Bounding Box Visualization

Add a Bounding Box Visualization block to visualize the model results. For the image parameter, select the input image. For the predictions, select the model results. You can also change the color and size of the bounding boxes using the optional configuration properties.

Label Visualization

In addition to drawing bounding boxes, we’ll also want to display the class names of the predictions. To do this, add a Label Visualization block after the Bounding Box Visualization block. To draw both bounding boxes and labels on the same image, set the Label Visualization block's image input to the bounding_box_visualization image instead of the original input image. This draws the labels on top of the bounding boxes.

You can change the optional Text parameter to switch the display text between class name, confidence, or class name and confidence.
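
The two visualization steps chained together look roughly like this. Note that the Label Visualization's image points at the Bounding Box Visualization's output image rather than $inputs.image, so labels are drawn on top of the boxes. Type names and the text option value are illustrative.

```python
# Chained visualization steps -- the image selector on the second step
# references the first step's output, not the raw input.
visualization_steps = [
    {
        "type": "roboflow_core/bounding_box_visualization@v1",
        "name": "bounding_box_visualization",
        "image": "$inputs.image",                   # draw boxes on the raw input
        "predictions": "$steps.model.predictions",
    },
    {
        "type": "roboflow_core/label_visualization@v1",
        "name": "label_visualization",
        "image": "$steps.bounding_box_visualization.image",  # chain the images
        "predictions": "$steps.model.predictions",
        "text": "Class and Confidence",             # the optional Text parameter
    },
]
```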

Save Changes

When you have finished building your Workflow, click "Save Workflow." If the Workflow is deployed, your saved changes will start running on all devices where it has been deployed.

Now that you have a completed workflow, it's time to test it.
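
One way to exercise the Workflow outside the UI is the Python inference-sdk (see SDKs in these docs). A minimal sketch, assuming an API key and placeholder workspace and Workflow IDs:

```python
from inference_sdk import InferenceHTTPClient

# Assumes `pip install inference-sdk`, and that the workspace name and
# Workflow ID below are replaced with your own values.
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_ROBOFLOW_API_KEY",
)

result = client.run_workflow(
    workspace_name="my-workspace",             # hypothetical workspace name
    workflow_id="detect-count-and-visualize",  # hypothetical Workflow ID
    images={"image": "path/to/image.jpg"},     # keyed by the Workflow's image input
)

print(result)  # contains the prediction count and the visualization image
```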
