Build a Workflow
A workflow is made up of blocks, which perform specific tasks such as running model inference, performing logic, or interfacing with external services.
For a deeper dive into the available blocks, see our block documentation.
This guide walks through creating a four-block workflow that runs an object detection model, counts the predictions, and visualizes the model results. Here’s the final workflow template to follow along.
Before we start building, it's important to understand how block connections work.
A block can only be added in a position where it uses a previous block's output as an input. For example, in the workflow shown above, the Property Definition block comes after the Object Detection block because it uses the model block as an input. The Bounding Box Visualization block sits to the right because it doesn't use the output of the Property Definition block, but does reference the model output.
In the example workflow above, we have four distinct pathways: each branch executes in parallel at runtime because it doesn't rely on the other branches' blocks as inputs.
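To make the ordering rule concrete, here is a small sketch (plain Python, not Roboflow's implementation) that models the example workflow as a dependency graph and groups blocks into stages. Blocks in the same stage don't depend on each other, so they can run in parallel; the block names mirror the workflow above.

```python
# Model each block as a node mapped to the blocks it consumes as inputs.
from graphlib import TopologicalSorter

dependencies = {
    "object_detection": {"input_image"},
    "property_definition": {"object_detection"},
    "bounding_box_visualization": {"input_image", "object_detection"},
    "label_visualization": {"bounding_box_visualization", "object_detection"},
}

def execution_stages(deps):
    """Group blocks into stages; blocks in the same stage have no
    dependencies on each other, so they may execute in parallel."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    stages = []
    while ts.is_active():
        ready = list(ts.get_ready())
        stages.append(sorted(ready))
        ts.done(*ready)
    return stages

for stage in execution_stages(dependencies):
    print(stage)
```

Note that the Property Definition and Bounding Box Visualization blocks land in the same stage: neither uses the other's output, which is why those branches can execute in parallel.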
First, add an Object Detection Model block. You can choose between a public pre-trained model, such as YOLOv8n trained on COCO, or a fine-tuned model in your workspace. We’ll use the pre-trained YOLOv8n model to detect people and vehicles.
The Object Detection Model block has a required image parameter that determines what the model infers on. There are several optional parameters; the core ones are described below:
Class Filter: the list of classes the model will return. Note: the model can only return classes it was trained on; this filter lets you exclude unneeded classes.
Confidence: predictions below this confidence threshold will not be returned.
IoU Threshold: controls how much overlap is allowed between predictions. A higher threshold returns more overlapping predictions: 0.9 means predictions overlapping by up to 90% are returned, while 0.1 means predictions overlapping by more than 10% are suppressed.
Max Detections: the maximum number of objects the model will return.
Class Agnostic NMS: whether overlap filtering compares and excludes only predictions of the same class, or predictions across all classes.
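The parameters above all act as filters on the model's raw output. The following sketch shows the general idea in plain Python; the `Detection` structure and function names are assumptions for illustration, not Roboflow's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    cls: str
    confidence: float
    box: tuple  # (x1, y1, x2, y2)

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(dets, class_filter=None, confidence=0.4,
                      iou_threshold=0.3, max_detections=300,
                      class_agnostic_nms=True):
    # Class Filter and Confidence: drop unwanted classes and weak predictions.
    dets = [d for d in dets
            if d.confidence >= confidence
            and (class_filter is None or d.cls in class_filter)]
    # Non-max suppression: keep the highest-confidence boxes and suppress
    # others that overlap a kept box by more than iou_threshold
    # (compared per class unless class-agnostic NMS is enabled).
    dets.sort(key=lambda d: d.confidence, reverse=True)
    kept = []
    for d in dets:
        if all(iou(d.box, k.box) <= iou_threshold
               for k in kept
               if class_agnostic_nms or k.cls == d.cls):
            kept.append(d)
    # Max Detections caps the final count.
    return kept[:max_detections]
```

For example, two heavily overlapping "person" boxes collapse to one via NMS, and a low-confidence "car" is dropped by the confidence threshold.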
The Property Definition block allows you to extract relevant information from your data, such as the image size, the predicted classes, or the number of detected objects. For this example, we’ll count the number of objects found by the object detection model.
For the Data property, reference the model predictions. For the Operations, select Count Items. This configuration will return the number of predictions made by the object detection model.
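Conceptually, this configuration reduces to a very small operation: take the list of predictions and count its elements. The sketch below illustrates that in plain Python; the prediction structure is an assumption for this example, not Roboflow's output format.

```python
# Hypothetical predictions list, standing in for the model output
# referenced by the Data property.
predictions = [
    {"class": "person", "confidence": 0.91},
    {"class": "person", "confidence": 0.84},
    {"class": "car", "confidence": 0.78},
]

def count_items(data):
    """What the Count Items operation computes: the number of elements."""
    return len(data)

print(count_items(predictions))  # 3
```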
Add a Bounding Box Visualization block to visualize the model results. For the image parameter, select the input image. For the predictions parameter, select the model results. You can optionally change the color and size of the bounding boxes using the configuration properties.
In addition to drawing bounding boxes, we’ll also want to display the class names of the predictions. To do this, add a Label Visualization block after the Bounding Box Visualization block. To draw both bounding boxes and labels on the same image, set the Label Visualization block's image input to the bounding_box_visualization image instead of the original input image. This draws the labels on top of the bounding boxes.
You can change the optional Text parameter to display the class name, the confidence, or both.
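The Text parameter only changes the string drawn next to each box. A minimal sketch of that choice, with hypothetical mode names used for illustration:

```python
def label_text(cls, confidence, mode="Class"):
    """Compose the label drawn for one prediction, depending on the
    chosen Text mode (mode names here are illustrative)."""
    if mode == "Class":
        return cls
    if mode == "Confidence":
        return f"{confidence:.2f}"
    if mode == "Class and Confidence":
        return f"{cls} {confidence:.2f}"
    raise ValueError(f"unknown Text mode: {mode}")

print(label_text("person", 0.91, "Class and Confidence"))  # person 0.91
```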
When you have finished building your Workflow, click "Save Workflow." If you have deployed the Workflow, the saved version will start running on all devices where it is deployed.
Now that you have a completed workflow, it's time to test it.