Deploy a Model or Workflow
Learn how to deploy workflows and models trained on or uploaded to Roboflow.
We support both managed and self-hosted deployment of models and workflows.
Managed Deployments
These options leverage Roboflow's cloud infrastructure to run your models and workflows, eliminating the need for you to manage your own hardware or software.
Self-Hosted Deployment
You can also deploy your models and workflows on self-hosted Roboflow Inference, which provides greater control over your environment, resources, and latency.
This option requires infrastructure management and expertise.
What is Inference?
In computer vision, inference refers to the process of using a trained model to analyze new images or videos and make predictions. For example, an object detection model might be used to identify and locate objects in a video stream, or a classification model might be used to categorize images based on their content.
Roboflow Inference is an open-source project that provides a powerful and flexible framework for deploying computer vision models and workflows. It is the engine that powers most of Roboflow's managed deployment services. You can also self-host it or use it to deploy your vision workflows to edge devices. Roboflow Inference offers a range of features and capabilities, including:
Support for various model architectures and tasks, including object detection, classification, instance segmentation, and more.
Workflows, which let you build computer vision applications by combining different models, pre-built logic, and external applications, choosing from hundreds of building blocks.
Hardware acceleration for optimized performance on different devices, including CPUs, GPUs, and edge devices like NVIDIA Jetson.
Multiprocessing for efficient use of resources.
Video decoding for seamless processing of video streams.
An HTTP interface, APIs, and Docker images to simplify deployment.
Integration with Roboflow's hosted deployment options and the Roboflow platform.
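As a sketch of the HTTP interface mentioned above, the snippet below prepares a request for a self-hosted Inference server. The model ID, API key, and confidence parameter are placeholders you would replace with your own; the server URL assumes the default port used by the Inference Docker images.

```python
import base64

# Hypothetical values -- replace with your own model ID and API key.
MODEL_ID = "my-project/1"
API_KEY = "YOUR_API_KEY"
SERVER_URL = "http://localhost:9001"  # default port for a self-hosted Inference server


def build_infer_request(image_bytes: bytes, confidence: float = 0.5):
    """Prepare the URL and base64-encoded payload for the server's HTTP interface."""
    url = f"{SERVER_URL}/{MODEL_ID}?api_key={API_KEY}&confidence={confidence}"
    payload = base64.b64encode(image_bytes).decode("utf-8")
    return url, payload


# Actually sending the request requires a running server, e.g.:
#   docker run -p 9001:9001 roboflow/roboflow-inference-server-cpu
# then:
#   import requests
#   url, payload = build_infer_request(open("image.jpg", "rb").read())
#   predictions = requests.post(url, data=payload).json()
```

Swapping `SERVER_URL` for Roboflow's hosted endpoint is all it takes to move the same request from a self-hosted server to a managed deployment.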
What is a Workflow?
Workflows enable you to build complex computer vision applications by combining different models, pre-built logic, and external applications. They provide a visual, low-code environment for designing and deploying sophisticated computer vision pipelines.
With Workflows, you can:
Chain multiple models together to perform complex tasks.
Add custom logic and decision-making to your applications.
Integrate with external systems and APIs.
Track, count, time, measure, and visualize objects in images and videos.
Choosing the Right Deployment Option
There is a great guide to choosing the best deployment method for your use case in the Inference Getting Started guide.
The best deployment option for you depends on your specific needs and requirements. Consider the following factors when making your decision:
Scalability: If your application needs to handle varying levels of traffic or data volume, the Serverless API offers excellent scalability for real-time use cases; for workloads that are not time-sensitive, Batch Processing is a good option.
Latency: If you need low latency or video processing, dedicated deployments or self-hosted deployments with powerful hardware might be the best choice.
Control: Self-hosted deployments provide the most control over your environment and resources.
Expertise: Self-hosted deployments require more technical expertise to set up and manage.