Dedicated Deployments
A Dedicated Deployment is a remote server managed by Roboflow on which you can run computer vision models supported by the Roboflow Inference server. This includes object detection, segmentation, classification, and keypoint models trained on or uploaded to Roboflow, as well as foundation models like CLIP.
Dedicated Deployments allow you to have cloud servers allocated specifically for your use.
Dedicated Deployments are accessible with your workspace's API key. You can send inference requests to a Dedicated Deployment as if it were running locally.
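For example, a request from the Python `inference-sdk` package might look like the sketch below. The deployment URL and model ID are placeholders; substitute the values shown for your own Dedicated Deployment and project.

```python
# A minimal sketch, assuming the Python inference-sdk package is installed.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://example-deployment.roboflow.cloud",  # placeholder: your Dedicated Deployment URL
    api_key="YOUR_ROBOFLOW_API_KEY",                       # your workspace API key
)

# Run a model from your workspace against a local image.
result = client.infer("image.jpg", model_id="your-project/1")  # placeholder model ID
print(result)
```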
There are two types of Dedicated Deployments:
Dev CPU (a CPU-only machine with no GPU)
Dev GPU (a GPU-enabled machine)
All servers will run Roboflow Inference, our on-device inference server. Review the Roboflow Inference documentation to learn more about all of the features available.
Roboflow Inference automatically uses GPUs in your Dedicated Deployment if you chose a GPU-enabled deployment type.
You can provision, manage, and delete Dedicated Deployments in the Roboflow Workflows web application.
Roboflow Workflows is a low-code, web-based application builder for creating computer vision applications.
To create a Dedicated Deployment, first create a Roboflow Workflow. To do so, click the Workflows tab on the left side of the Roboflow dashboard, then click "Create Workflow":
Then, click on the "Running on Hosted API" link in the top left corner:
Click Dedicated Deployments to create and see your Dedicated Deployments:
Set a name for your Deployment, then choose whether you need a CPU or GPU.
Then, click "Create Dedicated Deployment".
Your Deployment will then be provisioned, which may take anywhere from a few seconds to a few minutes.
When your Deployment is ready, the status will be updated to Ready. You can then click "Connect" to use your Deployment with your Workflow in the Workflows editor:
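Once connected, you can also run your Workflow against the Dedicated Deployment from code. Below is a minimal sketch, again assuming the Python `inference-sdk` package; the deployment URL, workspace name, and Workflow ID are placeholders to replace with your own values.

```python
# A minimal sketch, assuming the Python inference-sdk package is installed.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://example-deployment.roboflow.cloud",  # placeholder: your Dedicated Deployment URL
    api_key="YOUR_ROBOFLOW_API_KEY",
)

results = client.run_workflow(
    workspace_name="your-workspace",    # placeholder workspace name
    workflow_id="your-workflow-id",     # placeholder Workflow ID
    images={"image": "image.jpg"},      # input image(s) your Workflow expects
)
print(results)
```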