Local Deploy

The Roboflow Inference API can be deployed locally using Docker, with containers optimized for your target hardware.
Roboflow maintains and publishes Docker containers for a range of target environments. Below is the current list of containers available for local deployment.
This list only includes targets supported by the Inference 2.0 BETA and is not a comprehensive list of all Roboflow deploy targets. For documentation on other supported deploy targets, see Inference - Object Detection.

CPU

Accelerate your inference on x86_64 CPU architectures.
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu
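Once the container is running, it serves HTTP on port 9001 and you can send images to it for inference. The following is a minimal sketch of a request; the model ID your-model/1, the API key placeholder YOUR_API_KEY, and the image filename are assumptions you should replace with values from your own Roboflow project.
# Post a base64-encoded image to the local inference server (model ID and API key are placeholders)
base64 your_image.jpg | curl -d @- "http://localhost:9001/your-model/1?api_key=YOUR_API_KEY"
The server responds with the model's predictions as JSON.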

TRT

To use the TRT container, you must first install nvidia-container-runtime.
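On Debian or Ubuntu hosts, installation is typically a package install followed by a Docker restart. The commands below are a sketch that assumes NVIDIA's package repository is already configured on the machine; consult NVIDIA's installation guide for your distribution.
# Install the NVIDIA container runtime (assumes NVIDIA's apt repository is configured)
sudo apt-get update
sudo apt-get install -y nvidia-container-runtime
# Restart Docker so it picks up the new runtime
sudo systemctl restart docker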
docker run -it --rm -p 9001:9001 --gpus all roboflow/roboflow-inference-server-trt
The TRT image only runs on NVIDIA hardware that supports TensorRT 22.09.
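Before pulling the TRT image, you can check that Docker can see your GPU by running nvidia-smi inside a CUDA base container. The CUDA image tag below is only an example; any CUDA image compatible with your installed driver works.
# Verify that the --gpus flag exposes the GPU inside containers (example CUDA tag)
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi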