The Roboflow Inference API can be deployed locally using Docker, with containers optimized for your target hardware.
Roboflow maintains and publishes Docker containers to support various target environments. Below is the current list of containers available for local deployment via Docker.
Accelerate your inference on x86_64 CPU architectures.
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu
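Once the container is running, you can send inference requests to port 9001. Below is a minimal sketch, assuming the local server mirrors the hosted Roboflow routes; your-model, 1, and YOUR_API_KEY are placeholders for your own model ID, version number, and API key:

# POST a base64-encoded image to the local inference server
base64 your_image.jpg | curl -d @- \
  "http://localhost:9001/your-model/1?api_key=YOUR_API_KEY"

The server should respond with a JSON payload containing the model's predictions.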
Accelerate your inference on NVIDIA GPU devices with TensorRT. Note that the --gpus all flag requires the NVIDIA Container Toolkit to be installed on the host.
docker run -it --rm -p 9001:9001 --gpus all roboflow/roboflow-inference-server-trt
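Before starting the GPU container, you may want to confirm that Docker can see your GPU. A quick check, using a CUDA base image from Docker Hub (the exact image tag here is illustrative):

# Verify that the NVIDIA Container Toolkit exposes the GPU to containers
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi

If nvidia-smi prints your GPU details, the TensorRT container should be able to access the device as well.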