CPU
Deploy your model on CPU on your own infrastructure.
The inference API is available as a Docker container for 64-bit Intel and AMD machines. It is not compatible with macOS-based devices.
x86 CPU Installation
To install the Roboflow Inference Server on an x86 CPU, run the following command:
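A minimal sketch, assuming the roboflow/roboflow-inference-server-cpu image on Docker Hub and the server's default port of 9001:

```bash
# Pull (if needed) and start the CPU inference server, exposing its API on port 9001
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu
```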
If required, run the command with sudo privileges.
ARM CPU
Accelerate your inference on ARMv8 CPU architectures. This method is also confirmed to work with edge devices such as the Raspberry Pi.
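To install and run the server on an ARM CPU, the command has the same shape as the x86 one; this sketch assumes an ARM-specific image named roboflow/roboflow-inference-server-arm-cpu:

```bash
# Pull (if needed) and start the ARM CPU inference server, exposing its API on port 9001
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu
```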
If required, run the command with sudo privileges.
Use the Inference Server
You can use the Inference Server as a drop-in replacement for our Hosted Inference API (see those docs for example code snippets in several programming languages). Use the sample code from the Hosted API, but replace https://detect.roboflow.com with http://{INFERENCE-SERVER-IP}:9001 in the API call.
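For example, a minimal call with curl might look like the following; the model ID your-model/1, the API key, and the image filename are placeholders to replace with your own values:

```bash
# Base64-encode a local image and POST it to the on-premise inference server
base64 your_image.jpg | curl -d @- \
  "http://{INFERENCE-SERVER-IP}:9001/your-model/1?api_key=YOUR_API_KEY"
```

Because the server is a drop-in replacement for the Hosted Inference API, the JSON predictions it returns have the same shape as the hosted endpoint's.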
Note: The first call to a model will take a few seconds to download your weights and initialize them; subsequent predictions will run faster.