Enterprise CPU

Deploy your model on CPU devices on your own infrastructure.

Installing and Using the CPU Inference Server

The inference API is available as a Docker container for 64-bit Intel and AMD machines; it is not compatible with macOS-based devices. To install, pull the container:
sudo docker pull roboflow/inference-server:cpu
Then run it:
sudo docker run --net=host roboflow/inference-server:cpu
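If you prefer not to share the host network stack, publishing the inference port explicitly should also work. This is a minimal sketch, assuming the server listens on port 9001 inside the container (as implied by the API examples below):
# Hedged alternative to --net=host: publish only the inference port
sudo docker run -p 9001:9001 roboflow/inference-server:cpu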
You can now use the Inference Server as a drop-in replacement for our Hosted Inference API (see those docs for example code snippets in several programming languages). Use the sample code from the Hosted API, but replace https://detect.roboflow.com with http://{INFERENCE-SERVER-IP}:9001 in the API call. For example:
base64 YOUR_IMAGE.jpg | curl -d @- \
"http://10.0.0.1:9001/your-model/42?api_key=YOUR_KEY"
Note: The first call to a model will take a few seconds to download your weights and initialize them; subsequent predictions will be much quicker.
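To see this one-time initialization cost yourself, you can time consecutive requests. The loop below is a rough sketch using curl's built-in timing; it assumes the placeholder image, model, version, and API key from the example above have been filled in:
# Hedged sketch: the first request triggers the weight download and initialization,
# the second should return much faster
for i in 1 2; do
  base64 YOUR_IMAGE.jpg | curl -s -o /dev/null -w "request $i: %{time_total}s\n" -d @- \
    "http://10.0.0.1:9001/your-model/42?api_key=YOUR_KEY"
done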

Handling Large Images With Tiling

In some cases, you might need to run inference on very large images, where accuracy can degrade significantly. In these cases, you'll want the inference server to slice the images into smaller tiles before running inference to improve accuracy.
To enable tiling, add a query parameter containing a pixel dimension to your request. The server uses that value as both the width and height of each tile. For example, the query parameter &tile=500 slices your image into 500x500 pixel tiles before running inference.
Full curl request example:
# slices your image into [YOUR TILE DIMENSIONS] x [YOUR TILE DIMENSIONS] pixel tiles before running inference
base64 your_img.jpg | curl -d @- "http://localhost:9001/[YOUR MODEL]/[YOUR VERSION]?api_key=[YOUR API KEY]&tile=[YOUR TILE DIMENSIONS]"
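If you're not sure which tile size works best for your images, you can issue the same request with a few candidate sizes and compare the results. This is a sketch, assuming the placeholders are filled in as in the example above and that the server returns JSON like the Hosted Inference API:
# Hedged sketch: try a few tile sizes and save each response for comparison
for size in 300 500 800; do
  base64 your_img.jpg | curl -s -d @- \
    "http://localhost:9001/[YOUR MODEL]/[YOUR VERSION]?api_key=[YOUR API KEY]&tile=${size}" \
    -o "response_tile_${size}.json"
done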