Inference

Leverage your custom trained model for cloud-hosted inference.

Training

See the documentation page for Roboflow's one-click training solution.

After training, your model is hosted at an endpoint you can POST images to for inference.

Inference

There are multiple ways to use your trained model for inference, all of which are accessible with quick links in the web UI after your model has finished training:

  • You can POST a base64-encoded image to your endpoint with curl or pass a URL via the query string.

  • We have a sample web app you can use as an example for integrating your endpoint into a custom application.

  • We have sample code for making requests to your inference endpoint in a variety of languages.

The Example Web App

The easiest way to familiarize yourself with the inference endpoint is to visit the Example Web App. To use the Web App, simply input your model and access_token. These will be pre-filled for you after training completes if you click through via the web UI.

Then select an image via Choose File. After you have chosen the settings you want, click Run Inference.

On the left side of the screen, you will see example JavaScript code for posting a base64-encoded image to the inference endpoint. Within the form portion of the Web App, you can experiment with changing different API parameters when posting to the API.

Obtaining Your Model Endpoint

To use the inference API, you will need your model endpoint and your API key. Your API key can be retrieved from your account page. Your model endpoint is a unique string composed of your individual or team identifier, the model ID, and the version number. The easiest way to retrieve it is via the web UI by clicking the "curl command" link:

The model endpoint is highlighted in blue. (The API key here has been replaced with [REMOVED])
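
As a concrete illustration, the code snippets later on this page use the placeholder endpoint xx-your-model--1. Assuming that placeholder follows the composition described above (identifier, model ID, and version number), a minimal Python sketch for assembling the full inference URL might look like this; substitute the values from your own "curl command" link:

# Hypothetical placeholder values copied from the snippets below;
# replace them with the endpoint and API key from your own account.
MODEL_ENDPOINT = "xx-your-model--1"  # identifier + model ID + version number
API_KEY = "YOUR_KEY"

INFER_URL = f"https://infer.roboflow.com/{MODEL_ENDPOINT}?access_token={API_KEY}"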

Using the Inference API

POST https://infer.roboflow.com/:model-endpoint

You can POST a base64 encoded image directly to your model endpoint. Or you can pass a URL as the image parameter in the query string if your image is already hosted elsewhere.
Request
Path Parameters

model-endpoint (string, optional): The unique identifier for your model. The easiest way to determine this is via the web UI's "Get curl command" link.

Query Parameters

image (string, optional): URL of the image to run inference on. Use if your image is hosted elsewhere. (Required when you don't POST a base64 encoded image in the request body.) Note: don't forget to URL-encode it.

classes (string, optional): Restrict the predictions to only those of certain classes, provided as a comma-separated string. Example: dog,cat. Default: not present (show all classes).

overlap (number, optional): The maximum percentage (on a scale of 0-100) that bounding box predictions of the same class are allowed to overlap before being combined into a single box. Default: 30.

confidence (number, optional): A threshold for the returned predictions on a scale of 0-100. A lower number returns more predictions; a higher number returns fewer, higher-certainty predictions. Default: 40.

stroke (number, optional): The width (in pixels) of the bounding box displayed around predictions (only has an effect when format is image). Default: 5.

labels (boolean, optional): Whether or not to display text labels on the predictions (only has an effect when format is image). Default: false.

format (string, optional): json returns an array of JSON predictions (see the Response section below); image returns an image with annotated predictions as a binary blob with a Content-Type of image/jpeg. Default: json.

access_token (string, required): Your API key (obtained via your account page).

Body Parameters

(string, optional): A base64 encoded image. (Required when you don't pass an image URL in the query parameters.)
Response
200: OK
JSON format predictions.
{
    "predictions": [{
        "x": 234.0,
        "y": 363.5,
        "width": 160,
        "height": 197,
        "class": "hand",
        "confidence": 0.943
    }, {
        "x": 504.5,
        "y": 363.0,
        "width": 215,
        "height": 172,
        "class": "hand",
        "confidence": 0.917
    }, {
        "x": 1112.5,
        "y": 691.0,
        "width": 139,
        "height": 52,
        "class": "hand",
        "confidence": 0.87
    }, {
        "x": 78.5,
        "y": 700.0,
        "width": 139,
        "height": 34,
        "class": "hand",
        "confidence": 0.404
    }]
}
403: Forbidden
If your access_token is not authorized to access the model.
{
    "Message": "User is not authorized to access this resource"
}
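
As a minimal sketch of how the query parameters documented above can be combined, the example below reuses the placeholder endpoint (xx-your-model--1), API key (YOUR_KEY), hosted image, and hand class used elsewhere on this page; the specific classes, confidence, overlap, stroke, and labels values are illustrative assumptions, not recommendations.

import requests
import urllib.parse

img_url = "https://i.imgur.com/PEEvqPN.png"

# JSON predictions restricted to the "hand" class, with a stricter
# confidence threshold and a tighter overlap limit than the defaults.
json_url = "".join([
    "https://infer.roboflow.com/xx-your-model--1",
    "?access_token=YOUR_KEY",
    "&image=" + urllib.parse.quote_plus(img_url),
    "&classes=hand",
    "&confidence=60",
    "&overlap=20"
])
print(requests.post(json_url).json())

# The same request with format=image returns an annotated JPEG
# (Content-Type: image/jpeg) that can be written straight to disk;
# stroke and labels only take effect in this mode.
image_url = "".join([
    "https://infer.roboflow.com/xx-your-model--1",
    "?access_token=YOUR_KEY",
    "&image=" + urllib.parse.quote_plus(img_url),
    "&format=image",
    "&labels=true",
    "&stroke=2"
])
with open("annotated.jpg", "wb") as f:
    f.write(requests.post(image_url).content)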

Code Snippets

For your convenience, we've provided code snippets for calling this endpoint in various programming languages. If you need help integrating the inference API into your project, don't hesitate to reach out.

All examples run inference against an example model with a model-endpoint of xx-your-model--1. You can easily find your own model endpoint by looking at the curl command shown in the Roboflow web interface after your model has finished training.

cURL

Linux or macOS

Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:

base64 YOUR_IMAGE.jpg | curl -d @- \
"https://infer.roboflow.com/xx-your-model--1?access_token=YOUR_KEY"

Inferring on an image hosted elsewhere on the web via its URL (don't forget to URL encode it):

curl -X POST "https://infer.roboflow.com/xx-your-model--1?\
access_token=YOUR_KEY&\
image=https%3A%2F%2Fi.imgur.com%2FPEEvqPN.png"

Windows

You will need to install curl for Windows and GNU's base64 tool for Windows. The easiest way to do this is to use the Git for Windows installer, which also includes the curl and base64 command line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation.

Then you can use the same commands as above. (In the classic Command Prompt, put each command on a single line; cmd.exe does not support the backslash line continuation shown above.)

Python

Uploading a Local Image

To install dependencies, run pip install requests pillow

import requests
import base64
import io
from PIL import Image

# Load Image with PIL
image = Image.open("YOUR_IMAGE.jpg").convert("RGB")

# Convert to JPEG Buffer
buffered = io.BytesIO()
image.save(buffered, quality=90, format="JPEG")

# Base 64 Encode
img_str = base64.b64encode(buffered.getvalue())
img_str = img_str.decode("ascii")

# Construct the URL
upload_url = "".join([
    "https://infer.roboflow.com/xx-your-model--1",
    "?access_token=YOUR_KEY",
    "&name=YOUR_IMAGE.jpg"
])

# POST to the API
r = requests.post(upload_url, data=img_str, headers={
    "Content-Type": "application/x-www-form-urlencoded"
})

# Output result
print(r.json())

Inferring on an Image Hosted Elsewhere via URL

import requests
import urllib.parse

# Construct the URL
img_url = "https://i.imgur.com/PEEvqPN.png"
upload_url = "".join([
    "https://infer.roboflow.com/xx-your-model--1",
    "?access_token=YOUR_KEY",
    "&image=" + urllib.parse.quote_plus(img_url)
])

# POST to the API
r = requests.post(upload_url)

# Output result
print(r.json())
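
If the request is rejected (for example, the 403 response documented above), the body will not contain a predictions array, so it can help to check the status code before parsing. A minimal sketch, assuming the same r object as in the snippets above:

if r.status_code == 200:
    # Each prediction includes x, y, width, height, class, and confidence
    for p in r.json()["predictions"]:
        print(p["class"], p["confidence"])
else:
    # e.g. 403 Forbidden when the access_token cannot access this model
    print("Request failed:", r.status_code, r.text)
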
Javascript

Node.js

We're using axios to perform the POST request in this example, so first run npm install axios to install the dependency.

Inferring on a Local Image

const axios = require("axios");
const fs = require("fs");

const image = fs.readFileSync("YOUR_IMAGE.jpg", {
    encoding: "base64"
});

axios({
    method: "POST",
    url: "https://infer.roboflow.com/xx-your-model--1",
    params: {
        access_token: "YOUR_KEY"
    },
    data: image,
    headers: {
        "Content-Type": "application/x-www-form-urlencoded"
    }
})
    .then(function(response) {
        console.log(response.data);
    })
    .catch(function(error) {
        console.log(error.message);
    });

Inferring on an Image Hosted Elsewhere via URL

const axios = require("axios");

axios({
    method: "POST",
    url: "https://infer.roboflow.com/xx-your-model--1",
    params: {
        access_token: "YOUR_KEY",
        image: "https://i.imgur.com/PEEvqPN.png"
    }
})
    .then(function(response) {
        console.log(response.data);
    })
    .catch(function(error) {
        console.log(error.message);
    });

Web

We are currently beta testing roboflow.js, a browser-based JavaScript library which, among other things, includes realtime in-browser inference. If you're interested, please reach out.

iOS

We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your iOS app, please click below to record your upvote.

Swift

Click here to request a Swift snippet.

Objective C

Click here to request an Objective-C snippet.

Android

We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your Android app, please click below to record your upvote.

Kotlin

Click here to request a Kotlin snippet.

Java

Click here to request a Java snippet.

Ruby

We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your Ruby app, please click here to record your upvote.

PHP

We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your PHP app, please click here to record your upvote.

Go

We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your Go app, please click here to record your upvote.

.NET

We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your .NET app, please click here to record your upvote.

Elixir

We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your Elixir app, please click here to record your upvote.