Run inference on your object detection models hosted on Roboflow.
To run inference through our hosted API using Python, use the roboflow Python package:
```python
from roboflow import Roboflow

rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("MODEL_ENDPOINT")
model = project.version(VERSION).model

# infer on a local image
print(model.predict("your_image.jpg", confidence=40, overlap=30).json())

# visualize your prediction
# model.predict("your_image.jpg", confidence=40, overlap=30).save("prediction.jpg")

# infer on an image hosted elsewhere
# print(model.predict("URL_OF_YOUR_IMAGE", hosted=True, confidence=40, overlap=30).json())
```
Linux or macOS
Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:
You will need curl for Windows and GNU's base64 tool for Windows. The easiest way to get both is the Git for Windows installer, which includes the curl and base64 command-line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation.
Then you can use the same commands as above.
Node.js
We're using axios to perform the POST request in this example, so first run npm install axios to install the dependency.
```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

namespace InferenceLocal
{
    class InferenceLocal
    {
        static void Main(string[] args)
        {
            byte[] imageArray = System.IO.File.ReadAllBytes(@"YOUR_IMAGE.jpg");
            string encoded = Convert.ToBase64String(imageArray);
            byte[] data = Encoding.ASCII.GetBytes(encoded);

            string API_KEY = "";                 // Your API Key
            string MODEL_ENDPOINT = "dataset/v"; // Set model endpoint

            // Construct the URL
            string uploadURL =
                "https://detect.roboflow.com/" + MODEL_ENDPOINT
                + "?api_key=" + API_KEY
                + "&name=YOUR_IMAGE.jpg";

            // Service Request Config
            ServicePointManager.Expect100Continue = true;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

            // Configure Request
            WebRequest request = WebRequest.Create(uploadURL);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = data.Length;

            // Write Data
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            // Get Response
            string responseContent = null;
            using (WebResponse response = request.GetResponse())
            using (Stream stream = response.GetResponseStream())
            using (StreamReader sr = new StreamReader(stream))
            {
                responseContent = sr.ReadToEnd();
            }

            Console.WriteLine(responseContent);
        }
    }
}
```
Inferring on an Image Hosted Elsewhere via URL
```csharp
using System;
using System.IO;
using System.Net;
using System.Web;

namespace InferenceHosted
{
    class InferenceHosted
    {
        static void Main(string[] args)
        {
            string API_KEY = "";                 // Your API Key
            string imageURL = "https://i.ibb.co/jzr27x0/YOUR-IMAGE.jpg";
            string MODEL_ENDPOINT = "dataset/v"; // Set model endpoint

            // Construct the URL
            string uploadURL =
                "https://detect.roboflow.com/" + MODEL_ENDPOINT
                + "?api_key=" + API_KEY
                + "&image=" + HttpUtility.UrlEncode(imageURL);

            // Service Point Config
            ServicePointManager.Expect100Continue = true;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

            // Configure Http Request
            WebRequest request = WebRequest.Create(uploadURL);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = 0;

            // Get Response
            string responseContent = null;
            using (WebResponse response = request.GetResponse())
            using (Stream stream = response.GetResponseStream())
            using (StreamReader sr = new StreamReader(stream))
            {
                responseContent = sr.ReadToEnd();
            }

            Console.WriteLine(responseContent);
        }
    }
}
```
We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your Elixir app, please click here to record your upvote.
Response Object Format
The hosted API inference route returns a JSON object containing an array of predictions. Each prediction has the following properties:
x = the horizontal center point of the detected object
y = the vertical center point of the detected object
width = the width of the bounding box
height = the height of the bounding box
class = the class label of the detected object
confidence = the model's confidence that the detected object has the correct label and accurate position
Here is an example response object from the REST API:
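The sample below is illustrative only; the class names and values are made up for demonstration, and your model will return its own:

```json
{
    "predictions": [
        {
            "x": 234.0,
            "y": 363.5,
            "width": 160,
            "height": 197,
            "class": "dog",
            "confidence": 0.943
        }
    ]
}
```

Note that confidence in the response is reported on a 0-1 scale, while the confidence query parameter is specified on a 0-100 scale.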
You can POST a base64 encoded image directly to your model endpoint. Or you can pass a URL as the image parameter in the query string if your image is already hosted elsewhere.
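As a sketch of both options using only Python's standard library (the endpoint, key, and file names below are placeholders; substitute your own):

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_KEY = "API_KEY"            # placeholder: your API key
MODEL_ENDPOINT = "dataset/1"   # placeholder: datasetSlug/version

def build_infer_url(endpoint, **params):
    """Construct the hosted inference URL with URL-encoded query parameters."""
    return f"https://detect.roboflow.com/{endpoint}?{urlencode(params)}"

def infer_local(path):
    """Option 1: POST a base64 encoded image in the request body."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read())
    req = Request(
        build_infer_url(MODEL_ENDPOINT, api_key=API_KEY),
        data=encoded,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.read()

def infer_hosted(image_url):
    """Option 2: pass the URL of an already-hosted image as the image parameter."""
    req = Request(
        build_infer_url(MODEL_ENDPOINT, api_key=API_KEY, image=image_url),
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.read()
```

Note that urlencode takes care of URL-encoding the image parameter for you.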
Path Parameters
Name
Type
Description
datasetSlug
string
The url-safe version of the dataset name. You can find it in the web UI by looking at the URL on the main project view or by clicking the "Get curl command" button in the train results section of your dataset version after training your model.
version
number
The version number identifying the version of your dataset
Query Parameters
Name
Type
Description
image
string
URL of the image to add. Use if your image is hosted elsewhere. (Required when you don't POST a base64 encoded image in the request body.)
Note: don't forget to URL-encode it.
classes
string
Restrict the predictions to only those of certain classes. Provide as a comma-separated string.
Example: dog,cat
Default: not present (show all classes)
overlap
number
The maximum percentage (on a scale of 0-100) that bounding box predictions of the same class are allowed to overlap before being combined into a single box.
Default: 30
confidence
number
A threshold for the returned predictions on a scale of 0-100. A lower number will return more predictions. A higher number will return fewer high-certainty predictions.
Default: 40
stroke
number
The width (in pixels) of the bounding box displayed around predictions (only has an effect when format is image).
Default: 1
labels
boolean
Whether or not to display text labels on the predictions (only has an effect when format is image).
Default: false
format
string
json - returns an array of JSON predictions. (See response format tab).
image - returns an image with annotated predictions as a binary blob with a Content-Type of image/jpeg.
Default: json
api_key
string
Your API key (obtained via your workspace API settings page)
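As an example of combining several of the query parameters above (all values here are illustrative placeholders):

```python
from urllib.parse import urlencode

# Illustrative values only; substitute your own endpoint and API key
params = {
    "api_key": "API_KEY",
    "image": "https://example.com/YOUR_IMAGE.jpg",
    "classes": "dog,cat",   # restrict predictions to these classes
    "confidence": 50,       # drop predictions below 50% confidence
    "overlap": 30,          # merge same-class boxes overlapping more than 30%
    "format": "json",
}
request_url = "https://detect.roboflow.com/dataset/1?" + urlencode(params)
print(request_url)
```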
Request Body
Name
Type
Description
string
A base64 encoded image. (Required when you don't pass an image URL in the query parameters).
If your api_key is missing or invalid, the API responds with an error like this:

```json
{
    "Message": "User is not authorized to access this resource"
}
```
Drawing a Box from the Inference API JSON Output
Frameworks and packages for rendering bounding boxes can differ in positional formats. Given the response JSON object's properties, a bounding box can always be drawn using some combination of the following rules:
the center point will always be (x,y)
the corner points (x1, y1) and (x2, y2) can be found using:
x1 = x - (width/2)
y1 = y - (height/2)
x2 = x + (width/2)
y2 = y + (height/2)
The corner-points approach is a common pattern, seen in libraries such as Pillow, when building the box object used to render bounding boxes on an image.
Don't forget to iterate through all detections found when working with predictions!
```python
# example: computing the corner-point box tuple used by Pillow
for bounding_box in detections:
    x1 = bounding_box['x'] - bounding_box['width'] / 2
    x2 = bounding_box['x'] + bounding_box['width'] / 2
    y1 = bounding_box['y'] - bounding_box['height'] / 2
    y2 = bounding_box['y'] + bounding_box['height'] / 2
    box = (x1, y1, x2, y2)  # Pillow orders corners (left, upper, right, lower)
```
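Building on the loop above, here is a minimal sketch of rendering the boxes with Pillow's ImageDraw; it assumes detections is the predictions array parsed from the JSON response:

```python
from PIL import Image, ImageDraw

def draw_predictions(image, detections, stroke=1):
    """Draw one rectangle per prediction onto a Pillow image, in place."""
    draw = ImageDraw.Draw(image)
    for det in detections:
        # convert center point + size to corner points
        x1 = det["x"] - det["width"] / 2
        y1 = det["y"] - det["height"] / 2
        x2 = det["x"] + det["width"] / 2
        y2 = det["y"] + det["height"] / 2
        # Pillow expects corners ordered (x1, y1, x2, y2): top-left, then bottom-right
        draw.rectangle((x1, y1, x2, y2), outline="red", width=stroke)
    return image
```

Pillow's rectangle takes the top-left corner first, so the tuple must be ordered (x1, y1, x2, y2), not (x1, x2, y1, y2).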