Keypoint Detection
Run inference on your keypoint detection models hosted on Roboflow.
To run inference through our hosted API using Python, use the roboflow Python package:
from roboflow import Roboflow
rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("MODEL_ENDPOINT")
model = project.version(VERSION).model
# infer on a local image
print(model.predict("your_image.jpg", confidence=40, overlap=30).json())
# visualize your prediction
# model.predict("your_image.jpg", confidence=40, overlap=30).save("prediction.jpg")
# infer on an image hosted elsewhere
# print(model.predict("URL_OF_YOUR_IMAGE", hosted=True, confidence=40, overlap=30).json())

Linux or MacOS
Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:
base64 YOUR_IMAGE.jpg | curl -d @- \
"https://detect.roboflow.com/your-model/42?api_key=YOUR_KEY"

Inferring on an image hosted elsewhere on the web via its URL (don't forget to URL-encode it):
curl -X POST "https://detect.roboflow.com/your-model/42?\
api_key=YOUR_KEY&\
image=https%3A%2F%2Fi.imgur.com%2FPEEvqPN.png"

Windows
You will need to install curl for Windows and GNU's base64 tool for Windows. The easiest way to get both is the Git for Windows installer, which includes the curl and base64 command-line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation.
Then you can use the same commands as above.
Node.js
We're using axios to perform the POST request in this example so first run npm install axios to install the dependency.
Inferring on a Local Image
Inferring on an Image Hosted Elsewhere via URL
Web
We have realtime on-device inference available via roboflow.js; see the documentation here.
Kotlin
Inferring on a Local Image
Inferring on an Image Hosted Elsewhere via URL
Java
Inferring on a Local Image
Inferring on an Image Hosted Elsewhere via URL
We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your Elixir app, please click here to record your upvote.
Response Object Format
The hosted API inference route returns a JSON object containing an array of predictions. Each prediction has the following properties:
x = the horizontal center point of the detected object
y = the vertical center point of the detected object
width = the width of the bounding box
height = the height of the bounding box
class = the class label of the detected object
confidence = the model's confidence that the detected object has the correct label and position coordinates
keypoints = an array of keypoint predictions, each with:
x = horizontal position of the keypoint (relative to the image's top-left corner)
y = vertical position of the keypoint (relative to the image's top-left corner)
class_name = name of the keypoint
class_id = id of the keypoint; maps to the skeleton vertices in the version record, used to map vertex colors and skeleton edges (see View your Project Version)
confidence = confidence that the keypoint has the correct position and is visible (not occluded or deleted)
Here is an example response object from the REST API:
The image attribute contains the height and width of the image sent for inference. You may need to use these values for bounding box calculations.
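As a sketch of working with this structure, the center-based coordinates can be converted to corner coordinates for drawing or cropping. The response dict below is illustrative only (made-up values, not real model output), but it follows the fields documented above:

```python
# Convert a prediction's center-based box to corner coordinates and walk its
# keypoints. The sample response below is illustrative, not real model output.
def box_corners(pred):
    """Return (x_min, y_min, x_max, y_max) from a center-based prediction."""
    x, y, w, h = pred["x"], pred["y"], pred["width"], pred["height"]
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

response = {
    "predictions": [
        {
            "x": 320.0, "y": 240.0, "width": 100.0, "height": 80.0,
            "class": "person", "confidence": 0.92,
            "keypoints": [
                {"x": 310.0, "y": 215.0, "class_name": "nose",
                 "class_id": 0, "confidence": 0.88},
            ],
        }
    ],
    "image": {"width": 640, "height": 480},
}

for pred in response["predictions"]:
    print(pred["class"], box_corners(pred))
    for kp in pred["keypoints"]:
        print("  keypoint:", kp["class_name"], (kp["x"], kp["y"]))
```

The `image` width and height are what you would clamp corner coordinates against if a box extends past the image edge.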
Inference API Parameters
Using the Inference API
POST https://detect.roboflow.com/:datasetSlug/:versionNumber
You can POST a base64 encoded image directly to your model endpoint. Or you can pass a URL as the image parameter in the query string if your image is already hosted elsewhere.
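As a sketch of how those two request shapes are assembled, using the `your-model`/`42` placeholders from the curl examples above (this only builds the request URLs; an actual call would POST them with an HTTP client and, for the first shape, the base64 string as the request body):

```python
import base64
from urllib.parse import urlencode

API_URL = "https://detect.roboflow.com/your-model/42"  # :datasetSlug/:versionNumber

# Shape 1: base64-encode a local image for the request body.
# A real image would be read with: open("your_image.jpg", "rb").read()
image_bytes = b"\xff\xd8\xff..."  # placeholder bytes, not a real JPEG
body = base64.b64encode(image_bytes).decode("utf-8")
url = API_URL + "?" + urlencode({"api_key": "YOUR_KEY", "confidence": 40, "overlap": 30})
print("POST", url)  # body carries the base64 string

# Shape 2: pass a hosted image's URL in the query string;
# urlencode percent-encodes the URL for you.
params = {"api_key": "YOUR_KEY", "image": "https://i.imgur.com/PEEvqPN.png"}
url = API_URL + "?" + urlencode(params)
print("POST", url)  # no request body needed
```

Note that the `image` value comes out percent-encoded exactly as in the curl example above, which is what the API expects.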
Path Parameters
datasetSlug
string
The URL-safe version of the dataset name. You can find it in the web UI by looking at the URL on the main project view, or by clicking the "Get curl command" button in the train results section of your dataset version after training your model.
version
number
The version number identifying the version of your dataset
Query Parameters
image
string
URL of the image to run inference on. Use if your image is hosted elsewhere. (Required when you don't POST a base64 encoded image in the request body.) Note: don't forget to URL-encode it.
classes
string
Restrict the predictions to only those of certain classes. Provide as a comma-separated string. Example: dog,cat Default: not present (show all classes)
overlap
number
The maximum percentage (on a scale of 0-100) that bounding box predictions of the same class are allowed to overlap before being combined into a single box. Default: 30
confidence
number
A threshold for the returned predictions on a scale of 0-100. A lower number will return more predictions. A higher number will return fewer high-certainty predictions. Default: 40
stroke
number
The width (in pixels) of the bounding box displayed around predictions (only has an effect when format is image).
Default: 1
labels
boolean
Whether or not to display text labels on the predictions (only has an effect when format is image).
Default: false
format
string
json - returns an array of JSON predictions. (See response format tab.)
image - returns an image with annotated predictions as a binary blob with a Content-Type of image/jpeg.
image_and_json - returns an array of JSON predictions, including a visualization field in base64.
Default: json
api_key
string
Your API key (obtained via your workspace API settings page)
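Putting the query parameters above together, a small sketch of a fully specified query string (parameter names are from the list above; `your-model`/`42` and the values shown are placeholders — all parameters except `api_key` are optional):

```python
from urllib.parse import urlencode

# Every documented query parameter in one query string. Values other than
# api_key are shown at (or near) their documented defaults; classes is
# omitted by default and included here only to show its format.
params = {
    "api_key": "YOUR_KEY",
    "classes": "dog,cat",  # comma-separated class filter
    "overlap": 30,         # max same-class box overlap, 0-100
    "confidence": 40,      # prediction confidence threshold, 0-100
    "stroke": 1,           # box line width in pixels (format=image only)
    "labels": "false",     # draw text labels (format=image only)
    "format": "json",      # json | image | image_and_json
}
query = urlencode(params)
print(f"https://detect.roboflow.com/your-model/42?{query}")
```

`urlencode` percent-encodes the comma in the `classes` value, which the API accepts.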
Request Body
string
A base64 encoded image. (Required when you don't pass an image URL in the query parameters).