Keypoint Detection
Run inference on your keypoint detection models hosted on Roboflow.
To run inference through our hosted API using Python, use the roboflow Python package:
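A minimal sketch using the roboflow Python package (install with pip install roboflow). The project slug and version number below are placeholders; substitute your own workspace, project, and API key.

```python
def run_inference(api_key: str, image_path: str):
    """Run hosted inference on a keypoint detection model version."""
    from roboflow import Roboflow  # pip install roboflow

    rf = Roboflow(api_key=api_key)
    project = rf.workspace().project("your-project-slug")  # placeholder slug
    model = project.version(1).model                       # placeholder version number
    return model.predict(image_path).json()

# Example (requires a valid API key and a local image file):
# predictions = run_inference("YOUR_API_KEY", "example.jpg")
```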
The hosted API inference route returns a JSON object containing an array of predictions. Each prediction has the following properties:

x = the horizontal center point of the detected object
y = the vertical center point of the detected object
width = the width of the bounding box
height = the height of the bounding box
class = the class label of the detected object
confidence = the model's confidence that the detected object has the correct label and position coordinates
keypoints = an array of keypoint predictions; each keypoint has:
  x = horizontal position of the keypoint (relative to the image's top-left corner)
  y = vertical position of the keypoint (relative to the image's top-left corner)
  class_name = the name of the keypoint
  class_id = the id of the keypoint, which maps to the skeleton vertices in the version record; to see vertex colors and skeleton edges, view your Project Version
  confidence = the model's confidence that the keypoint is correctly positioned and visible (not occluded or deleted)
Here is an example response object from the REST API:
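The shape below matches the fields listed above; the specific values are illustrative, not output from a real model:

```json
{
  "predictions": [
    {
      "x": 320.5,
      "y": 240.0,
      "width": 150.0,
      "height": 200.0,
      "class": "hand",
      "confidence": 0.92,
      "keypoints": [
        {
          "x": 310.0,
          "y": 180.0,
          "class_name": "wrist",
          "class_id": 0,
          "confidence": 0.88
        }
      ]
    }
  ],
  "image": {
    "width": 640,
    "height": 480
  }
}
```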
The image attribute contains the height and width of the image sent for inference. You may need to use these values for bounding box calculations.
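Because x and y are center coordinates, a common calculation is converting each prediction to top-left and bottom-right corner coordinates for drawing or cropping. A minimal sketch, assuming a prediction dictionary parsed from the JSON response:

```python
def to_corners(prediction):
    """Convert a center-based bounding box (x, y, width, height)
    to (x0, y0, x1, y1) corner coordinates."""
    x0 = prediction["x"] - prediction["width"] / 2
    y0 = prediction["y"] - prediction["height"] / 2
    x1 = prediction["x"] + prediction["width"] / 2
    y1 = prediction["y"] + prediction["height"] / 2
    return x0, y0, x1, y1

# Example with illustrative values:
pred = {"x": 320.5, "y": 240.0, "width": 150.0, "height": 200.0}
print(to_corners(pred))  # (245.5, 140.0, 395.5, 340.0)
```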
POST
https://detect.roboflow.com/:datasetSlug/:versionNumber
You can POST a base64 encoded image directly to your model endpoint. Or you can pass a URL as the image parameter in the query string if your image is already hosted elsewhere.
datasetSlug (string): The URL-safe version of the dataset name. You can find it in the web UI by looking at the URL on the main project view, or by clicking the "Get curl command" button in the train results section of your dataset version after training your model.

version (number): The version number identifying the version of your dataset.

image (string): URL of the image to run inference on. Use if your image is hosted elsewhere. (Required when you don't POST a base64 encoded image in the request body.) Note: don't forget to URL-encode it.

classes (string): Restrict the predictions to only those of certain classes, provided as a comma-separated string. Example: dog,cat. Default: not present (show all classes).

overlap (number): The maximum percentage (on a scale of 0-100) that bounding box predictions of the same class are allowed to overlap before being combined into a single box. Default: 30.

confidence (number): A threshold for the returned predictions on a scale of 0-100. A lower number returns more predictions; a higher number returns fewer, higher-certainty predictions. Default: 40.

stroke (number): The width (in pixels) of the bounding box displayed around predictions. Only has an effect when format is image. Default: 1.

labels (boolean): Whether or not to display text labels on the predictions. Only has an effect when format is image. Default: false.

format (string): json returns an array of JSON predictions (see response format tab). image returns an image with annotated predictions as a binary blob with a Content-Type of image/jpeg. image_and_json returns an array of JSON predictions, including a visualization field in base64. Default: json.

api_key (string): Your API key (obtained via your workspace API settings page).

Request body (string): A base64 encoded image. (Required when you don't pass an image URL in the query parameters.)
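Here's a minimal sketch of the raw HTTP call using Python's requests library, covering both ways of supplying the image. The slug, version number, and API key are placeholders:

```python
import base64

ENDPOINT = "https://detect.roboflow.com/your-project-slug/1"  # placeholder slug/version

def infer_via_url(api_key: str, image_url: str):
    """POST with a hosted image URL passed as the `image` query parameter.
    requests handles the URL-encoding of the parameter value."""
    import requests  # pip install requests
    resp = requests.post(
        ENDPOINT,
        params={"api_key": api_key, "image": image_url, "confidence": 40},
    )
    return resp.json()

def infer_via_base64(api_key: str, image_path: str):
    """POST a base64 encoded image in the request body."""
    import requests
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    resp = requests.post(
        ENDPOINT,
        params={"api_key": api_key},
        data=encoded,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    return resp.json()
```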
We're using axios to perform the POST request in this example, so first run npm install axios to install the dependency.
We have realtime on-device inference available via roboflow.js; see the roboflow.js documentation.
We are adding code snippets as they are requested by users. If you'd like to integrate the inference API into your Elixir app, please contact us.