Keypoint Detection

Run inference on your keypoint detection models hosted on Roboflow.

To run inference through our hosted API using Python, use the roboflow Python package:

from roboflow import Roboflow
rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("MODEL_ENDPOINT")
model = project.version(VERSION).model

# infer on a local image
print(model.predict("your_image.jpg", confidence=40, overlap=30).json())

# visualize your prediction
# model.predict("your_image.jpg", confidence=40, overlap=30).save("prediction.jpg")

# infer on an image hosted elsewhere
# print(model.predict("URL_OF_YOUR_IMAGE", hosted=True, confidence=40, overlap=30).json())

Response Object Format

The hosted API inference route returns a JSON object containing an array of predictions. Each prediction has the following properties:

  • x = the horizontal center point of the detected object

  • y = the vertical center point of the detected object

  • width = the width of the bounding box

  • height = the height of the bounding box

  • class = the class label of the detected object

  • confidence = the model's confidence that the detected object has the correct label and position coordinates

  • keypoints = an array of keypoint predictions

    • x = horizontal center of keypoint (relative to image top-left corner)

    • y = vertical center of keypoint (relative to image top-left corner)

    • class_name = name of keypoint

    • class_id = the id of the keypoint; it maps to the skeleton vertices defined in the version record, which determine vertex colors and skeleton edges. To see this mapping, view your Project Version.

    • confidence = the model's confidence that the keypoint is in the correct position and is visible (not occluded or deleted)

Here is an example response object from the REST API:

{
    "predictions": [
        {
            "x": 189.5,
            "y": 100,
            "width": 163,
            "height": 186,
            "class": "helmet",
            "confidence": 0.544,
            "keypoints": [
                {
                    "x": 189, 
                    "y": 20,
                    "class_name": "top",
                    "class_id": 0,
                    "confidence": 0.91
                },
                {
                    "x": 188, 
                    "y": 180,
                    "class_name": "bottom",
                    "class_id": 1,
                    "confidence": 0.93
                }
            ]
        }
    ],
    "image": {
        "width": 2048,
        "height": 1371
    }
}

The image attribute contains the height and width of the image sent for inference. You may need to use these values for bounding box calculations.
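Since the API returns box centers rather than corners, the sketch below shows one way to work with a parsed response. It is an illustrative example, not part of the Roboflow package: it reuses the model object and field names shown above, and the EDGES list of keypoint class_id pairs is a hypothetical stand-in for the skeleton edges defined in your Project Version.

import cv2

# Hypothetical skeleton edges: pairs of keypoint class_ids to connect.
# Replace with the edges defined in your Project Version's skeleton.
EDGES = [(0, 1)]

image = cv2.imread("your_image.jpg")
result = model.predict("your_image.jpg", confidence=40, overlap=30).json()

for prediction in result["predictions"]:
    # Convert the center-based box to top-left / bottom-right corners.
    x0 = int(prediction["x"] - prediction["width"] / 2)
    y0 = int(prediction["y"] - prediction["height"] / 2)
    x1 = int(prediction["x"] + prediction["width"] / 2)
    y1 = int(prediction["y"] + prediction["height"] / 2)
    cv2.rectangle(image, (x0, y0), (x1, y1), (0, 255, 0), 2)

    # Index keypoints by class_id so skeleton edges can be looked up.
    keypoints = {kp["class_id"]: kp for kp in prediction["keypoints"]}
    for kp in keypoints.values():
        cv2.circle(image, (int(kp["x"]), int(kp["y"])), 4, (0, 0, 255), -1)
    for start_id, end_id in EDGES:
        if start_id in keypoints and end_id in keypoints:
            start, end = keypoints[start_id], keypoints[end_id]
            cv2.line(image, (int(start["x"]), int(start["y"])),
                     (int(end["x"]), int(end["y"])), (255, 0, 0), 2)

cv2.imwrite("annotated.jpg", image)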

Inference API Parameters

Using the Inference API

POST https://detect.roboflow.com/:datasetSlug/:versionNumber

You can POST a base64-encoded image directly to your model endpoint, or, if your image is already hosted elsewhere, pass its URL as the image parameter in the query string.
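For example, here is a minimal sketch of both approaches using the requests library. The endpoint, version number, and API key are placeholders to replace with your own; the query parameters follow the Python example earlier on this page.

import base64
import requests

API_KEY = "API_KEY"
MODEL_ENDPOINT = "MODEL_ENDPOINT"  # your dataset slug
VERSION = 1                        # your version number

url = f"https://detect.roboflow.com/{MODEL_ENDPOINT}/{VERSION}"

# Option 1: POST a base64-encoded local image in the request body.
with open("your_image.jpg", "rb") as f:
    img_base64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    url,
    params={"api_key": API_KEY, "confidence": 40, "overlap": 30},
    data=img_base64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(response.json())

# Option 2: pass the URL of an image hosted elsewhere as the image query parameter.
response = requests.post(
    url,
    params={"api_key": API_KEY, "image": "URL_OF_YOUR_IMAGE"},
)
print(response.json())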

Path Parameters

  • datasetSlug = the url-safe name of your project, as it appears in your project's URL

  • versionNumber = the version number of the trained model you want to use for inference

Query Parameters

  • api_key = your Roboflow API key (required)

  • image = the URL of a hosted image to run inference on (used instead of POSTing an image in the request body)

  • confidence = the minimum confidence threshold, on a scale of 0-100, for returned predictions (for example, 40)

  • overlap = the maximum percentage, on a scale of 0-100, that bounding boxes of the same class may overlap before being combined (for example, 30)

Request Body

The request body should contain your base64-encoded image (omit it if you pass a hosted image URL via the image query parameter). A successful request returns a JSON response like the following:

{
    "predictions": [{
        "x": 234.0,
        "y": 363.5,
        "width": 160,
        "height": 197,
        "class": "hand",
        "confidence": 0.943
    }, {
        "x": 504.5,
        "y": 363.0,
        "width": 215,
        "height": 172,
        "class": "hand",
        "confidence": 0.917
    }, {
        "x": 1112.5,
        "y": 691.0,
        "width": 139,
        "height": 52,
        "class": "hand",
        "confidence": 0.87
    }, {
        "x": 78.5,
        "y": 700.0,
        "width": 139,
        "height": 34,
        "class": "hand",
        "confidence": 0.404
    }]
}
