Keypoint Detection

Run inference on your keypoint detection models hosted on Roboflow.

Customize your deployment for your hardware and your preferred way of running your model with the Roboflow Deployment Quickstart.

To run a keypoint detection model on your hardware, first install Inference and set up an Inference server. You will need to have Docker installed.

pip install inference
inference server start

Once you have installed Inference, use the following code to run inference on an image:

# import the client from inference_sdk
from inference_sdk import InferenceHTTPClient
# import os to read ROBOFLOW_API_KEY from the environment
import os

# set your project ID, model version, and the image (URL or local path)
project_id = ""
model_version = 1
image = ""

# create a client object
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

# run inference on the image
results = client.infer(image, model_id=f"{project_id}/{model_version}")

# print the results
print(results)

Above, specify:

  1. project_id, model_version: Your project ID and model version number. Learn how to retrieve your project ID and model version number.

  2. image: The URL or local path of the image you want to run inference on.

You can replace image with a PIL image object, too. This is ideal if you already have an image in memory.
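For example, here is a minimal sketch of running inference on an in-memory image. The filename example.jpg is illustrative; substitute your own image.

# load an image into memory with PIL (Pillow)
from PIL import Image

pil_image = Image.open("example.jpg")  # illustrative filename

# pass the in-memory image directly to the client
results = client.infer(pil_image, model_id=f"{project_id}/{model_version}")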

Before running the code above, export your Roboflow API key into your environment:

export ROBOFLOW_API_KEY=<your api key>
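You can also pass your API key directly to InferenceHTTPClient through its api_key argument instead of reading it from the environment; a quick sketch (avoid hard-coding keys outside local tests):

# pass the API key directly instead of reading it from the environment
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<your api key>",  # avoid committing keys to source control
)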

Learn how to retrieve your API key.
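When the request succeeds, results is a dictionary of predictions. Below is a sketch of reading keypoints out of the response, assuming each entry in the predictions list carries a keypoints array with x, y, confidence, and class_name fields; verify the exact field names by inspecting the output of print(results) for your model.

# iterate over detected objects and their keypoints
# (field names assume a typical keypoint detection response)
for prediction in results["predictions"]:
    print(prediction["class"], prediction["confidence"])
    for keypoint in prediction.get("keypoints", []):
        # each keypoint carries a position and a confidence score
        print(keypoint["class_name"], keypoint["x"], keypoint["y"], keypoint["confidence"])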
