Keypoint Detection
Run inference on your keypoint detection models hosted on Roboflow.
To customize your deployment for your hardware and preferred way of running your model, see the Roboflow Deployment Quickstart.
To run a keypoint detection model on your hardware, first install Inference and set up an Inference server. You will need to have Docker installed.
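The setup above can be sketched as the following shell commands, assuming Python, pip, and Docker are already installed (the `inference-cli` package provides the `inference server start` command):

```shell
# Install the Inference package and its CLI
pip install inference inference-cli

# Start a local Inference server in Docker (pulls the server image on first run)
inference server start
```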
Once you have installed Inference, use the following code to run inference on an image:
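A minimal sketch of that call using the `inference-sdk` HTTP client is shown below; the model ID and image filename are placeholders you should replace with your own values, and it assumes the local Inference server from the previous step is running:

```python
import os

from inference_sdk import InferenceHTTPClient

# Connect to the local Inference server started earlier
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

# model_id takes the form "<project_id>/<model_version>"
result = client.infer("image.jpg", model_id="project_id/model_version")
print(result)
```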
Above, specify:

- `project_id`, `model_version`: Your project ID and model version number. Learn how to retrieve your project ID and model version number.
- `image`: The name of the image you want to run inference on.

You can replace `image` with a PIL image, too. This is ideal if you already have an image in memory.
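For reference, keypoint detection results arrive as a dictionary whose `predictions` each carry a bounding box plus a list of named keypoints. The payload below is a hand-written illustration of that general shape, not real model output:

```python
# Illustrative (hand-written) response in the general shape returned by
# Roboflow keypoint detection models; all field values here are made up.
result = {
    "predictions": [
        {
            "x": 320.0, "y": 240.0, "width": 100.0, "height": 200.0,
            "confidence": 0.92,
            "class": "person",
            "keypoints": [
                {"x": 310.0, "y": 180.0, "confidence": 0.88, "class": "left_eye"},
                {"x": 330.0, "y": 180.0, "confidence": 0.85, "class": "right_eye"},
            ],
        }
    ]
}

# Collect each detected object's keypoints as (name, x, y) tuples
for prediction in result["predictions"]:
    points = [(kp["class"], kp["x"], kp["y"]) for kp in prediction["keypoints"]]
    print(prediction["class"], points)
```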
Then, export your Roboflow API key into your environment:
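Exporting the key looks like this (the value shown is a placeholder for your actual API key):

```shell
# Replace the placeholder with your actual Roboflow API key
export ROBOFLOW_API_KEY="your_api_key_here"
```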