API Reference

The Video Inference API can be used through the Roboflow Python SDK and through a REST API.

Base URL

The API uses the following URL:

https://api.roboflow.com

API Methods

We highly recommend accessing video inference via the roboflow pip package. The raw Video Inference API at https://api.roboflow.com has three methods:

POST /video_upload_signed_url/?api_key={{WORKSPACE_API_KEY}}

This endpoint returns a signed URL to which you can upload a video. The endpoint accepts a JSON input with the file name, like so:

```json
{
    "file_name": "my_video_file.mp4"
}
```

A signed URL is returned:

```json
{
    "signed_url": "https://storage.googleapis.com/roboflow_video_inference_input/GNPwawe7dxWthlZZ24r72VRTV852/oct27_video_file.mp4?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=roboflow-staging%40appspot.gserviceaccount.com%2F2023110......."
}
```

You can then use your favorite upload program (like curl) to POST your video file to the signed URL, as sketched below.
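
As a rough illustration, the sketch below uses the Python requests library to request a signed URL and then upload a local video file to it. The endpoint path and JSON fields follow the description above; the API key and file name are placeholders, and the upload verb may need to be PUT rather than POST depending on how the signed URL was generated.

```python
import requests

API_KEY = "YOUR_WORKSPACE_API_KEY"  # placeholder workspace API key
FILE_NAME = "my_video_file.mp4"     # local video to upload

# Step 1: ask the API for a signed upload URL
resp = requests.post(
    f"https://api.roboflow.com/video_upload_signed_url/?api_key={API_KEY}",
    json={"file_name": FILE_NAME},
)
resp.raise_for_status()
signed_url = resp.json()["signed_url"]

# Step 2: upload the raw video bytes to the signed URL
# (the text above describes a POST; some signed URLs expect PUT instead)
with open(FILE_NAME, "rb") as f:
    upload = requests.post(signed_url, data=f, headers={"Content-Type": "video/mp4"})
upload.raise_for_status()
print("Upload status:", upload.status_code)
```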

POST /videoinfer/?api_key={{WORKSPACE_API_KEY}}

This endpoint accepts a JSON input to schedule a video inference job for processing. Note that the INPUT_URL can be any publicly available URL; you don't need to first create a signed URL and upload a video to Roboflow.

An example request body is shown below:

```json
{
    "input_url": "{{INPUT_URL}}",
    "infer_fps": 5,
    "models": [
        {
            "model_id": "rock-paper-scissors-presentation",
            "model_version": "4",
            "inference_type": "object-detection",
            "inference_params": {"confidence": 0.4}
        }
    ]
}
```

Note that models[*].inference_params is optional.

The response is a JSON object, like so:

```json
{
    "job_id": "fec28362-f7d9-4cc0-a805-5e94495d063d",
    "message": "This endpoint will create videojob"
}
```

You can specify multiple models in the models array. The infer_fps field should be set to at least 1 and should not exceed the video's frame rate. For most use cases, the video frame rate is an exact multiple of infer_fps. A minimal scheduling sketch follows.
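
As an illustration, the sketch below schedules a job with the Python requests library, using the request body shown above. The model ID, version, and input URL are placeholders.

```python
import requests

API_KEY = "YOUR_WORKSPACE_API_KEY"  # placeholder workspace API key
INPUT_URL = "https://example.com/my_video_file.mp4"  # any publicly available video URL

job_request = {
    "input_url": INPUT_URL,
    "infer_fps": 5,
    "models": [
        {
            "model_id": "rock-paper-scissors-presentation",  # placeholder model
            "model_version": "4",
            "inference_type": "object-detection",
            "inference_params": {"confidence": 0.4},  # optional
        }
    ],
}

resp = requests.post(
    f"https://api.roboflow.com/videoinfer/?api_key={API_KEY}",
    json=job_request,
)
resp.raise_for_status()
job_id = resp.json()["job_id"]
print("Scheduled job:", job_id)
```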

GET /videoinfer/?api_key={{WORKSPACE_API_KEY}}&job_id={{JOB_ID}}

This endpoint returns the current status of the job. It is rate-limited; please don't poll it more than once per minute. When the job has succeeded, the status key in the returned JSON is set to 0 and the output_signed_url key contains the download link for the video inference results. A status of 1 indicates that the job is still processing; any higher value indicates job failure.

Once you have downloaded the JSON file from the output_signed_url location, you can parse it to obtain inference information. The format of the JSON file is described here.
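
A minimal polling sketch with the Python requests library might look like the following; the job ID is a placeholder, and the one-minute sleep respects the rate limit mentioned above.

```python
import time
import requests

API_KEY = "YOUR_WORKSPACE_API_KEY"  # placeholder workspace API key
JOB_ID = "fec28362-f7d9-4cc0-a805-5e94495d063d"  # job_id returned when scheduling

while True:
    resp = requests.get(
        f"https://api.roboflow.com/videoinfer/?api_key={API_KEY}&job_id={JOB_ID}"
    )
    resp.raise_for_status()
    body = resp.json()
    status = body["status"]
    if status == 0:
        # Job succeeded: download the inference results JSON
        results = requests.get(body["output_signed_url"])
        with open("video_inference_results.json", "wb") as f:
            f.write(results.content)
        print("Results saved to video_inference_results.json")
        break
    elif status == 1:
        # Still processing: wait before polling again (rate limit: once per minute)
        time.sleep(60)
    else:
        raise RuntimeError(f"Video inference job failed with status {status}")
```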
