# API Reference

The Video Inference API can be used through the Roboflow Python SDK and through a REST API.

### Base URL

The API uses the following base URL:

```
https://api.roboflow.com
```

### API Methods

We highly recommend accessing video inference via the `roboflow` Python package.

The raw Video Inference API at `https://api.roboflow.com` has three methods:

<table><thead><tr><th>METHOD</th><th>DESCRIPTION</th></tr></thead><tbody><tr><td><code>POST</code> <code>/video_upload_signed_url/?api_key={{WORKSPACE_API_KEY}}</code></td><td><p>This endpoint returns a signed URL to which you can upload a video.<br><br>The endpoint accepts a JSON body with the file name, like so:</p><pre><code>{
    "file_name": "my_video_file.mp4"
}
</code></pre><p>A signed URL is returned:</p><pre class="language-json"><code class="lang-json">{
    "signed_url": "https://storage.googleapis.com/roboflow_video_inference_input/GNPwawe7dxWthlZZ24r72VRTV852/oct27_video_file.mp4?X-Goog-Algorithm=GOOG4-RSA-SHA256&#x26;X-Goog-Credential=roboflow-staging%40appspot.gserviceaccount.com%2F2023110......."
}
</code></pre><p><br>You can then use your favorite upload program (like cURL) to PUT your video file to the signed URL.</p><pre class="language-bash"><code class="lang-bash">curl -X PUT "my_signed_url" \
-H "Content-Type: application/octet-stream" \
--data-binary "@your_video.mp4"
</code></pre></td></tr><tr><td><code>POST</code> <code>/videoinfer/?api_key={{WORKSPACE_API_KEY}}</code></td><td><p>This endpoint accepts a JSON body that schedules a video inference job for processing. Note that <code>input_url</code> can be any publicly accessible URL; you don't need to first create a signed URL and upload a video to Roboflow.</p><p>An example request body is shown below:</p><pre><code>{
    "input_url": "{{INPUT_URL}}",
    "infer_fps": 5,
    "models": [
        {
            "model_id": "rock-paper-scissors-presentation",
            "model_version": "4",
            "inference_type": "object-detection",
            "inference_params": {"confidence": 0.4}
        }
    ]
}
</code></pre><p>Note that <code>models[*].inference_params</code> is optional.</p><p>The response is a JSON string, like so:</p><pre class="language-json"><code class="lang-json">{
    "job_id": "fec28362-f7d9-4cc0-a805-5e94495d063d",
    "message": "This endpoint will create videojob"
}
</code></pre><p><br>You can specify multiple models in the <code>models</code> array. The <code>infer_fps</code> field should be set to at least <code>1</code>, and its value should not exceed the video frame rate. For most use cases, the video frame rate is an exact multiple of <code>infer_fps</code>.</p></td></tr><tr><td><code>GET</code> <code>/videoinfer/?api_key={{WORKSPACE_API_KEY}}&#x26;job_id={{JOB_ID}}</code></td><td>This endpoint returns the current status of the job. It is rate-limited; please don't poll it more than once per minute.<br><br>When the job is successful, the returned JSON's <code>status</code> key is set to <code>0</code>, and the <code>output_signed_url</code> key contains the download link for the video inference results.<br><br>A <code>status</code> of <code>1</code> indicates that job processing is not yet complete. Any higher value indicates job failure.</td></tr></tbody></table>
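The first two endpoints can be chained into a short script: request a signed URL, upload the video, then schedule the job. The sketch below uses only the Python standard library; the model ID (`my-model`), version, and file names are placeholder assumptions, not values from the API.

```python
import json
import urllib.request

API_BASE = "https://api.roboflow.com"


def signed_url_payload(file_name):
    """Body for POST /video_upload_signed_url/."""
    return {"file_name": file_name}


def job_payload(input_url, infer_fps, model_id, model_version,
                inference_params=None):
    """Body for POST /videoinfer/. inference_params is optional."""
    model = {
        "model_id": model_id,
        "model_version": model_version,
        "inference_type": "object-detection",
    }
    if inference_params is not None:
        model["inference_params"] = inference_params
    return {"input_url": input_url, "infer_fps": infer_fps, "models": [model]}


def post_json(url, body):
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def upload_and_schedule(api_key, video_path):
    """Request a signed URL, upload the video, and schedule a job."""
    # 1. Get a signed upload URL for the file name.
    resp = post_json(f"{API_BASE}/video_upload_signed_url/?api_key={api_key}",
                     signed_url_payload(video_path))
    signed_url = resp["signed_url"]

    # 2. PUT the raw video bytes to the signed URL.
    with open(video_path, "rb") as f:
        put = urllib.request.Request(
            signed_url,
            data=f.read(),
            headers={"Content-Type": "application/octet-stream"},
            method="PUT",
        )
        urllib.request.urlopen(put)

    # 3. Schedule the inference job against the uploaded video.
    job = post_json(f"{API_BASE}/videoinfer/?api_key={api_key}",
                    job_payload(signed_url, 5, "my-model", "1"))
    return job["job_id"]
```

Because `input_url` may be any publicly accessible URL, steps 1 and 2 can be skipped entirely if the video is already hosted somewhere public.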

Once you have downloaded the JSON file from the `output_signed_url` location, you can parse it to obtain the inference results. The format of the JSON file is [described here](https://inference.roboflow.com/workflows/definitions/).
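Polling the status endpoint and fetching the results can be sketched as follows, again with only the standard library. The status codes (0 = success, 1 = still processing, higher = failure) come from the table above; everything else here is an illustrative assumption.

```python
import json
import time
import urllib.request

API_BASE = "https://api.roboflow.com"


def job_state(status):
    """Map the numeric status code to a readable state.

    Per the API: 0 = success, 1 = still processing,
    any higher value = failure.
    """
    if status == 0:
        return "success"
    if status == 1:
        return "processing"
    return "failed"


def poll_job(api_key, job_id, interval=60):
    """Poll the job until it finishes; returns output_signed_url.

    The endpoint is rate-limited, so the default interval is one
    minute -- do not poll more often than that.
    """
    while True:
        url = f"{API_BASE}/videoinfer/?api_key={api_key}&job_id={job_id}"
        with urllib.request.urlopen(url) as resp:
            body = json.load(resp)
        state = job_state(body["status"])
        if state == "success":
            return body["output_signed_url"]
        if state == "failed":
            raise RuntimeError(f"job failed with status {body['status']}")
        time.sleep(interval)


def download_results(output_signed_url):
    """Download and parse the inference results JSON."""
    with urllib.request.urlopen(output_signed_url) as resp:
        return json.load(resp)
```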
