# YOLOv7

We support YOLOv7 instance segmentation inference via our [Serverless Hosted API](/deploy/serverless-hosted-api-v2.md). Training YOLOv7 is not supported on Roboflow, but you can [upload your own weights](/deploy/upload-custom-weights.md) and run inference against them.

For self-hosted deployment, see [Roboflow Inference](https://inference.roboflow.com/).

The YOLOv7 input size is set when you train your model outside Roboflow (typical values: 640x640 or 1280x1280).
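You do not need to resize images yourself; the hosted API handles preprocessing server-side. For intuition, a minimal sketch of the letterbox preprocessing YOLO-family models commonly apply (scale the long side to the target size, pad the rest) might look like this. The function name and padding value are illustrative, not part of the Roboflow API:

```python
import numpy as np

def letterbox(image: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Fit an image into a size x size square, padding the remainder.

    Illustrative sketch only; Roboflow's Inference server performs the
    real preprocessing for you.
    """
    h, w = image.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor resize via index arrays (avoids an OpenCV dependency).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols]
    # Center the resized image on a gray canvas.
    canvas = np.full((size, size, image.shape[2]), pad_value, dtype=image.dtype)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas
```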

## Default COCO aliases

There are no pretrained YOLOv7 aliases. You must train YOLOv7 instance segmentation outside of Roboflow, upload the weights to a Project, then call your own `model_id` against the Serverless Hosted API.

## Code sample

Install the Inference SDK and [supervision](https://supervision.roboflow.com/) for decoding and drawing masks:

```bash
pip install inference-sdk supervision opencv-python
```

Run inference against your own YOLOv7 instance segmentation model, then use supervision to render the predicted masks and labels onto the source image. Replace `model_id` with your Project's value, and pass your [Roboflow API Key](https://app.roboflow.com/settings/api) via the `API_KEY` environment variable.

```python
import os
import urllib.request

import cv2
import supervision as sv
from inference_sdk import InferenceHTTPClient

image_url = "https://storage.googleapis.com/com-roboflow-marketing/notebooks/examples/bicycle.png"
image_path = "bicycle.png"
urllib.request.urlretrieve(image_url, image_path)

image = cv2.imread(image_path)

client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key=os.getenv("API_KEY"),
)
results = client.infer(image_path, model_id="your-project/1")

detections = sv.Detections.from_inference(results)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

# Build a "class confidence" label for each predicted mask.
labels = [
    f"{cls} {conf:.2f}"
    for cls, conf in zip(detections.data.get("class_name", []), detections.confidence)
]

annotated = mask_annotator.annotate(scene=image.copy(), detections=detections)
annotated = label_annotator.annotate(scene=annotated, detections=detections, labels=labels)

cv2.imwrite("annotated.png", annotated)
```
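Assuming the script above is saved as `infer_yolov7.py` (an illustrative filename), you can pass the key at run time without hard-coding it:

```shell
API_KEY="<your-roboflow-api-key>" python infer_yolov7.py
```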

{% hint style="info" %}
Set `api_url` to match your deployment target:

* `https://serverless.roboflow.com` for the Serverless Hosted API.
* `http://localhost:9001` for a local [Inference](https://inference.roboflow.com/) server.
* Your [Dedicated Deployment](/deploy/dedicated-deployments.md) URL for a private endpoint.
{% endhint %}
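If you want to try the local endpoint above, one way to stand up an Inference server (assuming Docker is installed) is via the `inference-cli` package:

```shell
pip install inference-cli
inference server start
```

This starts a server on port 9001, so the client's `api_url` becomes `http://localhost:9001`.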

