# YOLOv5

We support YOLOv5 inferencing via our [Serverless Hosted API](/deploy/serverless-hosted-api-v2.md). YOLOv5 is available in two task variants:

* Object detection
* Instance segmentation

{% hint style="info" %}
Training YOLOv5 on Roboflow is deprecated for new projects. Uploading your own YOLOv5 weights to a Roboflow Project and running inference against the Serverless Hosted API remains supported.
{% endhint %}

For self-hosted deployment, see [Roboflow Inference](https://inference.roboflow.com/).

The YOLOv5 input size is set when you train your model (typical values: 640x640 or 1280x1280).

## Default COCO aliases

YOLOv5 has no pretrained COCO aliases on the Serverless Hosted API. To run YOLOv5 inferencing, train a model elsewhere, upload your weights to a [Roboflow Project](/workspaces/key-concepts.md), and reference the resulting `model_id` and version in your inference calls, as sketched below.
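
A minimal sketch of a weights upload with the [`roboflow` Python package](https://pypi.org/project/roboflow/) (`pip install roboflow`) is shown below. The workspace and project names, version number, `model_type` string, and weights path are placeholders to adapt to your project; check the weights-upload documentation for the exact values for your task.

```python
import os

import roboflow

# Authenticate with your Roboflow API key (assumed to be in the API_KEY env var).
rf = roboflow.Roboflow(api_key=os.getenv("API_KEY"))

# Placeholder workspace, project, and version - replace with your own.
project = rf.workspace("your-workspace").project("your-project")
version = project.version(1)

# Upload locally trained YOLOv5 weights to this project version.
# The model_type string and weights directory are assumptions; adjust them
# to match your training run and task type.
version.deploy(model_type="yolov5", model_path="yolov5/runs/train/exp/")
```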

## Code samples

Install the SDK and [supervision](https://supervision.roboflow.com/) for visualization:

```bash
pip install inference-sdk supervision opencv-python
```

Pass your [Roboflow API Key](https://app.roboflow.com/settings/api) via the `API_KEY` environment variable, and replace `model_id` with your own model ID (your project's URL slug followed by the version number, for example `your-project/1`).

### Object detection

```python
import os
import urllib.request

import cv2
import supervision as sv
from inference_sdk import InferenceHTTPClient

image_url = "https://storage.googleapis.com/com-roboflow-marketing/notebooks/examples/cars-highway.png"
image_path = "cars-highway.png"
urllib.request.urlretrieve(image_url, image_path)

image = cv2.imread(image_path)

client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key=os.getenv("API_KEY"),
)
result = client.infer(image, model_id="your-project/1")

detections = sv.Detections.from_inference(result)

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated = box_annotator.annotate(scene=image.copy(), detections=detections)
annotated = label_annotator.annotate(scene=annotated, detections=detections)

cv2.imwrite("output.png", annotated)
```

### Instance segmentation

```python
import os
import urllib.request

import cv2
import supervision as sv
from inference_sdk import InferenceHTTPClient

image_url = "https://storage.googleapis.com/com-roboflow-marketing/notebooks/examples/bicycle.png"
image_path = "bicycle.png"
urllib.request.urlretrieve(image_url, image_path)

image = cv2.imread(image_path)

client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key=os.getenv("API_KEY"),
)
result = client.infer(image, model_id="your-project/1")

detections = sv.Detections.from_inference(result)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

annotated = mask_annotator.annotate(scene=image.copy(), detections=detections)
annotated = label_annotator.annotate(scene=annotated, detections=detections)

cv2.imwrite("output.png", annotated)
```

{% hint style="info" %}
Set `api_url` to match your deployment target:

* `https://serverless.roboflow.com` for the Serverless Hosted API.
* `http://localhost:9001` for a local [Inference](https://inference.roboflow.com/) server.
* Your [Dedicated Deployment](/deploy/dedicated-deployments.md) URL for a private endpoint.
{% endhint %}
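
For example, here is a minimal sketch of pointing the same client at a local Inference server rather than the Serverless Hosted API (it assumes a server is already running on port 9001):

```python
import os

from inference_sdk import InferenceHTTPClient

# Same client as in the samples above, but targeting a local Inference server
# (assumes one is already listening on localhost:9001).
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=os.getenv("API_KEY"),
)
```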

