# YOLO-World

YOLO-World is an open-vocabulary object detection model that can detect objects from arbitrary text class names without additional training. We support YOLO-World inference via our [Serverless Hosted API](/deploy/serverless-hosted-api-v2.md).

For more details on running YOLO-World, see the [Inference docs](https://inference.roboflow.com/).

## Code sample

Call the `/yolo_world/infer` endpoint directly with `curl`:

```bash
curl --location 'https://serverless.roboflow.com/yolo_world/infer' \
  --header 'Content-Type: application/json' \
  --data '{
    "api_key": "YOUR_API_KEY",
    "image": {"type": "url", "value": "https://storage.googleapis.com/com-roboflow-marketing/notebooks/examples/cars-highway.png"},
    "text": ["car", "truck"],
    "yolo_world_version_id": "v2-s",
    "confidence": 0.05
  }'
```
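If you'd rather not hand-write the JSON, the same payload can be built in Python. This is a minimal sketch, not the SDK path shown below: `YOUR_API_KEY` is a placeholder, and the response fields referenced in the comments (`predictions` entries with `class`, `confidence`, and box coordinates) are assumed to follow Inference's standard object-detection output.

```python
def build_payload(api_key, image_url, class_names, version="v2-s", confidence=0.05):
    # Mirrors the JSON body of the curl call above.
    return {
        "api_key": api_key,
        "image": {"type": "url", "value": image_url},
        "text": class_names,
        "yolo_world_version_id": version,
        "confidence": confidence,
    }

payload = build_payload(
    "YOUR_API_KEY",
    "https://storage.googleapis.com/com-roboflow-marketing/notebooks/examples/cars-highway.png",
    ["car", "truck"],
)

# Uncomment to send the request (requires a valid API key and network access):
# import requests
# resp = requests.post("https://serverless.roboflow.com/yolo_world/infer", json=payload)
# for p in resp.json()["predictions"]:
#     print(p["class"], p["confidence"])
```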

You can make the same call through the Inference SDK. Install it along with [supervision](https://supervision.roboflow.com/):

```bash
pip install inference-sdk supervision
```

Run YOLO-World with custom class names, then decode and visualize the predictions with supervision. Pass your [Roboflow API key](https://app.roboflow.com/settings/api) via the `API_KEY` environment variable.

```python
import os
import urllib.request

import cv2
import supervision as sv
from inference_sdk import InferenceHTTPClient

# Download the example image locally.
image_url = "https://storage.googleapis.com/com-roboflow-marketing/notebooks/examples/cars-highway.png"
image_path = "cars-highway.png"
urllib.request.urlretrieve(image_url, image_path)

# Connect to the Serverless Hosted API.
client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key=os.getenv("API_KEY"),
)

# Run YOLO-World with custom class names.
results = client.infer_from_yolo_world(
    inference_input=image_path,
    class_names=["car", "truck"],
    model_version="v2-s",
    confidence=0.05,
)

# The SDK returns a list of results, one per input image.
detections = sv.Detections.from_inference(results[0])

# Draw boxes and "<class> <confidence>" labels, then save the result.
image = cv2.imread(image_path)
labels = [
    f"{name} {conf:.2f}"
    for name, conf in zip(detections.data["class_name"], detections.confidence)
]
annotated = sv.BoxAnnotator().annotate(scene=image.copy(), detections=detections)
annotated = sv.LabelAnnotator().annotate(scene=annotated, detections=detections, labels=labels)
cv2.imwrite("annotated.png", annotated)
```

<figure><img src="/files/VzSiFyFTGQC6sGlYMR4I" alt=""><figcaption></figcaption></figure>

The `class_names` argument accepts any list of class names. Available `model_version` values: `v2-s`, `v2-m`, `v2-l`, `v2-x`, `s`, `m`, `l`, `x`.

{% hint style="info" %}
Set `api_url` to match your deployment target:

* `https://serverless.roboflow.com` for the Serverless Hosted API.
* `http://localhost:9001` for a local [Inference](https://inference.roboflow.com/) server.
* Your [Dedicated Deployment](/deploy/dedicated-deployments.md) URL for a private endpoint.
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.roboflow.com/deploy/supported-models/yolo-world.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
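For example, the `ask` URL can be assembled with the Python standard library so the question is properly percent-encoded (the question text here is just an illustration):

```python
from urllib.parse import urlencode

base = "https://docs.roboflow.com/deploy/supported-models/yolo-world.md"
question = "Which YOLO-World model versions are available?"
url = f"{base}?{urlencode({'ask': question})}"
print(url)

# Fetch the answer (requires network access):
# import urllib.request
# print(urllib.request.urlopen(url).read().decode())
```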
