YOLO-World
YOLO-World is a zero-shot object detection model: describe the objects you want in plain text and it can detect them without any additional training.
You can also run YOLO-World locally with Inference, our open source inference server.
API Reference
The base URL for our hosted API is https://infer.roboflow.com.
Run the YOLO-World zero-shot object detection model.
Query parameters
  api_key: Roboflow API Key that will be passed to the model during initialization for artifact retrieval

Body
Request for YOLO-World zero-shot predictions.
  api_key: Roboflow API Key that will be passed to the model during initialization for artifact retrieval
  usage_billable: Default: true
  model_type: The type of the model, usually referring to what task the model performs. Default: "object-detection"
  disable_preproc_auto_orient: If true, the auto orient preprocessing step is disabled for this call. Default: false
  disable_preproc_contrast: If true, the auto contrast preprocessing step is disabled for this call. Default: false
  disable_preproc_grayscale: If true, the grayscale preprocessing step is disabled for this call. Default: false
  disable_preproc_static_crop: If true, the static crop preprocessing step is disabled for this call. Default: false
  text: A list of strings (List[str]) naming the classes to detect. Example: ["person","dog","cat"]
  yolo_world_version_id: Example: "l"
  confidence: Example: 0.4
Successful Response
Validation Error
POST /yolo_world/infer HTTP/1.1
Host: infer.roboflow.com
Content-Type: application/json
Accept: */*
Content-Length: 431
{
"id": "text",
"api_key": "text",
"usage_billable": true,
"start": 1,
"source": "text",
"source_info": "text",
"model_id": "text",
"model_type": "object-detection",
"image": [
{
"type": "url",
"value": "http://www.example-image-url.com"
}
],
"disable_preproc_auto_orient": false,
"disable_preproc_contrast": false,
"disable_preproc_grayscale": false,
"disable_preproc_static_crop": false,
"text": [
"person",
"dog",
"cat"
],
"yolo_world_version_id": "l",
"confidence": 0.4
}
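As a sketch, the request body above can be built and sent from Python using only the standard library. The endpoint path, base URL, and field names come from this page; the API key and image URL are placeholders you must replace, and the local-server URL is an assumption about Inference's default port.

```python
import json
import urllib.request

API_KEY = "YOUR_ROBOFLOW_API_KEY"  # placeholder - use your own key
# Hosted endpoint from this page; for a local Inference server you would
# point this at e.g. "http://localhost:9001" (assumed default port).
BASE_URL = "https://infer.roboflow.com"

def build_yolo_world_request(classes, image_url, version_id="l", confidence=0.4):
    """Build the JSON body documented above for POST /yolo_world/infer."""
    return {
        "api_key": API_KEY,
        "model_type": "object-detection",
        "image": [{"type": "url", "value": image_url}],
        "text": classes,
        "yolo_world_version_id": version_id,
        "confidence": confidence,
    }

payload = build_yolo_world_request(["person", "dog", "cat"],
                                   "http://www.example-image-url.com")

# Sending the request (commented out so the sketch runs without network access):
# req = urllib.request.Request(
#     f"{BASE_URL}/yolo_world/infer",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     predictions = json.load(resp)["predictions"]
```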
{
"visualization": "text",
"inference_id": "text",
"frame_id": 1,
"time": 1,
"image": [
{
"width": 1,
"height": 1
}
],
"predictions": [
{
"x": 1,
"y": 1,
"width": 1,
"height": 1,
"confidence": 1,
"class": "text",
"class_confidence": 1,
"class_id": 1,
"tracker_id": 1,
"detection_id": "text",
"parent_id": "text"
}
]
}
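A minimal sketch of consuming the response schema above. It assumes, as is conventional for Roboflow detection responses, that x and y give the center of each box; under that assumption it converts predictions to corner coordinates. The sample response dict is a trimmed, made-up illustration of the schema, not real model output.

```python
def to_corners(pred):
    """Convert a prediction's box (assumed convention: x/y = box center)
    to (x_min, y_min, x_max, y_max)."""
    half_w, half_h = pred["width"] / 2, pred["height"] / 2
    return (pred["x"] - half_w, pred["y"] - half_h,
            pred["x"] + half_w, pred["y"] + half_h)

response = {  # trimmed example mirroring the schema above
    "predictions": [
        {"x": 100, "y": 80, "width": 40, "height": 60,
         "confidence": 0.9, "class": "dog"}
    ]
}

for pred in response["predictions"]:
    box = to_corners(pred)
    # box == (80.0, 50.0, 120.0, 110.0) for the sample prediction
    print(pred["class"], pred["confidence"], box)
```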