Serverless Hosted API
Run Workflows and Model Inference on GPU-accelerated auto-scaling infrastructure in the Roboflow cloud.
Inference server

```python
from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    # api_url="http://localhost:9001",  # Self-hosted Inference server
    api_url="https://serverless.roboflow.com",  # Our Serverless Hosted API
    api_key="API_KEY",  # optional: required only for private models and data
)

result = CLIENT.infer("image.jpg", model_id="model-id/1")
print(result)
```
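As a rough sketch of what you might do with the response: for object detection models, `result` typically contains a `"predictions"` list whose entries carry a class name, a confidence score, and center-based box coordinates. The dictionary below uses illustrative values, not real inference output, and the threshold name is our own choice:

```python
# Illustrative response shape for an object detection model
# (values are made up for this sketch, not real inference output).
result = {
    "predictions": [
        {"x": 320.0, "y": 240.0, "width": 80.0, "height": 60.0,
         "class": "helmet", "confidence": 0.91},
        {"x": 100.0, "y": 150.0, "width": 40.0, "height": 40.0,
         "class": "helmet", "confidence": 0.42},
    ],
}

# Keep only confident detections and convert center-based boxes
# (x, y, width, height) into corner coordinates (x0, y0, x1, y1).
CONFIDENCE_THRESHOLD = 0.5  # hypothetical cutoff for this example
boxes = []
for pred in result["predictions"]:
    if pred["confidence"] < CONFIDENCE_THRESHOLD:
        continue
    x0 = pred["x"] - pred["width"] / 2
    y0 = pred["y"] - pred["height"] / 2
    x1 = pred["x"] + pred["width"] / 2
    y1 = pred["y"] + pred["height"] / 2
    boxes.append((pred["class"], pred["confidence"], (x0, y0, x1, y1)))

print(boxes)
```

The exact fields vary by model type (classification and segmentation responses differ), so inspect `print(result)` for your own model before relying on a particular shape.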