Inference - Object Detection
Leverage your custom trained model for cloud-hosted inference or edge deployment.
After training completes, your model is available to integrate into your application.
There are several ways to use your trained model for inference; quick links to each appear in the web UI once training has finished:
Deployment options available after training
- A server-hosted API endpoint is generated for your model. You can POST a base64-encoded image, or pass an image URL as a query-string parameter. This method is device agnostic, and sample code is provided in several languages. You can make predictions from any device with an Internet connection, and scaling the server infrastructure is fully managed for you; you are billed based on the number of predictions your model makes each month.
- You can use an edge computing device like an NVIDIA Jetson as a drop-in, API-compatible replacement for the hosted API.
- You can also use roboflow.js to make predictions in a web browser, running on-device. This is useful for applications that need real-time predictions or that operate in bandwidth-constrained environments. On-device inference is billed based on the number of unique devices used per month.
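As a minimal sketch of the hosted-API option above, the following Python uses only the standard library to POST a base64-encoded image to a detection endpoint. The model ID (`my-project`), version number, API key, and image filename are placeholders you would replace with your own values from the web UI.

```python
# Hedged sketch of calling a hosted detection endpoint with a base64 image.
# "my-project", the version, and "YOUR_API_KEY" are placeholder values.
import base64
import json
import urllib.parse
import urllib.request

API_URL = "https://detect.roboflow.com"

def build_request(model_id: str, version: int, api_key: str,
                  image_bytes: bytes) -> urllib.request.Request:
    """Build a POST request whose body is the base64-encoded image."""
    query = urllib.parse.urlencode({"api_key": api_key})
    url = f"{API_URL}/{model_id}/{version}?{query}"
    body = base64.b64encode(image_bytes)
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

def detect(model_id: str, version: int, api_key: str,
           image_path: str) -> dict:
    """Send an image file to the endpoint and return the parsed JSON response."""
    with open(image_path, "rb") as f:
        req = build_request(model_id, version, api_key, f.read())
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A call like `detect("my-project", 1, "YOUR_API_KEY", "example.jpg")` would return a dictionary of predictions; passing an image URL via the query string instead of a base64 body is an equally valid variant.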
To run your model on videos, you can use our open source video inference utility (it works with both our Hosted Inference API and the Jetson edge server):
You can send frames from your device's webcam to roboflow.js via your web browser, or to our Hosted API using our Python webcam inference example code:
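The webcam loop can be sketched as follows. This is not Roboflow's official example code: it assumes OpenCV (`cv2`) is installed for frame capture, and the endpoint URL and API key are placeholders.

```python
# Hedged sketch: capture webcam frames with OpenCV and POST each one,
# base64-encoded, to a hosted detection endpoint.
# ENDPOINT below is a placeholder, not a real model URL.
import base64
import json
import urllib.request

ENDPOINT = "https://detect.roboflow.com/my-project/1?api_key=YOUR_API_KEY"

def frame_to_body(jpeg_bytes: bytes) -> bytes:
    """Base64-encode an already-JPEG-compressed frame for the request body."""
    return base64.b64encode(jpeg_bytes)

def stream_webcam(endpoint: str = ENDPOINT) -> None:
    """Read frames from the default webcam and print predictions for each."""
    import cv2  # third-party; imported here so the rest of the file loads without it
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            req = urllib.request.Request(
                endpoint,
                data=frame_to_body(jpeg.tobytes()),
                headers={"Content-Type": "application/x-www-form-urlencoded"},
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                print(json.load(resp).get("predictions"))
    finally:
        cap.release()
```

Posting every frame trades latency for simplicity; a production loop would typically throttle the request rate or switch to on-device inference with roboflow.js when bandwidth is limited.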
Connect computer vision models to your business logic with our pre-made templates.