Inference - Instance Segmentation
Leverage your custom trained model for cloud-hosted inference or edge deployment.
Once training finishes, your model is ready to integrate into your application. There are several ways to run inference with your trained model, each accessible via quick links in the web UI after training completes:
Deployment options available after training
- A server-hosted API endpoint is generated for your model. You can POST a base64-encoded image, or pass an image URL as input via the query string. This method is device-agnostic, and sample code is provided in several languages.
- Connect computer vision models to your business logic with our pre-made templates.