Our one-click training solution gives you a state-of-the-art model, customized for your dataset and hosted at an API endpoint, in no time.
Roboflow offers an AutoML product called Roboflow Train. It is the easiest way to train and deploy a state-of-the-art object detection model on your custom dataset. It's literally one click -- we'll do the rest. When your model is done training, you'll receive access to a hosted inference API to query your model for predictions via your programming language of choice (or a simple demo web app), a TensorFlow.js model you can embed in your web application, and an on-device inference server you can run on edge devices like the NVIDIA Jetson.
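As a rough sketch of what querying the hosted API from code might look like, the snippet below builds an endpoint URL and base64-encodes an image for upload. The URL pattern, host, and parameter names here are illustrative assumptions, not taken from this page; consult the Inference Documentation for the authoritative request format.

```python
import base64

def inference_url(model_id: str, version: int, api_key: str,
                  host: str = "https://detect.roboflow.com") -> str:
    # Assumed URL shape for illustration only; see the Inference
    # Documentation for the real endpoint format.
    return f"{host}/{model_id}/{version}?api_key={api_key}"

def encode_image(path: str) -> str:
    # Base64-encode a local image so it can be sent in a POST body.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

url = inference_url("my-dataset", 1, "YOUR_API_KEY")
# A real request might then be made with an HTTP client, e.g.:
#   requests.post(url, data=encode_image("example.jpg"),
#                 headers={"Content-Type": "application/x-www-form-urlencoded"})
```

The actual network call is left commented out since it requires a trained model and a valid API key.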
Choose image preprocessing and image augmentation settings, then generate a version of your dataset. Click "Use Roboflow Train" then "Start Training" and we will train a model for you and return an API you can use.
The default image resize is 416x416. If you select a "Resize" preprocessing option, we will train a model whose native input size matches your chosen resize dimensions.
To make your model faster, try exporting images in a smaller size.
To make it more accurate, try a larger size.
The maximum size we support is 1024x1024, which is quite slow to train and run inference with. If you're not sure what to pick, we recommend starting with 416x416.
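To make the speed/accuracy tradeoff concrete: compute cost grows roughly with the number of input pixels, so doubling the side length roughly quadruples the work. That scaling is a general rule of thumb, not a figure from Roboflow; the arithmetic below just illustrates it for the sizes mentioned above.

```python
def relative_cost(size: int, baseline: int = 416) -> float:
    """Approximate relative compute cost of a square input size,
    assuming cost scales with pixel count (a rule of thumb only)."""
    return (size * size) / (baseline * baseline)

# 1024x1024 processes roughly 6x the pixels of 416x416:
print(round(relative_cost(1024), 1))  # 6.1
```

This is why a 1024x1024 model is noticeably slower to train and serve than the 416x416 default.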
The larger your dataset and the larger your output images, the longer your dataset will take to train. We will email you when it's finished. In most cases, this should be under 24 hours.
When your model has finished training, you can see the metrics on the dataset version page, including mean average precision (mAP), precision, recall, and more.
You can also see how your model performed on the specific images in your test and validation sets by using the "Review Inference Examples" feature to get a subjective feel for what these training metrics mean on your particular dataset.
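For intuition about what those numbers mean: precision is the fraction of predicted boxes that were correct, and recall is the fraction of ground-truth objects the model actually found (mAP aggregates precision over recall levels). The counts in the example below are made up purely for illustration.

```python
def precision(tp: int, fp: int) -> float:
    # Of everything the model predicted, how much was right?
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp: int, fn: int) -> float:
    # Of everything actually in the images, how much did the model find?
    return tp / (tp + fn) if tp + fn else 0.0

# Hypothetical counts: 80 correct detections, 20 false alarms, 40 missed objects.
print(precision(80, 20))  # 0.8
print(recall(80, 40))     # ~0.667
```

A model with high precision but low recall is conservative (few false alarms, many misses); the reverse means it over-predicts. Reviewing inference examples alongside these numbers shows which failure mode you have.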
Deploying Your Model
After training, your model is ready to be used for inference and embedded in a custom application! See the Inference Documentation page for all the options.