Evaluate Trained Models
Use Model Evaluation to explore how your model performs on your test dataset.
Model evaluations show:
A confusion matrix, which you can use to find the specific classes on which your model thrives or struggles, and
An interactive vector explorer which lets you identify clusters of images where your model does well or poorly.
You can use model evaluation to identify areas of improvement for your model.
Model evaluations are automatically run for all models trained by paid users. An evaluation may take several minutes to run for a dataset of a few hundred images, and several hours for large datasets with thousands of images or more.
To find the confusion matrix and vector explorer for your model, open any trained model version in your project. Then, click the "View Evaluation" button:
A window will open where you can view your confusion matrix and vector analysis.
Your confusion matrix shows how well your model performs on different classes.
Your confusion matrix is calculated by running the images from your test and validation sets through your trained model. The results from your model are then compared with the "ground truth" from your dataset annotations.
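Conceptually, the comparison works like the sketch below: each prediction is matched to a ground-truth box in the same image by overlap (IoU), matched pairs fill in matrix cells, and unmatched predictions or unmatched ground-truth boxes fall into "background" cells. This is only an illustration of the idea, not Roboflow's actual evaluation code; the IoU threshold, data structures, and helper names are assumptions.

```python
# Illustrative sketch only -- not Roboflow's evaluation pipeline.
# Boxes are assumed to be dicts like {"class": str, "box": (x1, y1, x2, y2)}.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def evaluate_image(ground_truth, predictions, iou_threshold=0.5):
    """Match predictions to ground truth and return (actual, predicted) pairs for one image."""
    cells = []
    unmatched_gt = list(ground_truth)
    for pred in predictions:
        # Find the best-overlapping ground-truth box that hasn't been matched yet.
        best = max(unmatched_gt, key=lambda gt: iou(gt["box"], pred["box"]), default=None)
        if best and iou(best["box"], pred["box"]) >= iou_threshold:
            cells.append((best["class"], pred["class"]))   # correct match or misclassification
            unmatched_gt.remove(best)
        else:
            cells.append(("background", pred["class"]))    # false positive
    for gt in unmatched_gt:
        cells.append((gt["class"], "background"))          # false negative (missed object)
    return cells
```

Counting these (actual, predicted) pairs across the whole test set produces the confusion matrix: the "background" row collects false positives and the "background" column collects false negatives.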
With the confusion matrix tool, you can identify:
Classes where your model performs well.
Classes where your model assigns the wrong class to an object (misclassifications).
Instances where your model identifies an object where none is present in your ground truth (false positives), or misses an object that is present (false negatives).
Here is an example confusion matrix:
By default, the confusion matrix shows how your model performs at a 50% confidence threshold. You can adjust the confidence threshold using the Confidence Threshold slider. Your confusion matrix, precision, and recall will update as you move the slider.
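To see why the numbers change as you move the slider, consider the hedged sketch below: predictions below the chosen confidence are discarded before metrics are computed, so raising the threshold typically trades recall for precision. The formulas are the standard definitions of precision and recall; the data structure and function name are hypothetical, not part of Roboflow's code.

```python
def precision_recall(predictions, num_ground_truth, confidence_threshold=0.5):
    """Recompute precision and recall after filtering predictions by confidence.

    `predictions` is assumed to be a list of dicts such as
    {"confidence": 0.87, "is_true_positive": True} -- illustration only.
    """
    kept = [p for p in predictions if p["confidence"] >= confidence_threshold]
    true_positives = sum(p["is_true_positive"] for p in kept)
    false_positives = len(kept) - true_positives
    false_negatives = num_ground_truth - true_positives

    precision = true_positives / (true_positives + false_positives) if kept else 0.0
    recall = true_positives / (true_positives + false_negatives) if num_ground_truth else 0.0
    return precision, recall

# Sweeping the threshold mirrors moving the Confidence Threshold slider:
# for t in (0.25, 0.5, 0.75):
#     print(t, precision_recall(predictions, num_ground_truth, t))
```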
You can click on each box in the confusion matrix to see what images appear in the corresponding category.
For example, you can click any box in the "False Positive" column to find images where your model predicted an object that is not present in your ground truth data.
You can click on an individual image to enter an interactive view where you can toggle between the ground truth (your annotations) and the model predictions:
Click "Ground Truth" to see your annotations and "Model Predictions" to see what your model returned.