Metrics Explorer in Model Evaluation

Model Evaluations for models trained on or uploaded to Roboflow now include a Metrics Explorer. This section shows how your model's precision, recall, and F1 score, validated against your dataset's validation or test set, change as you set different confidence thresholds.
The Metrics Explorer calculates an "Optimal Confidence" level for your dataset. You can drag the line on the Metrics Explorer graph to see how your precision, recall, and F1 scores change at different confidence thresholds.
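To make the threshold sweep concrete, here is a minimal sketch of how precision, recall, and F1 vary with a confidence threshold. The names `detections`, `num_ground_truths`, and `metrics_at_threshold` are hypothetical, prediction-to-ground-truth matching is assumed to have already happened, and picking the F1-maximizing threshold is one common definition of an optimal confidence, not necessarily the formula Roboflow uses.

```python
# Sketch only: assumes predictions have already been matched to ground
# truth. `detections` is a hypothetical list of (confidence, is_true_positive)
# pairs; `num_ground_truths` is the number of labeled objects in the
# validation/test set.

def metrics_at_threshold(detections, num_ground_truths, threshold):
    """Precision, recall, and F1 for predictions at or above a threshold."""
    kept = [is_tp for conf, is_tp in detections if conf >= threshold]
    true_positives = sum(kept)
    false_positives = len(kept) - true_positives

    precision = true_positives / len(kept) if kept else 0.0
    recall = true_positives / num_ground_truths if num_ground_truths else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

detections = [(0.95, True), (0.90, True), (0.72, False), (0.60, True), (0.30, False)]
num_ground_truths = 4

# Sweep thresholds from 0.00 to 1.00 and take the F1-maximizing one
# (an assumed definition of "Optimal Confidence", not Roboflow's documented one).
best = max(
    (metrics_at_threshold(detections, num_ground_truths, t / 100) + (t / 100,)
     for t in range(0, 101)),
    key=lambda m: m[2],  # maximize F1
)
print(f"precision={best[0]:.2f} recall={best[1]:.2f} f1={best[2]:.2f} at threshold={best[3]:.2f}")
```

Raising the threshold typically trades recall for precision; the graph in the Metrics Explorer visualizes this same trade-off over your evaluation set.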
If you trained your model before July 14th, 2025, you may need to click a button to run a new Model Evaluation. This is necessary so Roboflow can compute the statistics the Metrics Explorer requires.