Model Monitoring

A guide to Model Monitoring with Roboflow.


Last updated 2 months ago


Roboflow's Model Monitoring dashboard gives you unparalleled visibility into your models, from prototyping all the way through production. With Model Monitoring, you can view high-level statistics to gain insight into how your models perform over time, or inspect individual inference requests to see how your models handle edge cases.

Accessing Model Monitoring

Model Monitoring is only available on select plans. For the latest information, see our Pricing page.

To view your Model Monitoring dashboard, click the "Monitoring" tab in your workspace.

Workspace Dashboard

You will immediately see three statistics pertaining to your models:

  • Total requests: The total number of inferences made to all models in your workspace.

  • Average confidence: The average confidence across all predictions made by your models.

  • Average inference time: The average time in seconds it took to produce predictions, including image preprocessing, across all inferences.

The % change values compare the current period to the previous period. By default, these statistics show data from the last week; however, you can modify the time range using the buttons above the statistics.
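As a sketch of how these statistics relate to the underlying inference records, the snippet below computes all three, plus the period-over-period % change. The record shape here is illustrative, not Roboflow's actual schema.

```python
# Illustrative record shape: one dict per inference request, with a
# per-request inference time and a list of per-prediction confidences.

def summarize(records):
    """Return (total_requests, avg_confidence, avg_inference_time)."""
    total = len(records)
    confidences = [p["confidence"] for r in records for p in r["predictions"]]
    avg_conf = sum(confidences) / len(confidences) if confidences else 0.0
    avg_time = sum(r["inference_time"] for r in records) / total if total else 0.0
    return total, avg_conf, avg_time

def pct_change(current, previous):
    """Period-over-period % change, as shown next to each statistic."""
    if previous == 0:
        return None  # no baseline to compare against
    return (current - previous) / previous * 100

current_period = [
    {"inference_time": 0.12, "predictions": [{"confidence": 0.9}, {"confidence": 0.8}]},
    {"inference_time": 0.08, "predictions": [{"confidence": 0.7}]},
]
previous_period = [
    {"inference_time": 0.10, "predictions": [{"confidence": 0.6}]},
]

total, avg_conf, avg_time = summarize(current_period)
prev_total, _, _ = summarize(previous_period)
print(total, round(avg_conf, 2), round(avg_time, 2))  # 2 0.8 0.1
print(pct_change(total, prev_total))                  # 100.0
```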

The Models table lists every model that has received inferences; clicking a model takes you to its Model Dashboard.

You can also access tabs for viewing Recent Inferences (across all models) and setting Alerts.

Model Dashboard

Under the Models tab, you can select a specific model to view its data. There, you'll see the same statistics as the Workspace Dashboard, but specific to one model.

Here, in addition to the statistics, you can view the number of detections for each class in the model, and see its distribution with respect to other classes.

Clicking on the "See All Inferences" button at the top right of the table will navigate you to the Inferences Table.

Inferences Table

Here, you can see all the prediction results for your model. In addition, you will also see any custom metadata that was added to your inferences. To view a subset of your inferences, you can use the filters on the top-right of the table.

Inference Details

From the Inferences Table, you have the ability to drill down into a specific inference and see more details. Let's break it down in the order shown in this image:

  1. Image: Here, you can see the image that was inferred. Note: this isn't enabled by default; see Enabling Inference Images.

  2. Inference Details: On this panel, you can view all the details and properties of your inference request. All available fields are shown by default; if you want to hide some, click the "Cog" icon in the top right corner (this setting persists in your browser).

  3. Field search: On some fields, where available, there is an option to search for inferences based on that field. In the highlighted example, it searches for inferences from the same model.

  4. Detections: This collapsible pane shows a list of detections received from that inference. Click the "Class" and "Confidence" table headers to change the sort order of the table.

  5. Download & Link buttons: Here, you can download the image associated with the inference or copy a link to this inference's details page for later reference.

Enabling Inference Images

Images saved by Active Learning or Dataset Upload count the same as uploading an image to your project. Credit, limit, or quota usage may apply depending on your plan type.

There are two ways to enable inference images to show up in Model Monitoring:

  • Roboflow Dataset Upload block: In Workflows, you can add a "Roboflow Dataset Upload" block. Once you hook up the predictions and prediction image, it will show up in Model Monitoring.

  • Active Learning (legacy): For legacy workspaces, you can enable "Active Learning" rules from your project's page.
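As an illustration, a Roboflow Dataset Upload step in the Workflows JSON editor looks roughly like the following. The field names, reference paths, and block version here are assumptions; check the block's schema in the JSON Editor for the exact contract.

```json
{
  "type": "roboflow_core/roboflow_dataset_upload@v2",
  "name": "data_collection",
  "images": "$inputs.image",
  "predictions": "$steps.model.predictions",
  "target_project": "your-project-id"
}
```

Once the block's `images` and `predictions` inputs are wired to your model step, inference images will appear in Model Monitoring.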

Alerting

You and other members of your team can subscribe to real-time alerts when issues or anomalies occur with your model. For example, if your model's confidence suddenly decreases, or your Inference Server goes down and your model stops running, your team will receive an email notification.

See more information on the Alerting page.


Custom Metadata

To attach additional metadata to an inference, you can use Model Monitoring's custom metadata feature. Using custom metadata, you can add information to an inference such as the location where the image was taken, the expected value of the prediction, and so on. Your custom metadata will show up in the "Recent Inferences" and "All Inferences" views.

To attach custom metadata to an inference result, please see the Custom Metadata API documentation.
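A rough sketch of what attaching custom metadata could look like from Python. The payload shape and the endpoint in the comment are assumptions, not Roboflow's actual API; consult the Custom Metadata API documentation for the real contract.

```python
import json

def build_metadata_payload(inference_id, metadata):
    """Bundle an inference ID with arbitrary key/value metadata.
    The payload shape is illustrative, not Roboflow's actual schema."""
    return {"inference_id": inference_id, "metadata": metadata}

payload = build_metadata_payload(
    "abc-123",  # hypothetical inference ID returned by an earlier request
    {"location": "warehouse-4", "expected_class": "pallet"},
)
print(json.dumps(payload))

# To send it (endpoint path is hypothetical; requires an API key, so not
# executed here):
# import requests
# requests.post("https://api.roboflow.com/custom-metadata?api_key=YOUR_KEY",
#               json=payload)
```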

Model Monitoring API

For automation and integration into external systems, you can pull Model Monitoring statistics using our API for model monitoring.

Supported Deployments

Model Monitoring supports inference requests made using Roboflow's Hosted API or the Roboflow Inference Server, provided the Inference Server has internet access. This includes edge deployments that use Roboflow's License Server.
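A minimal sketch of a Hosted API request that Model Monitoring can capture, using Roboflow's Python `inference-sdk` client (`pip install inference-sdk`). The model ID, image path, and API key are placeholders.

```python
def run_inference(image_path: str, model_id: str, api_key: str):
    """Send one image to Roboflow's Hosted API. The request then appears in
    Model Monitoring (the image itself only if inference images are enabled)."""
    # Deferred import: only needed when actually making the request.
    from inference_sdk import InferenceHTTPClient

    client = InferenceHTTPClient(
        api_url="https://detect.roboflow.com",  # Roboflow's Hosted API
        api_key=api_key,
    )
    return client.infer(image_path, model_id=model_id)

# Example usage (requires a real API key and network access):
# result = run_inference("image.jpg", "my-project/3", "YOUR_API_KEY")
# print(result["predictions"])
```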

At this time, Model Monitoring does not support inference requests made using the Inference Pipeline; however, we plan to add support in the near future.