Video Inference JSON Output Format

The output of the video inference process is a JSON file. This page explains its format.

File Structure

The JSON output file contains several lists of equal length. The same index across all lists corresponds to the same frame of the input video.

{
    "frame_offsets": [...],
    "time_offsets": [...],
    "fine_tuned_model_id": [...],
    "clip": [...],
    "gaze": [...],
    ...
}

The frame_offsets list gives the video frame numbers that were extracted and inferred on by each model. The list always starts at 0. For example, if the input video runs at 24 frames per second and we specify an infer_fps of 4 frames per second, then the frame indices selected for inference (frame_offsets) will be [0, 6, 12, 18, 24, 30, ...].

It is best practice to select an infer_fps that is a factor of the video frame rate. If the video frame rate is not a perfect multiple of the infer_fps, the frame_offsets will be an approximation. Choose the minimum infer_fps that works for your application, since higher values result in greater cost and slower results. The system will not return output if infer_fps is greater than the video frame rate.
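The sampling described above can be sketched as follows. This is an illustration only; the function name and signature are ours, not part of any Roboflow SDK, and the service's exact rounding behavior may differ when infer_fps is not a factor of the frame rate.

```python
def frame_offsets(total_frames: int, video_fps: int, infer_fps: int) -> list[int]:
    """Approximate the frame indices sampled for video inference."""
    if infer_fps > video_fps:
        raise ValueError("infer_fps must not exceed the video frame rate")
    # When infer_fps is not a factor of video_fps, this step (and therefore
    # the resulting offsets) is an approximation.
    step = round(video_fps / infer_fps)
    return list(range(0, total_frames, step))

# A 24 fps video sampled at infer_fps=4 keeps every 6th frame:
print(frame_offsets(36, 24, 4))  # [0, 6, 12, 18, 24, 30]
```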

The time_offsets list indicates the time in the video playback at which each frame occurs. Each entry is in seconds, rounded to 4 decimal places.

The rest of the lists contain inference data. Each element of a list is either a dictionary of results or null, if that particular frame was not successfully inferred by the model.
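Because the lists are index-aligned, iterating over the output is a matter of zipping them together and skipping null entries. A minimal sketch, where the sample data and the model key "my-model/1" are placeholders for your own output file and fine-tuned model ID:

```python
import json

# Placeholder output in the format described above; in practice you would
# json.load() the file produced by the video inference job.
results = json.loads("""
{
    "frame_offsets": [0, 6, 12],
    "time_offsets": [0.0, 0.25, 0.5],
    "my-model/1": [
        {"predictions": []},
        null,
        {"predictions": [{"x": 360.0, "y": 114.0, "width": 36.0,
                          "height": 104.0, "confidence": 0.62,
                          "class": "zebra", "class_id": 1}]}
    ]
}
""")

detections = []
for frame, t, result in zip(results["frame_offsets"],
                            results["time_offsets"],
                            results["my-model/1"]):
    if result is None:
        continue  # this frame was not successfully inferred
    for p in result["predictions"]:
        detections.append((frame, t, p["class"]))

print(detections)  # [(12, 0.5, 'zebra')]
```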

In the next section, we elaborate on the results returned for different model types.

Object Detection

The example below illustrates one element of an object detection model's inference output list.

{
    "time": 0.06994929000006778, 
    "image": {"width": 480, "height": 360}, 
    "predictions": [
        {
            "x": 360.0, 
            "y": 114.0, 
            "width": 36.0, 
            "height": 104.0, 
            "confidence": 0.6243005394935608, 
            "class": "zebra", 
            "class_id": 1
        }
    ]
}

The time field is the inference computation time and can usually be ignored.

The image field shows the dimensions of the input.

The predictions list contains one entry for each detected object.
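Note that x and y are the *center* of each bounding box, in pixels. A small helper (ours, not part of the SDK) to convert a prediction to top-left/bottom-right corner coordinates:

```python
def to_corners(p: dict) -> tuple[float, float, float, float]:
    """Convert a center-based prediction box to (x0, y0, x1, y1) corners."""
    x0 = p["x"] - p["width"] / 2
    y0 = p["y"] - p["height"] / 2
    return x0, y0, x0 + p["width"], y0 + p["height"]

# Using the zebra prediction from the example above:
pred = {"x": 360.0, "y": 114.0, "width": 36.0, "height": 104.0}
print(to_corners(pred))  # (342.0, 62.0, 378.0, 166.0)
```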

Gaze Detection

The example below illustrates one element of the gaze detection model's inference output list.

{
    "predictions": [
        {
            "face": {
                "x": 236,
                "y": 208,
                "width": 94,
                "height": 94,
                "confidence": 0.9232424,
                "class": "face",
                "class_confidence": null,
                "class_id": 0,
                "tracker_id": null
            },
            "landmarks": [
                {
                    "x": 207,
                    "y": 183
                },
                ... (6 landmarks)
            ],
            "yaw": 0.82342350129345,
            "pitch": 0.23152452412
        }
        ...
    ],
    "time": 0.025234234,
    "time_face_det": null,
    "time_gaze_det": null
}
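The yaw and pitch values appear to be angles in radians (an assumption based on the sample magnitudes; check your own output against the gaze detection docs). Converting them to degrees for readability:

```python
import math

# yaw/pitch taken from the example above; radians is our assumption here.
yaw, pitch = 0.82342350129345, 0.23152452412
print(f"yaw: {math.degrees(yaw):.1f} deg, pitch: {math.degrees(pitch):.1f} deg")
```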

Classification

<coming soon!>

Instance Segmentation

<coming soon!>


Last updated 3 months ago
