Web Browser

Real-time predictions at the edge with roboflow.js


Last updated 26 days ago


For most business applications, the Hosted API is suitable. But for many consumer applications and some enterprise use cases, a server-hosted model is not workable (for example, if your users are bandwidth constrained or need lower latency than you can achieve using a remote API).

inferencejs is a custom layer on top of Tensorflow.js that enables real-time inference via JavaScript using models trained on Roboflow.

See the inferencejs reference

Learning Resources

Try Your Model With a Webcam

Once you have a trained model, you can easily test it with your webcam using the "Try with Webcam" button.

The webcam demo is a sample app that you can download and tinker with via the "Get Code" link.

You can try out a webcam demo of a hand-detector model here (it is trained on the public EgoHands dataset).

Interactive Replit Environment

We have published a "Getting Started" project on Repl.it with an accompanying tutorial showing how to deploy YOLOv8 models using our Repl.it template.

GitHub Template

The Roboflow homepage uses inferencejs to power the COCO inference widget. The README contains instructions on how to use the repository template to deploy a model to the web using GitHub Pages.

Documentation

If you would like more details regarding specific functions in inferencejs, check out our documentation page, or click on any mention of an inferencejs method in the guide below to be taken to the respective documentation.

Installation

To add inferencejs to your project, install the package with npm or add a script tag reference to your page's <head> tag.

npm install inferencejs

or

<script src="https://cdn.jsdelivr.net/npm/inferencejs"></script>

Initializing inferencejs

Authenticating

You can obtain your publishable_key from the Roboflow workspace settings.

Note: your publishable_key is used with inferencejs, not your private API key (which should remain secret).

Start by importing InferenceEngine and creating a new inference engine object.

inferencejs uses web workers so that multiple models can be used without blocking the main UI thread. Each model is loaded through the InferenceEngine, our web worker manager, which abstracts the necessary thread management for you.

import { InferenceEngine } from "inferencejs";
const inferEngine = new InferenceEngine();

Now we can load models from Roboflow using your publishable_key and the model metadata (model name and version), along with configuration parameters like confidence threshold and overlap threshold.

const workerId = await inferEngine.startWorker("[model name]", "[version]", "[publishable key]");

inferencejs will now start a worker that runs the chosen model. The returned worker id identifies that worker within the InferenceEngine, and we will use it for inference. To run inference on the model, invoke the infer method on the InferenceEngine.
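Each call to startWorker starts another worker, so if your app may request the same model from several places, it can help to memoize the returned worker id. Below is a hypothetical caching helper (not part of inferencejs); the engine is injected, so it works with any object exposing a startWorker method, such as an InferenceEngine instance:

```javascript
// Hypothetical helper: start each model's worker once and reuse its id.
// `engine` is any object with startWorker(model, version, key) -> Promise<id>.
function makeWorkerCache(engine) {
  const cache = new Map();
  return function getWorker(modelName, version, publishableKey) {
    const cacheKey = `${modelName}/${version}`;
    if (!cache.has(cacheKey)) {
      // Cache the promise itself, so concurrent callers share one startWorker call.
      cache.set(cacheKey, engine.startWorker(modelName, version, publishableKey));
    }
    return cache.get(cacheKey);
  };
}
```

With inferencejs, you would create the cache once with your InferenceEngine instance and call getWorker wherever a worker id is needed.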

Let's load an image and infer on our worker.

const image = document.getElementById("image"); // get image element with id `image`
const predictions = await inferEngine.infer(workerId, image); // infer on image

The infer method accepts a variety of image formats (HTMLImageElement, HTMLVideoElement, ImageBitmap, or a TFJS Tensor).

This returns an array of predictions (instances of a prediction class, in this case RFObjectDetectionPrediction).
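For intuition, here is an illustrative sketch of post-processing such predictions. It assumes each prediction looks like {class, confidence, bbox: {x, y, width, height}} with x/y as the box center, which is the shape Roboflow detections commonly use; verify the exact fields against the inferencejs reference:

```javascript
// Illustrative only: convert a center-based bbox to corner coordinates.
function toCorners(bbox) {
  return {
    x1: bbox.x - bbox.width / 2,
    y1: bbox.y - bbox.height / 2,
    x2: bbox.x + bbox.width / 2,
    y2: bbox.y + bbox.height / 2,
  };
}

// Keep predictions at or above a confidence threshold, with corner coords.
function confidentPredictions(predictions, minConfidence) {
  return predictions
    .filter((p) => p.confidence >= minConfidence)
    .map((p) => ({ class: p.class, confidence: p.confidence, ...toCorners(p.bbox) }));
}
```

Corner coordinates like these are convenient for drawing boxes on a canvas overlay.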

Configuration

If you would like to customize and configure the way inferencejs filters its predictions, you can pass parameters to the worker on creation.

const configuration = {scoreThreshold: 0.5, iouThreshold: 0.5, maxNumBoxes: 20};
const workerId = await inferEngine.startWorker("[model name]", "[version]", "[publishable key]", configuration);

Or you can pass configuration options at inference time.

const configuration = {
    scoreThreshold: 0.5, 
    iouThreshold: 0.5, 
    maxNumBoxes: 20
};
const predictions = await inferEngine.infer(workerId, image, configuration);
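The iouThreshold parameter controls how much two boxes may overlap, measured as intersection over union (IoU), before overlapping detections are filtered, as in typical non-maximum suppression. For intuition, IoU on corner-style boxes can be computed like this (illustrative only, not an inferencejs API):

```javascript
// Illustrative IoU (intersection over union) between two boxes
// given as corner coordinates {x1, y1, x2, y2}.
function iou(a, b) {
  // Overlap along each axis, clamped to zero when the boxes are disjoint.
  const ix = Math.max(0, Math.min(a.x2, b.x2) - Math.max(a.x1, b.x1));
  const iy = Math.max(0, Math.min(a.y2, b.y2) - Math.max(a.y1, b.y1));
  const inter = ix * iy;
  const areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
  const areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
  return inter / (areaA + areaB - inter);
}
```

With iouThreshold: 0.5, two overlapping detections with an IoU above 0.5 would typically collapse to the higher-scoring one.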
