Web Browser
Realtime predictions at the edge with roboflow.js
For most business applications, the Hosted API is suitable. But for many consumer applications and some enterprise use cases, having a server-hosted model is not workable (for example, if your users are bandwidth constrained or need lower latency than you can achieve using a remote API).
inferencejs is a custom layer on top of TensorFlow.js that enables real-time inference via JavaScript using models trained on Roboflow.
Learning Resources
Try Your Model With a Webcam
Once you have a trained model, you can easily test it with your webcam using the "Try with Webcam" button.
The webcam demo is a sample app that you can download and tinker with via the "Get Code" link.
You can try out a webcam demo of a hand-detector model here (it is trained on the public EgoHands dataset).
Interactive Replit Environment
We have published a "Getting Started" project on Replit with an accompanying tutorial showing how to deploy YOLOv8 models using our Replit template.
GitHub Template
The Roboflow homepage uses inferencejs to power the COCO inference widget. The README contains instructions on how to use the repository template to deploy a model to the web using GitHub Pages.
Documentation
If you would like more details on specific functions in inferencejs, check out our documentation page, or click any mention of an inferencejs method in the guide below to be taken to the respective documentation.
Installation
To add inferencejs to your project, simply install it with npm or add a script tag reference to your page's <head> tag.
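For example, if the package is published on npm as inferencejs (an assumption to verify against the installation docs), the npm route looks like the sketch below; for the script tag route, add a reference to the library in your page's <head> using the CDN URL given in the inferencejs documentation.

```bash
# Install the library into your project (package name assumed to be "inferencejs")
npm install inferencejs
```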
Initialization
You can obtain your publishable_key from the Roboflow API settings page: open your workspace, go to Settings, and copy your Publishable API Key from the Roboflow API page.
Your model ID and version number are located in the URL of the dataset version page (the page where you started training and can see your results).
Note: your publishable_key is used with inferencejs, not your API key (which should remain secret).
inferencejs uses web workers so that multiple models can be used without blocking the main UI thread. Each model is loaded through the InferenceEngine, our web worker manager that abstracts the necessary thread management for you. Start by importing InferenceEngine and creating a new inference engine object.
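A minimal sketch of that setup, assuming the library is imported from the inferencejs npm package:

```javascript
import { InferenceEngine } from "inferencejs";

// The InferenceEngine manages the web workers that run your models.
const inferEngine = new InferenceEngine();
```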
Now we can load models from Roboflow using your publishable_key and the model metadata (model name and version number), along with configuration parameters like confidence threshold and overlap threshold.
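For example, starting a worker might look like the sketch below (the model name, version number, and key are placeholders; the startWorker argument order follows the description above and should be verified against the documentation):

```javascript
// Publishable key from your Roboflow API settings page (placeholder value).
const publishable_key = "rf_xxxxxxxxxxxxxxxx";

// Start a web worker running version 1 of a model called "your-model-id" (placeholders).
const workerId = await inferEngine.startWorker("your-model-id", 1, publishable_key);
```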
inferencejs will now start a worker that runs the chosen model. The returned worker id corresponds to the worker id in the InferenceEngine that we will use for inference. To run inference with the model, we invoke the infer method on the InferenceEngine. But first we need an input image. We provide the CVImage wrapper class, which can take a variety of image formats (HTMLImageElement, HTMLVideoElement, ImageBitmap, or a TFJS Tensor). Let's load an image and run inference on our worker.
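A sketch of that flow, assuming the page contains an <img> element with the id "image":

```javascript
import { CVImage } from "inferencejs";

// Wrap an existing image element; CVImage also accepts video elements,
// ImageBitmaps, and TFJS tensors.
const imageElement = document.getElementById("image");
const image = new CVImage(imageElement);

// Run the model in the worker we started earlier.
const predictions = await inferEngine.infer(workerId, image);
console.log(predictions);
```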
This returns an array of predictions; in this case, each is an RFObjectDetectionPrediction.
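As a rough sketch of consuming those predictions (the field names bbox, class, and confidence are assumptions; check the RFObjectDetectionPrediction reference for the exact shape):

```javascript
predictions.forEach((prediction) => {
  // Field names below are assumptions; verify against the RFObjectDetectionPrediction docs.
  const { bbox, class: className, confidence } = prediction;
  console.log(`${className}: ${(confidence * 100).toFixed(1)}%`, bbox);
});
```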
Configuration
If you would like to customize and configure the way inferencejs filters its predictions, you can pass parameters to the worker on creation.
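For example (the option names scoreThreshold, iouThreshold, and maxNumBoxes, and the shape of the configuration argument, are assumptions to verify against the inferencejs documentation):

```javascript
// Pass filtering options when starting the worker (option names and argument shape assumed).
const workerId = await inferEngine.startWorker("your-model-id", 1, publishable_key, {
  scoreThreshold: 0.5, // minimum confidence for a prediction to be returned
  iouThreshold: 0.5,   // overlap threshold used for non-max suppression
  maxNumBoxes: 20,     // maximum number of predictions returned
});
```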
Or you can pass configuration options at inference time.
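Sketched below under the same assumptions about option names, and assuming infer accepts an options argument:

```javascript
// Override filtering options for a single inference call (argument shape assumed).
const predictions = await inferEngine.infer(workerId, image, {
  scoreThreshold: 0.75,
});
```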