Web Browser
Realtime predictions at the edge with roboflow.js
For most business applications, the Hosted API is suitable. But for many consumer applications and some enterprise use cases, having a server-hosted model is not workable (for example, if your users are bandwidth constrained or need lower latency than you can achieve using a remote API).
inferencejs is a custom layer on top of TensorFlow.js that enables real-time inference in JavaScript using models trained on Roboflow.
Learning Resources
Try Your Model With a Webcam
Once you have a trained model, you can easily test it with your webcam using the "Try with Webcam" button. The webcam demo is a sample app that you can download and tinker with via the "Get Code" link.
You can try out a webcam demo of a hand-detector model here (it is trained on the public EgoHands dataset).
Interactive Replit Environment
We have published a "Getting Started" project on Repl.it, with an accompanying tutorial showing how to deploy YOLOv8 models using the template.
GitHub Template
The Roboflow homepage uses inferencejs to power the COCO inference widget. The README contains instructions on how to use the repository template to deploy a model to the web using GitHub Pages.
Documentation
If you would like more details regarding specific functions in inferencejs, check out our documentation page or click on any mention of an inferencejs method in our guide below to be taken to the respective documentation.
Installation
To add inferencejs to your project, simply install it using npm or add a script tag reference to your page's <head> tag.
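For example, a minimal setup might look like the following. The npm package name is inferencejs; the CDN URL in the script-tag option is an assumption, so confirm the recommended script source in our documentation.

```bash
# Install into your project with npm
npm install inferencejs
```

Or load it from a CDN in your page's <head>:

```html
<!-- Illustrative CDN reference; confirm the exact URL in the documentation -->
<script src="https://cdn.jsdelivr.net/npm/inferencejs"></script>
```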
Initializing inferencejs
Authenticating
You can obtain your publishable_key from the Roboflow workspace settings.
Note: your publishable_key is used with inferencejs, not your private API key (which should remain secret).
Start by importing InferenceEngine and creating a new inference engine object.
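A minimal sketch, using module syntax and assuming the package was installed from npm as inferencejs:

```javascript
import { InferenceEngine } from "inferencejs";

// Create a single engine instance to manage model workers
const inferEngine = new InferenceEngine();
```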
Now we can load models from Roboflow using your publishable_key and the model metadata (model name and version), along with configuration parameters like confidence threshold and overlap threshold.
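For example, starting a worker for a hypothetical model (the model name, version, and key below are placeholders for your own values):

```javascript
// Replace the model name, version, and publishable_key with your own values
const workerId = await inferEngine.startWorker(
  "your-model-name",        // model name from your Roboflow workspace
  1,                        // model version number
  "rf_your_publishable_key" // your publishable_key
);
```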
inferencejs will now start a worker that runs the chosen model. The returned worker id identifies that worker within the InferenceEngine and is what we will use for inference. To infer on the model, we invoke the infer method on the InferenceEngine.
Let's load an image and infer on our worker.
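A sketch, assuming an <img> element already on the page; CVImage is a wrapper exported by inferencejs, and other sources such as a video element can also be passed:

```javascript
import { CVImage } from "inferencejs";

// Grab an image that is already loaded on the page
const imageElement = document.getElementById("example-image");

// Run inference on the worker we started above
const predictions = await inferEngine.infer(workerId, new CVImage(imageElement));
console.log(predictions);
```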
This returns an array of predictions (as a class, in this case RFObjectDetectionPrediction).
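Each prediction exposes the detection's class, confidence, and bounding box. The field names sketched below are assumptions; consult the documentation page for the authoritative shape.

```javascript
predictions.forEach((prediction) => {
  // Field names are illustrative: class label, confidence score, and box geometry
  const { class: label, confidence, bbox } = prediction;
  console.log(`${label} (${(confidence * 100).toFixed(1)}%)`, bbox);
});
```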
Configuration
If you would like to customize and configure the way inferencejs filters its predictions, you can pass parameters to the worker on creation.
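A sketch of what this might look like; the option names (scoreThreshold, iouThreshold, maxNumBoxes) and the exact shape of the configuration argument are assumptions, so check the documentation for the options supported by your model type:

```javascript
// Illustrative filter settings passed when the worker is created
const configuredWorkerId = await inferEngine.startWorker(
  "your-model-name",
  1,
  "rf_your_publishable_key",
  { scoreThreshold: 0.5, iouThreshold: 0.5, maxNumBoxes: 20 } // assumed option names
);
```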
Or you can pass configuration options at inference time.
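A sketch of per-call options; again, the option names and whether infer accepts a configuration argument in this form are assumptions, so check the documentation:

```javascript
// Illustrative per-call override: keep only higher-confidence detections for this request
const filteredPredictions = await inferEngine.infer(workerId, new CVImage(imageElement), {
  scoreThreshold: 0.7,
});
```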