Web Browser
Realtime predictions at the edge with roboflow.js
For most business applications, the Hosted API is suitable. But for many consumer applications and some enterprise use cases, having a server-hosted model is not workable (for example, if your users are bandwidth-constrained or need lower latency than you can achieve using a remote API).
inferencejs is a custom layer on top of TensorFlow.js that enables real-time inference via JavaScript using models trained on Roboflow.
Once you have a trained model, you can easily test it with your webcam using the "Try with Webcam" button.
The webcam demo is a sample app that you can download and tinker with via the "Get Code" link.
You can try out a webcam demo of a sample model (trained on a public dataset).
We have published an example project on Repl.it with an accompanying tutorial showing how to use it.
The Roboflow homepage uses inferencejs to power the COCO inference widget; that project's README contains instructions on how to use the repository template to deploy a model to the web using GitHub Pages.
If you would like more details regarding specific functions in inferencejs, check out our documentation page, or click on any mention of an inferencejs method in the guide below to be taken to the respective documentation.
To add inferencejs to your project, simply install it using npm or add a script tag reference to your page's <head> tag.
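As a quick sketch, the npm route looks like this (assuming inferencejs is the npm package name; for the script-tag route, see the library's README for the current CDN URL):

```bash
# Install the library into your project with npm
npm install inferencejs
```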
You can obtain your publishable_key from the Roboflow workspace settings.
Note: your publishable_key is used with inferencejs, not your private API key (which should remain secret).
Start by importing InferenceEngine and creating a new inference engine object.
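A minimal sketch of that setup might look like the following (assuming an ES-module build via the npm package):

```javascript
import { InferenceEngine } from "inferencejs";

// The engine manages background workers that run your models in the browser
const inferEngine = new InferenceEngine();
```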
Now we can load models from Roboflow using your publishable_key and the model metadata (model name and version), along with configuration parameters like confidence threshold and overlap threshold.
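For example, a sketch of loading a model might look like this (the model name, version, and key below are placeholders for your own values):

```javascript
// startWorker returns a promise that resolves to a worker id
const workerId = await inferEngine.startWorker(
  "your-model-name",         // model name from your Roboflow workspace (placeholder)
  1,                         // model version (placeholder)
  "rf_your_publishable_key"  // your publishable_key, not your private API key
);
```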
inferencejs will now start a worker that runs the chosen model. The returned worker id corresponds to the worker id in the InferenceEngine that we will use for inference. To infer with the model, we can invoke the infer method on the InferenceEngine.
Let's load an image and infer on our worker.
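A rough sketch, assuming the page contains an <img> element with the id "image":

```javascript
import { CVImage } from "inferencejs";

// Wrap the image element and run inference on the worker we started earlier
const imageElement = document.getElementById("image");
const predictions = await inferEngine.infer(workerId, new CVImage(imageElement));
```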
This returns an array of predictions (as a class, in this case RFObjectDetectionPrediction).
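For illustration, you might iterate over the result like this (the field names class, confidence, and bbox are assumptions based on typical Roboflow detection output; check the RFObjectDetectionPrediction documentation for the exact shape):

```javascript
predictions.forEach((prediction) => {
  // Field names here are illustrative; see the prediction class documentation
  console.log(prediction.class, prediction.confidence, prediction.bbox);
});
```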
If you would like to customize and configure the way inferencejs filters its predictions, you can pass parameters to the worker on creation.
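As a sketch, this could look like the following (the extra configuration argument and the parameter names scoreThreshold, iouThreshold, and maxNumBoxes are assumptions; consult the startWorker documentation for the exact signature):

```javascript
// Assumption: startWorker accepts an additional configuration argument;
// the parameter names below are illustrative only
const configuredWorkerId = await inferEngine.startWorker(
  "your-model-name",
  1,
  "rf_your_publishable_key",
  { scoreThreshold: 0.5, iouThreshold: 0.5, maxNumBoxes: 20 }
);
```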
Or you can pass configuration options at inference time.
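A sketch of that might look like this (the per-call options argument and the scoreThreshold name are assumptions; check the infer documentation for the supported options):

```javascript
// Assumption: infer accepts a per-call options object; names are illustrative only
const filteredPredictions = await inferEngine.infer(workerId, new CVImage(imageElement), {
  scoreThreshold: 0.7 // e.g. only return higher-confidence predictions
});
```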