Documentation - roboflow.js

Learn more about the methods, variables and usage for roboflow.js

Here's more information about the methods and variables `roboflow.js` provides and how to use them.

roboflow

The main roboflow variable is available globally on the window when you install the roboflow.js package using our script tag.

roboflow.auth()

This method is used for authenticating the usage of your model on the webpage.

It takes an object with a single property, publishable_key, which is your Roboflow publishable API key. You can find it in your workspace by going to Settings, opening the Roboflow API page, and copying your Publishable API Key.

roboflow.auth({
    publishable_key: "<< YOUR PUBLISHABLE KEY >>"
})

This method returns the roboflow variable, so you can chain calls like roboflow.load() onto it: roboflow.auth().load()
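For example, a minimal sketch of chaining (the model ID and version below are placeholders):

```javascript
// Sketch: auth() returns the roboflow object, so load() can be
// chained directly onto it. The model ID and version are placeholders.
function authAndLoad() {
    return roboflow
        .auth({ publishable_key: "<< YOUR PUBLISHABLE KEY >>" })
        .load({ model: "my-model", version: 1 });
}
```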

roboflow.load()

This method is used for loading in a specific model. You must run roboflow.auth() before you run this.

It takes an object with two required properties, model and version. You can find your model and version info in the respective version of your dataset. If you see something like my-model/1, my-model would be the model name and 1 would be the version.

The object also takes an optional property, onMetadata, which is a function that acts as a listener. It receives the same information as the model.getMetadata() method.

roboflow.load({
    model: "<< YOUR MODEL ID >>",
    version: 1 // <--- YOUR VERSION NUMBER
})
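A sketch of passing an onMetadata listener (the model ID and version are placeholders):

```javascript
// Sketch: onMetadata is called with the same object that
// model.getMetadata() returns. Model ID and version are placeholders.
function loadWithListener() {
    return roboflow.load({
        model: "my-model",
        version: 1,
        onMetadata: function (metadata) {
            console.log("Classes in this model:", metadata.classes);
        }
    });
}
```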

This method returns a promise that resolves to the model variable.

model

This is the variable that the promise returned by roboflow.load() resolves to. The primary method you'll use is model.detect(), which runs inference on your model and returns predictions.

var model = await roboflow.load({
    model: "<< YOUR MODEL ID >>",
    version: 1 // <--- YOUR VERSION NUMBER
})

model.configure()

This method is used for configuring how the inference results are filtered before they are returned. It is optional, but recommended.

It takes an object with three optional properties:

  • threshold - (float; 0-1; default 0.5) the minimum confidence required to return a box. The higher you set this value, the fewer false positives you'll receive (but also the more false negatives).

  • overlap - (float; 0-1; default 0.5) the maximum area two boxes of the same class are allowed to overlap before the less confident one is treated as a duplicate and dropped.

  • max_objects - (int; 1-Infinity; default 20) the maximum number of boxes to return.

model.configure({
    threshold: 0.5,
    overlap: 0.5,
    max_objects: 20
});

This method returns the model variable.
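Because it returns the model, configure() can be chained straight into other calls, e.g. detect() (a sketch; the values shown are arbitrary):

```javascript
// Sketch: configure() returns the model, so calls can be chained.
// The threshold/overlap/max_objects values here are arbitrary examples.
function configureAndDetect(model, img) {
    return model
        .configure({ threshold: 0.7, overlap: 0.5, max_objects: 5 })
        .detect(img);
}
```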

model.getConfiguration()

This method is used for getting the current configuration.

model.getConfiguration()

This method returns an object. All the properties of the returned object should match the configuration object in model.configure() other than overlap which is represented as nms_threshold in the returned object.
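A sketch of reading the configuration back; note that the overlap value comes back under the nms_threshold key:

```javascript
// Sketch: getConfiguration() mirrors the configure() object, except
// that `overlap` is reported as `nms_threshold`.
function logConfiguration(model) {
    var config = model.getConfiguration();
    console.log("threshold:", config.threshold);
    console.log("overlap (as nms_threshold):", config.nms_threshold);
    console.log("max_objects:", config.max_objects);
    return config;
}
```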

model.getMetadata()

This method is used for getting the model metadata.

model.getMetadata()

This method returns an object with the following properties:

  • name - (string) the name of the project the model is from.

  • type - (string) the type of project/model that is loaded.

  • classes - (array) the class list of the model, as an array of strings.

  • annotation - (string) the annotation group for the project.

  • size - (integer) the input size of the model, set from the preprocessing settings.
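For example, a small sketch that uses the metadata to check whether the model was trained on a particular class (the helper name is our own):

```javascript
// Sketch: inspect getMetadata() to verify a class exists before,
// say, filtering predictions on it.
function hasClass(model, className) {
    var metadata = model.getMetadata();
    return metadata.classes.indexOf(className) !== -1;
}
```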

model.detect()

This is the core method of the package: it runs inference on a given input and returns the resulting predictions.

// Using `.then` callbacks
model.detect(img).then(function(predictions) {
    console.log("Predictions:", predictions);
});

// Using await
var predictions = await model.detect(img)
console.log("Predictions:", predictions);

It passes a given input to the model. The following input formats are accepted in roboflow.js:

  • <img>: an image element.

  • <video>: a video element.

    • Note: when a video element is passed to the model, it runs inference on the frame the video is showing at the moment it's called. It does not automatically run continuous inference on a video, but it is possible to do so by calling detect() repeatedly.

  • <canvas>: an HTML canvas element.

Note: all inputs provided must have CORS permissions that allow reading their pixel data.
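One way to run continuous inference on a video is a requestAnimationFrame loop that re-runs detect() on each frame (a sketch; the onPredictions callback stands in for your own handler):

```javascript
// Sketch: repeatedly infer on a <video> element. Each detect() call
// runs on whatever frame the video is showing at that moment.
function detectLoop(model, video, onPredictions) {
    function step() {
        model.detect(video).then(function (predictions) {
            onPredictions(predictions);
            requestAnimationFrame(step); // schedule the next frame
        });
    }
    requestAnimationFrame(step);
}
```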

Prediction Format

The method returns the result of the model, given in an array of predictions, each with the following properties:

  • class: (string) the predicted class name; this will match a class you chose during dataset creation.

  • bbox: (object) an object describing the predicted bounding box containing the following properties (each a float representing a number of pixels):

    • x - the x-coordinate of the center point.

    • y - the y-coordinate of the center point.

    • width - the width of the bounding box.

    • height - the height of the bounding box.

  • confidence: (float, 0-1) the certainty of the model in its prediction; a higher number means the model is more confident.

  • color: (string) a hex string representing the color of the bounding box (matching the color displayed in your dataset's annotation visualizations in Roboflow).

Example Prediction

[
    {
        "class": "hard-hat",
        "bbox": {
            "x": 289.8,
            "y": 344.9,
            "width": 193.5,
            "height": 239.5
        },
        "confidence": 0.8083,
        "color": "#F4004E"
    },
    {
        "class": "hard-hat",
        "bbox": {
            "x": 491.7,
            "y": 93.9,
            "width": 147.3,
            "height": 134.3
        },
        "confidence": 0.6982,
        "color": "#F4004E"
    }
]
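Because bbox x and y give the center of the box, you need to convert to the top-left corner before drawing with canvas APIs. A sketch of rendering predictions onto a canvas (the helper names are our own):

```javascript
// Sketch: bbox x/y give the box CENTER, so convert to the
// top-left corner before drawing.
function bboxToRect(bbox) {
    return {
        left: bbox.x - bbox.width / 2,
        top: bbox.y - bbox.height / 2,
        width: bbox.width,
        height: bbox.height
    };
}

function drawPredictions(canvas, predictions) {
    var ctx = canvas.getContext("2d");
    predictions.forEach(function (prediction) {
        var rect = bboxToRect(prediction.bbox);
        ctx.strokeStyle = prediction.color; // matches your class color
        ctx.strokeRect(rect.left, rect.top, rect.width, rect.height);
        ctx.fillStyle = prediction.color;
        ctx.fillText(prediction.class, rect.left, rect.top - 4);
    });
}
```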

model.teardown()

This method destroys the model and cleans up its associated resources.

model.teardown()

This method does not take any arguments or return anything.
