Luxonis OAK (On Device)

Deploy your Roboflow Train model to your OpenCV AI Kit with Myriad X VPU acceleration.

About the Luxonis OAK

The Luxonis OAK (OpenCV AI Kit) is an edge device popularly used for deploying embedded computer vision systems.

OAK devices are paired with a host machine that drives the operation of the downstream application. For some exciting inspiration, see Luxonis's use cases and Roboflow's case studies.

About the Roboflow Inference Server

The Roboflow Inference Server is a drop-in replacement for the Hosted Inference API that can be deployed on your own hardware.

The main difference with the OAK deployment target is that, since the OAK devices come with a built-in camera, the input for your model comes directly from the camera instead of being POSTed via API.

We have optimized your model to get maximum performance from the Luxonis OAK devices by tailoring the training, conversion, dependencies, and software specifically to the hardware.

When should you use edge deployment?

Our Hosted API is suitable for most use cases; it runs on battle-tested infrastructure and seamlessly autoscales up and down to handle even the most demanding workloads.

But because it is hosted remotely, there are some scenarios where it is not ideal: notably, when bandwidth is constrained, when production data cannot leave your local network or corporate firewall, or when you need realtime inference speeds on the edge.

In those cases, an on-premise deployment is needed. The OAK is a great choice because it is a standardized device that combines a camera with a built-in hardware accelerator, freeing up your host device to run your application code.

Supported Luxonis Devices and Host Requirements

The Roboflow Inference Server supports the following devices:

  • DepthAI OAK-D (LUX-D)

  • Luxonis OAK-1 (LUX-1)

The host system requires a linux/amd64 processor. arm64/aarch64 support is coming soon.

Inference Speed

In our tests, we observed an inference speed of 20 FPS at 416x416 resolution, suitable for most realtime applications. This speed will vary slightly based on your host machine.

Prototyping

It is best practice to develop against the Roboflow Hosted Inference API. It uses the same trained models and returns the same predictions as on-device inference while allowing for much quicker iteration cycles.

Switching over when you're ready to go to production is a simple change (replacing your infer.roboflow.com POST request with a GET request to localhost:9001).
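
As a sketch of that change, both calls can live behind a pair of helpers. Here, the requests library, the model endpoint and token values, and the base64 payload format for the hosted API are assumptions to adapt to your own setup:

import base64
import requests

MODEL_ENDPOINT = "your-model/1"      # hypothetical placeholder for your model's endpoint
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # from "Get curl command" on your dataset page

def infer_hosted(image_path):
    # Hosted API: POST the image (base64-encoded here; an assumption
    # about the payload format) to infer.roboflow.com.
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return requests.post(
        "https://infer.roboflow.com/" + MODEL_ENDPOINT,
        params={"access_token": ACCESS_TOKEN},
        data=encoded,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    ).json()

def infer_oak():
    # OAK Inference Server: no image payload; the device's own camera
    # frame is used, so a GET to localhost:9001 is all that's needed.
    return requests.get(
        "http://localhost:9001/" + MODEL_ENDPOINT,
        params={"access_token": ACCESS_TOKEN},
    ).json()

Because the OAK server reads frames directly from the device's camera, the on-device call carries no image payload at all.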

Installation

Training

To run the Inference Server on your OAK, you must first have a trained model from Roboflow Train, our single-click training and deployment solution. Roboflow Train will create your computer vision model using cutting-edge modeling techniques and prepare your model for deployment to the edge through an optimized conversion process.

When gathering your dataset for training, it is extremely important to use images that are similar to your deployment environment.

The best course of action is to train on images taken from your OAK device. You can automatically upload images to Roboflow from your OAK for annotation via the Roboflow Upload API.
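
For example, a saved frame can be posted with a few lines of Python. This is a minimal sketch: the dataset name, API key, filename, and multipart payload shape are placeholder assumptions to check against the Upload API documentation:

import requests

DATASET = "your-dataset"     # hypothetical: your Roboflow dataset name
API_KEY = "YOUR_API_KEY"     # hypothetical: your Roboflow API key

# Assumes frame.jpg has already been saved from the OAK's camera stream.
with open("frame.jpg", "rb") as f:
    response = requests.post(
        "https://api.roboflow.com/dataset/" + DATASET + "/upload",
        params={"api_key": API_KEY, "name": "frame.jpg"},
        files={"file": f},
    )
print(response.json())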

Testing

After training, test your model on the Hosted Inference API to make sure it is performing properly. On your dataset page, click "Get curl command" to retrieve an endpoint and an access_token; save these for deployment to the OAK device.

Serving

Once you have validated your model, you are ready to deploy to your Luxonis device.

Note: the Roboflow OAK Inference Server is currently only supported via Linux host systems running an amd64 architecture. Support for ARM hosts is coming soon.

Run the server:

  1. Connect your OAK device to a USB port on the host machine

  2. Pull the latest server image: sudo docker pull roboflow/oak-inference-server:latest

  3. Run the server with the following command:

sudo docker run --rm \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
--device-cgroup-rule='c 189:* rmw' \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-p 9001:9001 \
roboflow/oak-inference-server:latest

Use the server:

  1. Validate that you've stood up the OAK correctly by visiting http://localhost:9001/validate in your browser

  2. Validate that inference is working by invoking the built-in pre-trained COCO model at http://localhost:9001/coco

  3. Invoke your custom model with a GET request to http://localhost:9001/[YOUR_ENDPOINT]?access_token=[YOUR_ACCESS_TOKEN]

The first time you invoke your model, it will be downloaded and initialized into the device's memory, which may take a few seconds. Subsequent calls will infer much faster.
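
Putting the three steps together in Python (a minimal sketch: the requests library is assumed, and the endpoint and token are placeholders to replace with the values from "Get curl command"):

import requests

BASE = "http://localhost:9001"
ENDPOINT = "your-model/1"            # placeholder: your model's endpoint
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder: your access token

# 1. Confirm the server can see the OAK device.
print(requests.get(BASE + "/validate").text)

# 2. Sanity-check inference with the built-in COCO model.
print(requests.get(BASE + "/coco").text)

# 3. Run your custom model against the device's current camera frame.
result = requests.get(
    BASE + "/" + ENDPOINT,
    params={"access_token": ACCESS_TOKEN, "confidence": 40},
).json()

for p in result["predictions"]:
    print(p["class"], p["confidence"], p["x"], p["y"], p["width"], p["height"])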

Using the OAK Inference API

GET http://localhost:9001/:model-endpoint

Receive a prediction from your model on the device's current camera frame.

Path Parameters

  • model-endpoint (string, optional): The unique identifier for your model. The easiest way to determine this is via the web UI's "Get curl command" link.

Query Parameters

  • classes (string, optional): Restrict the predictions to only those of certain classes. Provide as a comma-separated string. Example: dog,cat. Default: not present (show all classes).

  • overlap (number, optional): The maximum percentage (on a scale of 0-100) that bounding box predictions of the same class are allowed to overlap before being combined into a single box. Default: 30.

  • confidence (number, optional): A threshold for the returned predictions on a scale of 0-100. A lower number will return more predictions; a higher number will return fewer, higher-certainty predictions. Default: 40.

  • format (string, optional): json returns an array of JSON predictions (see the response below); image returns an image with annotated predictions as a binary blob with a Content-Type of image/jpeg. Default: json.

  • access_token (string, required): Your API key (obtained via your account page).

Response

200: OK

You will receive a JSON or image response based on the value of the format parameter:
{
  "predictions": [
    {
      "x": 95.0,
      "y": 179.0,
      "width": 190,
      "height": 348,
      "class": "mask",
      "confidence": 0.35
    }
  ]
}
400: Bad Request

You may receive various 400 errors for malformed requests. For example: no access_token (you must pass access_token as a query parameter).
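
For instance, requesting format=image returns the annotated frame directly, which can be written to disk as a JPEG. The endpoint and token below are placeholders:

import requests

# Placeholders: substitute your model endpoint and access token.
resp = requests.get(
    "http://localhost:9001/your-model/1",
    params={
        "access_token": "YOUR_ACCESS_TOKEN",
        "format": "image",   # annotated JPEG instead of JSON predictions
        "confidence": 50,    # only draw boxes at or above 50% confidence
    },
)

with open("annotated.jpg", "wb") as f:
    f.write(resp.content)    # binary body with Content-Type: image/jpeg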

Troubleshooting

If you are experiencing issues setting up your OAK device, visit Luxonis's installation instructions and make sure you can run the RGB example successfully with the Luxonis Docker installation.