Luxonis OAK (On Device)

Deploy your Roboflow Train model to your OpenCV AI Kit with Myriad X VPU acceleration.

About the Luxonis OAK

The Luxonis OAK (OpenCV AI Kit) is an edge device that is popularly used for the deployment of embedded computer vision systems.
OAK devices are paired with a host machine that drives the operation of the downstream application. For some exciting inspiration, see Luxonis's use cases and Roboflow's case studies.
By the way: if you don't have your OAK device yet, you can buy one via the Roboflow Store to get a 10% discount.

When should you use edge deployment?

Our Hosted API is suitable for most use cases; it runs on battle-tested infrastructure and autoscales seamlessly to handle even the most demanding workloads.
Because it is hosted remotely, however, there are scenarios where it is not ideal: when bandwidth is constrained, when production data cannot leave your local network or corporate firewall, or when you need real-time inference speeds at the edge.
In those cases, an on-premise deployment is needed, and the OAK is a great choice: it is a standardized device that combines a camera with a built-in hardware accelerator, freeing your host device to run your application code.

Supported Luxonis Devices and Host Requirements

The Roboflow Inference Server supports the following devices:
  • OAK-D
  • OAK-D-Lite
  • OAK-1 (no depth)

Setting up OpenVINO and DepthAI (Optional)

You may want to do this if you'd like to deploy blob files for models not trained using Roboflow Train. Check out our Knowledge Base for directions and more details on setting up OpenVINO and DepthAI.


You can develop against the Roboflow Hosted Inference API. It uses the same trained models as on-device inference.
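If you want to prototype against the hosted API before moving on-device, the sketch below shows one way to build the request with only the standard library. The `detect.roboflow.com` host and the query-string format are assumptions based on Roboflow's hosted API documentation; check the Deploy tab in your workspace for the exact endpoint for your model.

```python
import base64
import json
import urllib.parse
import urllib.request

def build_infer_url(model_id, version, api_key):
    """Build the hosted inference endpoint URL for a trained model.
    The host and query format are assumptions; verify them in your
    workspace's Deploy tab."""
    query = urllib.parse.urlencode({"api_key": api_key})
    return f"https://detect.roboflow.com/{model_id}/{version}?{query}"

def infer(image_path, model_id, version, api_key):
    """POST a base64-encoded image to the hosted API and return the
    parsed JSON predictions. Not called here, since it needs a live key."""
    with open(image_path, "rb") as f:
        payload = base64.b64encode(f.read())
    req = urllib.request.Request(
        build_infer_url(model_id, version, api_key),
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(build_infer_url("your-model", 1, "YOUR_KEY"))
```

Because the hosted API uses the same trained weights as on-device inference, predictions you see here should match what the OAK returns later.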


There are a few options available for deployment, including Roboflow's Python package (roboflowoak), the Luxonis DepthAI SDK, and containerized Docker deployment.

Luxonis DepthAI SDK

roboflowoak Python package


If you are using Anaconda
Ensure that you have your environment set up correctly!
You can create an environment as follows:
conda create --name roboflowoak python=3.8
Then activate your environment:
conda activate roboflowoak
Install the roboflowoak, depthai, and opencv-python packages.
pip install roboflowoak
pip install depthai
pip install opencv-python
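After installing, you can sanity-check the environment before running anything on the device. This sketch only checks that the modules are importable; the import names are assumptions (in particular, opencv-python installs as `cv2`).

```python
import importlib.util

def find_missing(modules):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Import names are assumptions: opencv-python installs as "cv2".
missing = find_missing(["roboflowoak", "depthai", "cv2"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All set - roboflowoak, depthai, and cv2 are importable.")
```

If anything is reported missing, re-run the corresponding `pip install` command inside the same (activated) environment.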
Then use the roboflowoak package to run your custom trained Roboflow model.
NOTE: If you have issues installing depthai with pip on an M1 Mac, see the comment that outlines the install process: Installation issue when using M1 Chip · Issue #299 · luxonis/depthai · GitHub.
  • The "3 lines" referenced for deletion in that comment no longer exist, but the "48 lines" to delete do. After deleting the 48 lines, be sure to "Write Out" (CTRL + O) and save the changes.
Example Python Script for Running Inference with your Model
  1. Copy/paste the script below into VSCode, XCode, PyCharm, Spyder (or another code editor).
  2. Update the values for model/project [name], model version, api_key, and device_name within the "rf" object.
    • Locate the Roboflow API Key for your workspace. You'll need it to use the package. It can be found in your workspace settings.
  3. Save the Python file to a directory - be sure to note the directory name and file name, as we'll need these later for the deployment to work.
Each API Key is tied to a specific workspace; treat it like a password and keep it private because your API Key can be used to access and modify the data in your workspace.
Obtain an API Key from your Workspace's settings.
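Since the API Key should be treated like a password, one common pattern is to read it from an environment variable rather than hardcoding it in the script. A minimal sketch (the `ROBOFLOW_API_KEY` variable name is just a convention, not something the package requires):

```python
import os

def get_api_key(env_var="ROBOFLOW_API_KEY"):
    """Read the workspace API key from the environment so it never
    appears in source code or version control."""
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"Set {env_var} before running inference.")
    return key
```

With this in place, you can `export ROBOFLOW_API_KEY=...` in your shell and pass `api_key=get_api_key()` when instantiating the `rf` object below.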

Running Inference: Deployment

  • If you are deploying to an OAK device without depth capabilities, set depth=False when instantiating (creating) the rf object. OAKs with depth have a "D" in the model name, e.g. OAK-D and OAK-D-Lite.
    • Also be sure to comment out max_depth = np.amax(depth) and cv2.imshow("depth", depth/max_depth).
from roboflowoak import RoboflowOak
import cv2
import time
import numpy as np

if __name__ == '__main__':
    # instantiating an object (rf) with the RoboflowOak module
    rf = RoboflowOak(model="YOUR-MODEL-ID", confidence=0.05, overlap=0.5,
                     version="YOUR-MODEL-VERSION-#", api_key="YOUR-PRIVATE_API_KEY",
                     rgb=True, depth=True, device=None, blocking=True)

    # Running our model and displaying the video output with detections
    while True:
        t0 = time.time()
        # The rf.detect() function runs the model inference
        result, frame, raw_frame, depth = rf.detect()
        predictions = result["predictions"]
        # predictions is a list of detections, each with:
        #   x (box center), y (box center), width, height,
        #   depth, confidence, class
        # frame - frame after preprocessing, with predictions drawn
        # raw_frame - original frame from your OAK
        # depth - depth map for raw_frame, center-rectified to the center camera

        # timing: for benchmarking purposes
        t = time.time() - t0
        print("FPS ", 1/t)
        print("PREDICTIONS ", [p.json() for p in predictions])

        # setting parameters for depth calculation
        # comment out the following 2 lines if you're using an OAK without depth
        max_depth = np.amax(depth)
        cv2.imshow("depth", depth/max_depth)

        # displaying the video feed as successive frames
        cv2.imshow("frame", frame)

        # how to close the OAK inference window / stop inference: press 'q'
        if cv2.waitKey(1) == ord('q'):
            break
Enter the command below in your terminal (after replacing the placeholder text with the path to your Python script). To interrupt or end inference from the terminal, enter CTRL+c on your keyboard.
python3 /path/to/[YOUR-PYTHON-FILE].py
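The predictions in the script report box centers (`x`, `y`) with `width` and `height`, as noted in its comments. If you want to draw your own overlays with cv2.rectangle, you need corner coordinates instead; a small sketch of that conversion (the dict keys mirror the fields shown in the script, and the sample values are made up):

```python
def to_corners(pred):
    """Convert a center-format prediction (x, y are box centers) to
    integer (x0, y0, x1, y1) corner coordinates for cv2.rectangle."""
    x, y = pred["x"], pred["y"]
    w, h = pred["width"], pred["height"]
    return (int(x - w / 2), int(y - h / 2), int(x + w / 2), int(y + h / 2))

# hypothetical prediction, using the fields described in the script
sample = {"x": 100, "y": 80, "width": 40, "height": 20,
          "confidence": 0.9, "class": "face"}
print(to_corners(sample))  # (80, 70, 120, 90)
```

You could then call, for example, `cv2.rectangle(raw_frame, (x0, y0), (x1, y1), (0, 255, 0), 2)` inside the inference loop to draw boxes on the unprocessed frame.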
The inference speed with an Apple MacBook Air 13" (M1) as the host device averaged around 15 ms per frame, or about 66 FPS.
Note: The host device used with OAK will drastically impact FPS. Take this into consideration when creating your system.
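Because per-frame timings fluctuate, a rolling average gives a more honest benchmark of your host than the single-frame FPS the script prints. A minimal sketch (the class name and window size are arbitrary choices):

```python
from collections import deque

class FPSMeter:
    """Rolling average FPS over the last `window` per-frame durations."""
    def __init__(self, window=30):
        self.times = deque(maxlen=window)

    def update(self, frame_seconds):
        self.times.append(frame_seconds)

    def fps(self):
        if not self.times:
            return 0.0
        return len(self.times) / sum(self.times)

meter = FPSMeter(window=4)
for dt in (0.015, 0.017, 0.013, 0.015):  # per-frame durations in seconds
    meter.update(dt)
print(round(meter.fps(), 1))  # → 66.7
```

In the inference loop above, you would call `meter.update(time.time() - t0)` each iteration and print `meter.fps()` instead of the instantaneous value.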
Face Detection model running on the OAK.


If you are experiencing issues setting up your OAK device, visit Luxonis's installation instructions and be sure that you can run the RGB example successfully on the Luxonis installation. You can also post for help on the Roboflow Forum.