Offline Mode
Roboflow Enterprise customers can deploy models offline.
Roboflow Enterprise customers can configure Roboflow Inference, our on-device inference server, to cache weights for up to 30 days.
This allows your model to run completely air-gapped or in locations where an Internet connection is not readily available.
To run your model offline, you need to:
Create and attach a Docker volume to /tmp/cache on your Inference Server.
Start a Roboflow Inference server with Docker.
Make a request to your model through the server, which will initiate the model weight download and cache process. You will need an internet connection for this step.
Once your weights have been cached, you can use them locally.
Below, we provide instructions for how to run your model offline on various device types, from CPU to GPU.
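For CPU devices, the setup looks roughly like the sketch below, assuming Docker is installed. The volume name `roboflow` is arbitrary, and `roboflow/roboflow-inference-server-cpu` is the CPU image published by Roboflow on Docker Hub:

```bash
# Create a Docker volume to persist cached model weights
docker volume create roboflow

# Start the CPU inference server with the volume mounted at /tmp/cache
docker run -it --rm --network=host \
    --mount source=roboflow,target=/tmp/cache \
    roboflow/roboflow-inference-server-cpu:latest
```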
To use the GPU container, you must first install the NVIDIA Container Toolkit.
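The same pattern applies on GPU devices, with the GPU image and GPU access enabled; a sketch, assuming the NVIDIA Container Toolkit is installed:

```bash
docker volume create roboflow

# --gpus all exposes the host's NVIDIA GPUs to the container
docker run -it --rm --network=host --gpus all \
    --mount source=roboflow,target=/tmp/cache \
    roboflow/roboflow-inference-server-gpu:latest
```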
With your Inference server set up with local caching, you can run your model on images and video frames without an internet connection.
The weights will be loaded from your Roboflow account over the Internet (via the License Server if you have configured it) with SSL encryption and stored safely in the Docker volume for up to 30 days.
Your inference results will contain a new expiration key you can use to determine how long the Inference Server can continue to provide predictions before renewing its lease on the weights via an Internet or License Server connection. Once the weight expiration date drops below 7 days, the Inference Server will begin trying to renew the weights' lease once per hour until a connection to the Roboflow API is successfully made.
Once the lease has been renewed, the counter will reset to 30 days.
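As a sketch of that flow, assuming the server listens on its default port 9001 and using placeholder values (`your-model/1`, `YOUR_API_KEY`) for the model ID and API key; the first request needs an Internet connection to download the weights, and later requests are served from the cache:

```bash
# Base64-encode a local image and POST it to your model on the local server;
# the JSON response contains your predictions plus the expiration key
base64 your_image.jpg | curl -d @- \
    "http://localhost:9001/your-model/1?api_key=YOUR_API_KEY"
```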
Your Jetson JetPack 4.5 will already have the NVIDIA Container Runtime installed.
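A sketch for Jetson devices; the image tag below is an assumption, so check Docker Hub for the tag that matches your JetPack version:

```bash
docker volume create roboflow

# --runtime=nvidia uses the NVIDIA Container Runtime bundled with JetPack
docker run -it --rm --network=host --runtime=nvidia \
    --mount source=roboflow,target=/tmp/cache \
    roboflow/roboflow-inference-server-jetson-4.5.0:latest
```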
Your Jetson JetPack 4.6 will already have the NVIDIA Container Runtime installed.
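The same pattern applies; only the image tag changes (again an assumption, so verify the tag for your JetPack release):

```bash
docker run -it --rm --network=host --runtime=nvidia \
    --mount source=roboflow,target=/tmp/cache \
    roboflow/roboflow-inference-server-jetson-4.6.1:latest
```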
Your Jetson JetPack 5.1 will already have the NVIDIA Container Runtime installed.
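As above, with the JetPack 5.1 image (tag assumed; verify on Docker Hub):

```bash
docker run -it --rm --network=host --runtime=nvidia \
    --mount source=roboflow,target=/tmp/cache \
    roboflow/roboflow-inference-server-jetson-5.1.1:latest
```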
Refer to the Roboflow Inference documentation for guidance on how to run your model.