Offline Mode
In certain Enterprise Deployments, you may need to take the Roboflow Deployment server offline for a period of time.
With the optional Offline Mode add-on available to enterprise customers, you can configure the Roboflow Inference Server to cache weights for up to 30 days. This allows it to run completely air-gapped or in locations where an Internet connection is not readily available.
To enable Offline Mode, you'll need to create and attach a Docker volume to /tmp/cache on the Inference Server:
CPU
Image: roboflow/roboflow-inference-server-cpu
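A minimal sketch of starting the CPU server with a cache volume attached (the volume name and port mapping are illustrative; adjust them for your deployment):

```shell
# Create a named volume to persist cached weights across container restarts
docker volume create roboflow-cache

# Run the CPU inference server with the volume mounted at /tmp/cache
docker run -d --name inference-server \
    -p 9001:9001 \
    -v roboflow-cache:/tmp/cache \
    roboflow/roboflow-inference-server-cpu
```

Because the weights live in the named volume rather than the container's writable layer, they survive container upgrades and restarts during the offline period.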
GPU
To use the GPU container, you must first install nvidia-container-runtime.
Image: roboflow/roboflow-inference-server-gpu
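The GPU container is started the same way, with GPU access passed through to the container (volume name and port are illustrative):

```shell
# Create a named volume to persist cached weights
docker volume create roboflow-cache

# Run the GPU inference server; --gpus all requires nvidia-container-runtime
docker run -d --name inference-server \
    --gpus all \
    -p 9001:9001 \
    -v roboflow-cache:/tmp/cache \
    roboflow/roboflow-inference-server-gpu
```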
Jetson 4.5
Jetson devices running JetPack 4.5 already have nvidia-container-runtime (https://github.com/NVIDIA/nvidia-container-runtime) installed.
Image: roboflow/roboflow-inference-server-jetson-4.5.0
Jetson 4.6
Jetson devices running JetPack 4.6 already have nvidia-container-runtime (https://github.com/NVIDIA/nvidia-container-runtime) installed.
Image: roboflow/roboflow-inference-server-jetson-4.6.1
Jetson 5.1
Jetson devices running JetPack 5.1 already have nvidia-container-runtime (https://github.com/NVIDIA/nvidia-container-runtime) installed.
Image: roboflow/roboflow-inference-server-jetson-5.1.1
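On Jetson, the pattern is the same for all three JetPack versions; swap in the image tag matching your JetPack release (4.5.0, 4.6.1, or 5.1.1). A sketch for JetPack 5.1, with an illustrative volume name and port:

```shell
# Create a named volume to persist cached weights
docker volume create roboflow-cache

# Run the Jetson inference server using the NVIDIA container runtime
docker run -d --name inference-server \
    --runtime nvidia \
    -p 9001:9001 \
    -v roboflow-cache:/tmp/cache \
    roboflow/roboflow-inference-server-jetson-5.1.1
```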
Inference Results
The weights will be loaded from your Roboflow account over the Internet (via the License Server if you have configured one) with SSL encryption and stored safely in the Docker volume for up to 30 days.
Your inference results will contain a new expiration key you can use to determine how long the Inference Server can continue to provide predictions before renewing its lease on the weights via an Internet or License Server connection. Once the weight expiration date drops below 7 days, the Inference Server will begin trying to renew the weights' lease once per hour until a connection to the Roboflow API is successfully made.
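A client can monitor the expiration key to know when the server is approaching its renewal window. A minimal sketch, assuming the key holds an ISO-8601 timestamp (the exact field format in the real response may differ):

```python
from datetime import datetime, timezone

def days_until_expiration(expiration_iso: str, now: datetime) -> float:
    """Return the number of days before cached weights expire."""
    # Accept a trailing "Z" as UTC for compatibility with older Pythons
    expires = datetime.fromisoformat(expiration_iso.replace("Z", "+00:00"))
    return (expires - now).total_seconds() / 86400

# Hypothetical inference response shape for illustration
response = {"predictions": [], "expiration": "2024-07-15T00:00:00Z"}

now = datetime(2024, 7, 1, tzinfo=timezone.utc)
remaining = days_until_expiration(response["expiration"], now)
if remaining < 7:
    # Below 7 days, the server itself starts attempting hourly renewals
    print(f"Weights lease expires in {remaining:.1f} days; renewal window active")
```

This mirrors the server's own behavior: alerting once the remaining lease falls under 7 days gives operators time to restore connectivity before predictions stop.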