Windows Subsystem for Linux (WSL) Deployment

For those of you who want to run our TRT container on Windows, another option besides Anaconda is WSL2. Windows Subsystem for Linux allows anyone on Windows 10 or later to run Ubuntu alongside Windows through a terminal interface. You can download Ubuntu 20.04.5 for free from the Microsoft Store. After installation, launch WSL2 by typing Ubuntu into the Windows search bar and starting the application.

Install WSL2 from Microsoft Store
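
As an alternative to the Microsoft Store, recent builds of Windows 10 and 11 also let you install WSL2 and an Ubuntu distribution from an elevated PowerShell or Command Prompt. This is a minimal sketch; the exact distribution name available on your system can be checked first with wsl --list --online.

# Run from an elevated PowerShell prompt (assumes a recent Windows 10/11 build).
# List the distributions available to install:
wsl --list --online
# Install WSL2 with Ubuntu 20.04 (the distribution name may differ on your system):
wsl --install -d Ubuntu-20.04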

Install Docker on WSL2 (Optional)

Ubuntu 20.04.5 LTS may already have Docker installed, but in case it doesn't, below are some useful commands for installing Docker on Ubuntu. As with the Anaconda installation, you can also run Docker Desktop instead of installing Docker Engine directly. Note that the commands below assume the Docker apt repository has already been added, as described in the full documentation: Install Docker Engine on Ubuntu

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
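
Because WSL2 distributions may not start services automatically, you might need to start the Docker daemon yourself before using it. A minimal check, assuming Docker Engine was installed with the commands above:

# Start the Docker daemon (often needed on WSL2 setups without systemd):
sudo service docker start
# Verify the installation by running the hello-world image:
sudo docker run hello-world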

Run Docker Container Inside of WSL2

Once you have Docker successfully installed in your WSL2 environment, you can use it to run the TRT container. Start the container with the command below; it will begin accepting inference requests on port 9001.

sudo docker run --gpus all -p 9001:9001 roboflow/roboflow-inference-server-trt
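
To confirm the container started, you can check the running containers from another terminal. This sketch assumes your WSL2 environment has GPU support set up (an NVIDIA driver on Windows and the NVIDIA Container Toolkit in WSL2); without it, the --gpus all flag will fail.

# List running containers; the roboflow/roboflow-inference-server-trt image should appear:
sudo docker ps
# Optionally confirm the GPU is visible from inside WSL2:
nvidia-smi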

Compile Engine and Run Inference

Now that the GPU TRT container is running in Docker, open another Ubuntu terminal, which we will use to send inference data to the Docker container. Use ls and cd to navigate to the location of an image you want to run inference on, then use the command below.

If this is your first inference, the model will take some time to compile. Subsequent inferences will be faster once the model is built.

base64 your_img.jpg | curl -d @- "http://0.0.0.0:9001/[YOUR MODEL]/[YOUR VERSION]?api_key=[YOUR API KEY]"
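
As a sketch of what the request looks like with concrete values filled in (the image name, model ID, version, and API key below are all hypothetical placeholders for your own), with the JSON response saved to a file for inspection:

# Hypothetical values: dog.jpg is an image in the current directory,
# my-project/2 is your Roboflow model and version, and the API key is your own.
base64 dog.jpg | curl -s -d @- "http://0.0.0.0:9001/my-project/2?api_key=YOUR_API_KEY" > prediction.json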
