Download Model Weights

To run your Roboflow models on your own hardware, you can either use Roboflow Inference (the recommended, automatic method) or manually download the model weights (for specific edge cases).

Roboflow Inference is our open-source, scalable system for running models locally on CPU and GPU devices.

This is the fastest and most reliable way to get started. When you use Inference, you don’t need to manage files or versioning; Roboflow Inference automatically fetches and caches your model weights the first time you run your code.

  • How it works: On your first inference request, the weights are downloaded from Roboflow’s servers and stored locally. All future predictions use this local cache—images are not sent to the cloud.

  • Deployment options:
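The cache-on-first-use behavior described above can be sketched in a few lines. This is a conceptual illustration only, not Roboflow's actual implementation; the cache directory, file layout, and helper names are invented for the example:

```python
import os
import tempfile

# Conceptual sketch of cache-on-first-use: weights are fetched once,
# stored locally, and every later prediction reuses the local copy.
# (Invented paths and helpers; not the real Roboflow Inference code.)
CACHE_DIR = os.path.join(tempfile.gettempdir(), "model_cache")

def fetch_weights(model_id: str) -> str:
    """Stand-in for the one-time download from Roboflow's servers."""
    return f"weights-for-{model_id}"

def load_weights(model_id: str) -> str:
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{model_id}.bin")
    if not os.path.exists(path):
        # First request: download and store locally.
        with open(path, "w") as f:
            f.write(fetch_weights(model_id))
    # All later requests read from the local cache; nothing
    # leaves the device.
    with open(path) as f:
        return f.read()
```

Calling `load_weights` twice hits the network stand-in only once; the second call is served entirely from disk, which is why images never need to be sent to the cloud after the first setup.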

Manual Model Weights Download

Sometimes you may need the raw weights file (e.g., a PyTorch .pt file) to run on devices Roboflow does not yet natively support, such as custom Android implementations.

See the Supported Models table for weights download compatibility.


Premium Feature: Manual weights download is available only to paid users on Core plans and certain Enterprise customers. Read more on our Pricing page.

Method A: Roboflow Platform

Navigate to the model version within your Project. If your plan allows, click the "Download Weights" button to download the weights file, which you can then convert for use on embedded devices.

Download Weights button

Method B: Python SDK

You can also use the Roboflow Python package to download weights directly to your directory:
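A sketch of what that can look like with the `roboflow` package. The workspace/project/version navigation uses the SDK's standard objects; the final `download_weights` call is a hypothetical method name, since the exact weights-download API depends on your SDK version and plan. Check the Roboflow Python SDK documentation for the call available to you:

```python
def download_weights(api_key: str, workspace: str, project_id: str,
                     version_number: int, dest: str = ".") -> str:
    """Sketch: fetch a model version's weights with the roboflow SDK.

    Placeholder values throughout; requires a paid plan with
    weights download enabled.
    """
    from roboflow import Roboflow  # pip install roboflow

    rf = Roboflow(api_key=api_key)
    version = rf.workspace(workspace).project(project_id).version(version_number)
    # Hypothetical method name; consult the SDK docs for the exact
    # weights-download call on your plan and SDK version.
    version.download_weights(location=dest)
    return dest
```

Running something like `download_weights("YOUR_API_KEY", "your-workspace", "your-project", 1)` would place the weights file in the current directory, ready to convert for your target device.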

Note: Roboflow does not provide technical support for model weights used outside of the Roboflow Inference ecosystem. For the best experience, we recommend the Inference path outlined above.
