Download Roboflow Model Weights
When you train a model on Roboflow, or upload model weights to Roboflow, you can run your model on your own hardware through Roboflow Inference or, on some plans, download a model weights file.
Roboflow Inference is an open source, scalable system that you can use to integrate your model directly into your application logic, or to run as a microservice on your hardware through which you can serve your model. Inference is designed for scale: Roboflow uses Inference to power our hosted API, which has run hundreds of millions of inferences.
Inference supports running models on CPU and GPU devices, from laptop computers to cloud servers to NVIDIA Jetsons to Raspberry Pis.
When you deploy your model with Inference, your model weights are downloaded onto your hardware for use. Your weights are downloaded when you first run a model and stored locally. Predictions are made using your device's local compute and images are not sent into Roboflow's cloud by default.
To learn more about deploying models with Inference, refer to the Inference documentation.
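As a minimal sketch of the workflow described above, the snippet below runs a Roboflow-trained model locally with the open source `inference` Python package (`pip install inference`). The model ID and API key are placeholders you would replace with your own:

```python
# Placeholders: substitute your own Roboflow model ID and API key.
MODEL_ID = "your-project/1"
API_KEY = "YOUR_ROBOFLOW_API_KEY"

def run_local_inference(image_path: str):
    """Run a prediction on this device's local compute.

    On the first call, the model weights are downloaded and cached
    locally; subsequent calls run entirely on your hardware, and
    images are not sent to Roboflow's cloud by default.
    """
    # Imported lazily so the module loads without the package installed;
    # requires `pip install inference`.
    from inference import get_model

    model = get_model(model_id=MODEL_ID, api_key=API_KEY)
    return model.infer(image_path)

if __name__ == "__main__":
    print(run_local_inference("image.jpg"))
```

The same `get_model` / `infer` pattern works on CPU or GPU devices; the weights cache means the download cost is paid only once per device.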
Some paid plans also include the ability to download model weights for use on devices that Roboflow does not yet natively support (such as Android and the Raspberry Pi AI Kit).
From within the Roboflow platform, use the "Download Weights" button on the Versions, Models, or Deployments pages of your project once you have trained a model. You will receive a PyTorch .pt
file that you can convert for use with embedded devices.
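One common conversion path is exporting the downloaded PyTorch file to ONNX for an embedded runtime. The sketch below is a hypothetical example, assuming PyTorch is installed, that the .pt file contains a full serialized model (not just a state_dict), and that you know the model's input shape:

```python
def export_to_onnx(weights_path="weights.pt",
                   onnx_path="weights.onnx",
                   input_shape=(1, 3, 640, 640)):
    """Sketch: export a downloaded .pt model to ONNX.

    Assumptions (not guaranteed by Roboflow's download format):
    - weights.pt deserializes to a complete nn.Module
    - input_shape matches the model's expected input
    """
    # Imported lazily; requires `pip install torch`.
    import torch

    model = torch.load(weights_path, map_location="cpu")
    model.eval()
    dummy_input = torch.zeros(*input_shape)
    torch.onnx.export(model, dummy_input, onnx_path)
    return onnx_path
```

If the file holds only a state_dict, you would instead instantiate the model architecture first and load the weights into it before exporting.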
Your model weights will be downloaded and available in your local directory as a weights.pt
file.
Manual weights download is advanced functionality. Roboflow does not provide support for downloaded model weights used outside of its ecosystem.