Download Roboflow Model Weights
When you train a model on, or upload model weights to, Roboflow, your model is available for download on your own hardware through Roboflow Inference.
Roboflow gives users access to a range of computer vision models, some of which come with licensing considerations. Exporting model weight files (such as a .pt or .tf file) directly from Roboflow is unavailable due to licensing limitations.
Roboflow provides all users with a commercial license to use all models hosted by Roboflow, either in our app or in our hosted APIs. This commercial license only applies to in-app or Roboflow-hosted use cases. It does not apply in cases where a model is being self-hosted or used outside of the Roboflow ecosystem.
Downloading Models with Inference
Roboflow Inference is an open source, scalable system that you can use to directly integrate your model into your application logic, or to run a microservice on your hardware through which you can run your model. Inference is designed for scale: Roboflow uses Inference to power our hosted API which has run hundreds of millions of inferences.
Inference supports running models on CPU and GPU devices, from cloud servers to NVIDIA Jetsons to Raspberry Pis.
When you deploy your model with Inference, your model weights are downloaded onto your hardware for use. Your weights are downloaded when you first run a model, either through the Inference SDK or the Inference Docker container.
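As a minimal sketch of this flow with the Inference SDK, the snippet below loads a model and runs a prediction; the model ID, API key, and image path are placeholders you would replace with your own values. On the first call, Inference fetches your weights to the local machine; subsequent runs reuse the cached copy.

```python
# Sketch: running a fine-tuned model through the Inference SDK.
# Assumes `inference` is installed (pip install inference).
# "your-project/1" and "YOUR_API_KEY" are placeholders, not real values.
from inference import get_model

# Loading the model triggers the one-time weights download to your hardware.
model = get_model(model_id="your-project/1", api_key="YOUR_API_KEY")

# Run inference on a local image; later calls reuse the cached weights.
results = model.infer("image.jpg")
print(results)
```

The same download-on-first-run behavior applies if you instead start the Inference Docker container and send requests to it over HTTP.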
In order to stay compliant with licensing requirements, fine-tuned model weights can only be used in an Inference deployment. For free and Starter Plan customers, your Roboflow license permits use of your models on one device. Enterprise customers can deploy models on multiple devices.
To learn more about deploying models with Inference, refer to the Inference documentation.
Training in a Notebook
If you need access to the PyTorch or TensorFlow weights for a model, export your dataset and train a model using a notebook. Roboflow has several open source notebooks you can use to train models using popular architectures such as YOLO and SAM. Models trained using Roboflow's cloud training are proprietary and not available for use outside of Roboflow without a license.
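For example, a notebook training run with a YOLO architecture might look like the sketch below, assuming the `ultralytics` package (pip install ultralytics) and a data.yaml exported with your Roboflow dataset; the paths and hyperparameters are illustrative placeholders.

```python
# Sketch: training YOLOv8 on a dataset exported from Roboflow.
# Assumes `ultralytics` is installed; data.yaml is a placeholder path
# pointing at your exported dataset's configuration file.
from ultralytics import YOLO

# Start from a pretrained checkpoint and fine-tune on your data.
model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=100, imgsz=640)
```

After training, the weight files (e.g. best.pt under the run's weights directory) are on your machine, so you hold the PyTorch weights directly.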
If you train on an architecture with a restrictive license and want to use the resulting model for enterprise purposes, you will need to contact the model vendor to secure a license. Otherwise, if you train a supported model on your own, you can upload it to Roboflow to deploy with Inference. Deploying with Inference allows you to run your model on one device (for free and Starter Plan customers) or multiple devices (for Enterprise customers). See our list of supported models you can upload to find out which models are covered.
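Uploading your own trained weights can be sketched with the `roboflow` Python package (pip install roboflow); the workspace, project, version number, and weights path below are placeholders for your own values.

```python
# Sketch: attaching locally trained YOLOv8 weights to a Roboflow
# dataset version so the model can be deployed with Inference.
# All identifiers and paths are placeholders.
import roboflow

rf = roboflow.Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")

# Upload the trained weights against the dataset version you trained on.
project.version(1).deploy(
    model_type="yolov8",
    model_path="runs/detect/train/",
)
```

Once the upload is processed, the model behaves like any other fine-tuned model on your workspace and can be run through Inference as described above.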