Model Deployment Requirements
Notes on the information and settings required to deploy any model trained or hosted on Roboflow and/or Roboflow Universe.
Deploying models trained or hosted on Roboflow and Roboflow Universe requires:
- 1. A Roboflow account
- 2.
- 3.
- 4. The Model ID (i.e., the version number) of the Roboflow Train model, or of the hosted model created from a custom weights upload.
- 5. Your Roboflow Private API Key (required if you authorize your workspace credentials in your code with roboflow.Roboflow(api_key="PRIVATE_API_KEY") instead of roboflow.login() or roboflow.login(force=True)). Both authorization approaches are sketched below.
- 6. A computer/laptop/server architecture or edge device that is compatible with the Inference Server.
  - Compatibility depends on the system architecture, performance expectations, and specs of your computer/laptop/server or edge device.
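A minimal sketch of both authorization paths with the Roboflow Python package, assuming an object detection project; the workspace name, project name, version number, and image path are placeholders to replace with your own values:

```python
import roboflow
from roboflow import Roboflow

# Option 1: interactive login (stores your credentials locally).
roboflow.login()  # use roboflow.login(force=True) to re-authenticate

# Option 2: authorize directly with your Private API Key (placeholder below).
rf = Roboflow(api_key="PRIVATE_API_KEY")

# Load a specific trained model version.
# "workspace-name", "project-name", and the version number 1 are placeholders.
project = rf.workspace("workspace-name").project("project-name")
model = project.version(1).model

# Run a test prediction on a local image to confirm the model is reachable.
prediction = model.predict("example.jpg", confidence=40, overlap=30)
print(prediction.json())
```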
NOTE: Inference Server 1.0 is the Legacy inference/deployment method. Inference Server 2.0 is the new, preferred method for deploying models trained or hosted with Roboflow.
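Once an Inference Server is running, you can send images to it over HTTP. The sketch below is an assumption-laden example: it assumes the server is running locally on its default port (9001) and exposes the same HTTP interface as Roboflow's hosted detection API; the project name, version number, API key, and image path are placeholders. Check the Inference Server documentation for the exact endpoints supported by your server version.

```python
import base64
import requests

# Placeholders: replace with your project name, version number, and Private API Key.
PROJECT = "project-name"
VERSION = 1
API_KEY = "PRIVATE_API_KEY"

# Read and base64-encode a local test image.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# POST the encoded image to the locally running Inference Server (assumed at localhost:9001).
response = requests.post(
    f"http://localhost:9001/{PROJECT}/{VERSION}",
    params={"api_key": API_KEY},
    data=image_b64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(response.json())
```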