Model Deployment Requirements

Notes on the information and settings required to deploy any model trained on or hosted with Roboflow and/or Roboflow Universe.

Deployment Requirements

Deploying models trained or hosted on Roboflow and Roboflow Universe requires:
  1. A Roboflow account.
  2. The Roboflow Workspace ID for the project (optional).
  3. The Roboflow Project ID for the project.
  4. The Model ID (i.e., the version number) of the Roboflow Train model, or of the hosted model from a custom weights upload.
  5. Your Roboflow private API key (required if you authorize your workspace credentials in your code with Roboflow(api_key="PRIVATE_API_KEY") instead of roboflow.login() or roboflow.login(force=True)).
  6. A computer, laptop, server, or edge device that is compatible with the Inference Server.
    • Compatibility depends on the system architecture, performance expectations, and specs of your computer, laptop, server, or edge device.
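The identifiers listed above map directly onto a hosted inference call. As a minimal sketch, the helper below assembles the hosted endpoint URL from a project ID, version number, and API key; all identifier values are placeholders, and `detect.roboflow.com` is the hosted endpoint used for object detection models.

```python
# Sketch: assembling a hosted-inference URL from the deployment
# requirements above. All identifier values are placeholders.

def hosted_model_url(project_id: str, version: int, api_key: str) -> str:
    """Build the hosted inference endpoint for one model version."""
    return f"https://detect.roboflow.com/{project_id}/{version}?api_key={api_key}"

# The equivalent flow with the Roboflow Python SDK looks roughly like:
#   from roboflow import Roboflow
#   rf = Roboflow(api_key="PRIVATE_API_KEY")
#   model = rf.workspace("WORKSPACE_ID").project("PROJECT_ID").version(1).model

print(hosted_model_url("my-project", 1, "PRIVATE_API_KEY"))
```

Note that the Workspace ID is only needed when selecting the project through the SDK, which is why it is marked optional above.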
NOTE: Inference Server 1.0 is the legacy inference/deployment method. Inference Server 2.0 is the new, preferred method for deploying models trained or hosted with Roboflow.
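For reference, one common way to stand up Inference Server 2.0 on a compatible machine is the CPU Docker image published by Roboflow; port 9001 is the server's default, and GPU and edge-device images exist for other architectures.

```shell
# Start Inference Server 2.0 locally using the CPU Docker image.
# Models are then served at http://localhost:9001.
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu
```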