Python Package

Learn how to install and use the Roboflow Python package.
The Roboflow Python package provides an easy-to-use interface through which you can interact with the Roboflow platform. With the Python package, you can:
  1. List information about workspaces, projects, and versions.
  2. Upload images to your projects.
  3. Perform inference with specific model versions.
  4. Visualize and save predictions that you've made with a model.


There are two ways you can install the Roboflow Python package: using pip or manually through our project GitHub repository.

From PyPI

pip install roboflow

From GitHub

git clone https://github.com/roboflow/roboflow-python.git
cd roboflow-python
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt

Install Microsoft Visual C++ Redistributable (Windows)

The Microsoft Visual C++ Redistributable is required by some Python packages. You can download it from the official Microsoft website. After downloading the appropriate file for your system, run the installer and follow the prompts to install the redistributable.

Getting Started

Below are functions you can use to get started with the Roboflow API.

Instantiate the Roboflow API

Before you start using the Roboflow API, make sure you have instantiated the Roboflow object from our Python package. You will need to provide your Roboflow API key. We recommend storing the key in an environment variable and retrieving it in your script:
import os
import roboflow

rf = roboflow.Roboflow(api_key=os.environ["ROBOFLOW_API_KEY"])

Instantiating a Project Object

To use a model in your Python script, you will need to specify the Project in which your model is stored on Roboflow. You can instantiate a project object using the following code:
# Load a certain project (workspace url is optional)
project = rf.project("PROJECT_ID")
To find your project ID, first select the export button on the generated dataset version you want to work with. You can find this button on the "Model" tab of your project in the Roboflow dashboard.
Next, click the "Show download code" option and then the "Continue" button.
The pop-up will present you with the code you need to start using your model, including your project ID. Copy and paste the snippet into your code editor of choice.

Quick Start

Below is a code snippet with common functions used with the Roboflow API. We also have a Templates library that shows you how to use the results returned by the API for different patterns, from counting objects to drawing bounding boxes on images.
import os
import roboflow

rf = roboflow.Roboflow(api_key=os.environ["ROBOFLOW_API_KEY"])

# List all projects for your workspace
workspace = rf.workspace()

# Get a project
project = rf.workspace().project("PROJECT_ID")

# List all versions of a specific project
versions = project.versions()

# Upload an image to the dataset
project.upload("UPLOAD_IMAGE.jpg")

# Retrieve the model of a specific project
model = project.version("1").model

# Predict on a local image
prediction = model.predict("YOUR_IMAGE.jpg")

# Predict on a hosted image
prediction = model.predict("YOUR_IMAGE_URL", hosted=True)

# Plot the prediction
prediction.plot()

# Convert predictions to JSON
prediction.json()

# Save the prediction as an image
prediction.save(output_path="predictions.jpg")
Accessing JSON Response Object values from inference results (Object Detection)
# Convert predictions to JSON
predictions_json = prediction.json()

# Access JSON records for predictions
## Saving all x-values for predictions to a list, preds_x
# All available options: 'x', 'y', 'width', 'height',
# 'class', 'confidence', 'image_path', and 'prediction_type'
preds_x = []
for result in prediction.json()['predictions']:
    preds_x.append(result['x'])
Accessing JSON Response Object values for image size (all models)
# Access JSON records for the image size (width and height)
image_size = prediction.json()['image']

# Access JSON record for the image width
width = prediction.json()['image']['width']

# Access JSON record for the image height
height = prediction.json()['image']['height']
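The records above use center-based box coordinates. As a quick illustration of working with this structure offline, the sketch below converts predictions shaped like the object-detection response (all values are made up) into corner coordinates:

```python
# A response shaped like the Roboflow object-detection JSON output
# (all values below are made up for illustration).
predictions_json = {
    "predictions": [
        {"x": 120.5, "y": 80.0, "width": 40.0, "height": 60.0,
         "class": "dog", "confidence": 0.91},
        {"x": 300.0, "y": 150.0, "width": 80.0, "height": 100.0,
         "class": "cat", "confidence": 0.64},
    ],
    "image": {"width": 640, "height": 480},
}

def to_corners(pred):
    # Convert a center-based box (x, y, width, height) to (x0, y0, x1, y1).
    x0 = pred["x"] - pred["width"] / 2
    y0 = pred["y"] - pred["height"] / 2
    return (x0, y0, x0 + pred["width"], y0 + pred["height"])

boxes = [to_corners(p) for p in predictions_json["predictions"]]
print(boxes[0])  # (100.5, 50.0, 140.5, 110.0)
```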

API Reference

Methods follow a cascading pattern:
  • Roboflow()
  • Roboflow().workspace()
  • Roboflow().workspace().project()


Everything in the package works through the Roboflow() object. You must pass your private API key to the object; the key can be found under the settings of your workspace. Full instructions for obtaining your API key are available in the Roboflow documentation.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")


Workspace details such as the name, URL, and a list of projects are available through the workspace() method:
workspace = rf.workspace()

# Name
print(workspace.name)

# Projects
print(workspace.projects())


Workspace objects can provide a reference to a specific project using the project(projectID) method. See the "Instantiating a Project Object" section above for an easy way to find the projectID.
project = workspace.project("YOUR_PROJECT_ID")
You can upload a local image to this project with:
# default upload format
project.upload("UPLOAD_IMAGE.jpg")

## if you want to attempt reuploading the image on failure
# project.upload("UPLOAD_IMAGE.jpg", num_retry_uploads=3)

## uploading an image to a specific batch
# project.upload("UPLOAD_IMAGE.jpg", batch_name="YOUR_BATCH_NAME")

## uploading an image to a specific batch and split
# project.upload("UPLOAD_IMAGE.jpg", split="train", batch_name="YOUR_BATCH_NAME")

## reuploading on failure to a specific batch
# project.upload("UPLOAD_IMAGE.jpg", batch_name="YOUR_BATCH_NAME", num_retry_uploads=3)
You can upload a hosted image to this project with:
# default upload format
project.upload("YOUR_IMAGE_URL", hosted=True)

## if you want to attempt reuploading the image on failure
# project.upload("YOUR_IMAGE_URL", hosted=True, num_retry_uploads=3)

## uploading an image to a specific batch
# project.upload("YOUR_IMAGE_URL", hosted=True, batch_name="YOUR_BATCH_NAME")

## uploading an image to a specific batch and split
# project.upload("YOUR_IMAGE_URL", hosted=True, split="train", batch_name="YOUR_BATCH_NAME")

## reuploading on failure to a specific batch
# project.upload("YOUR_IMAGE_URL", hosted=True, batch_name="YOUR_BATCH_NAME", num_retry_uploads=3)
You can upload an image with its annotations to a project with:
import glob
import os
from roboflow import Roboflow
# roboflow params
api_key = "YOUR_API_KEY"
upload_project_name = "YOUR_PROJECT_NAME"
# glob params
dir_name = "PATH/TO/IMAGES"
image_dir = os.path.join(dir_name)
file_extension_type = ".jpg"
# annotation info (using the .coco.json format as an example)
# NOTE: any annotation format supported by Roboflow uploads can be used here
annotation_filename = "PATH/TO/_annotations.coco.json"
# pull down reference project
rf = Roboflow(api_key=api_key)
upload_project = rf.workspace().project(upload_project_name)
# create image glob
image_glob = glob.glob(image_dir + '/*' + file_extension_type)
# upload images
for image in image_glob:
    response = upload_project.upload(image, annotation_filename)
    print(response)
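If you want to sanity-check the glob pattern before uploading anything, the self-contained sketch below (using a throwaway temporary directory; file names are made up) shows exactly which files the pattern matches:

```python
import glob
import os
import tempfile

# Throwaway directory with a mix of files, to show what the glob
# pattern in the upload script actually matches.
dir_name = tempfile.mkdtemp()
for name in ["a.jpg", "b.jpg", "notes.txt"]:
    open(os.path.join(dir_name, name), "w").close()

file_extension_type = ".jpg"
image_glob = glob.glob(dir_name + "/*" + file_extension_type)

# Only the two .jpg files match.
print(sorted(os.path.basename(p) for p in image_glob))  # ['a.jpg', 'b.jpg']
```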


Next, if you want to access the versions of a specific project:
all_versions = project.versions()
# or
version = project.version(1)
If you've trained a model on this version, you can perform inference on the model with either a local or hosted image:
model = version.model
prediction = model.predict("YOUR_IMAGE.jpg", confidence=40, overlap=30)
# or, for a hosted image
prediction_hosted = model.predict("YOUR_IMAGE_URL", hosted=True, confidence=40, overlap=30)
You can save a visualization of the prediction:
prediction.save(output_path="predictions.jpg")
Or access your model's prediction as JSON:
prediction.json()

Adjusting the `model` object's confidence and overlap thresholds

The model object defaults to thresholds of confidence=40 and overlap=30.
These can be updated through the model object's attributes of the same name:
  • model.confidence adjusts the confidence threshold
  • model.overlap adjusts the overlap threshold
Full working example
# accessing the model
rf = Roboflow(api_key="YOUR_API_KEY")
inference_project = rf.workspace().project("YOUR_PROJECT_NAME")
model = inference_project.version(1).model
# adjusting the model confidence threshold
model.confidence = 50
model.overlap = 25
# using adjusted model
predictions = model.predict("PATH_TO_IMAGE").json()["predictions"]
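Note that while the thresholds above are percentages, the confidence values in the returned JSON are fractions between 0 and 1. If you want to re-threshold results you already have without calling predict() again, a simple client-side filter works; the sample list below is made up for illustration:

```python
# Predictions shaped like the "predictions" list in the JSON response
# (made-up values; JSON confidences are fractions, not percentages).
predictions = [
    {"class": "dog", "confidence": 0.92},
    {"class": "cat", "confidence": 0.41},
    {"class": "dog", "confidence": 0.23},
]

# Keep only predictions at or above a 50% confidence threshold.
min_confidence = 0.5
kept = [p for p in predictions if p["confidence"] >= min_confidence]
print([p["class"] for p in kept])  # ['dog']
```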

Use images in a Locally Running Inference Server Container

If you have a Roboflow inference server running locally through any of our container deploys, such as the NVIDIA Jetson or Raspberry Pi container, you can use version() to point at that locally running server instead of the remote endpoint by specifying its IP address.
The local inference server must be running and reachable before you execute the Python script. When using our Docker containers with the --net=host flag, we recommend referencing the server via the localhost format.
local_inference_server_address = "http://localhost:9001/"
version_number = 1
local_model = project.version(version_number=version_number, local=local_inference_server_address).model
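One convenient pattern is to select the endpoint from an environment variable, so the same script runs against either the local container or the hosted API. The helper and environment-variable name below are illustrative, not part of the package:

```python
import os

def inference_server_address(default=None):
    # Hypothetical helper: return the local server URL if the (illustrative)
    # LOCAL_INFERENCE_URL environment variable is set, otherwise `default`
    # (None meaning "use the remote endpoint").
    return os.environ.get("LOCAL_INFERENCE_URL", default)

local = inference_server_address()
# local_model = project.version(1, local=local).model  # requires API access
print(local)
```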