Python Package

Learn how to install and use the Roboflow Python package.
The Roboflow Python package provides an easy-to-use interface through which you can interact with the Roboflow platform. With the Python package, you can:
  1. List information about workspaces, projects, and versions.
  2. Upload images to your projects.
  3. Perform inference with specific model versions.
  4. Visualize and save predictions that you've made on the model.

Installation

There are two ways to install the Roboflow Python package: with pip, or manually from the project's GitHub repository.

From PyPI

pip install roboflow

From GitHub

git clone https://github.com/roboflow-ai/roboflow-python.git
cd roboflow-python
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt

Getting Started

Below are functions you can use to get started with the Roboflow API.

Instantiate the Roboflow API

Before you start using the Roboflow API, make sure you have instantiated the Roboflow object from our Python package. You will need to provide your Roboflow API key. We recommend storing the key in an environment variable and retrieving it in your script.
import roboflow
rf = roboflow.Roboflow(api_key="YOUR_API_KEY_HERE")
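As a sketch of the environment-variable approach, you could wrap the lookup in a small helper. The variable name `ROBOFLOW_API_KEY` below is an illustrative choice, not a convention the package itself requires:

```python
import os

def get_api_key(var_name: str = "ROBOFLOW_API_KEY") -> str:
    """Fetch the Roboflow API key from an environment variable.

    The variable name is illustrative; use whichever name you exported.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Environment variable {var_name} is not set")
    return key

# rf = roboflow.Roboflow(api_key=get_api_key())
```

This keeps the key out of your source code and version control; set the variable once in your shell (e.g. `export ROBOFLOW_API_KEY=...`) before running the script.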

Instantiating a Project Object

To use a model in your Python script, you will need to specify the Project in which your model is stored on Roboflow. You can instantiate a project object using the following code:
# Load a certain project (workspace url is optional)
project = rf.project("PROJECT_ID")
To find your project ID, first select the export button on the generated dataset with which you want to work. You can find this button on the "Model" tab of your project in the Roboflow dashboard.
Next, click the "Show download code" option and then the "Continue" button.
The pop-up will present you with a code snippet, including your project ID, that you can copy and paste into your code editor of choice.

Quick Start

Below is a code snippet with common functions used with the Roboflow API. We also have a Templates library that shows you how to use the results returned by the API for different patterns, from counting objects to drawing bounding boxes on images.
import roboflow

rf = roboflow.Roboflow(api_key="YOUR_API_KEY_HERE")
# List all projects for your workspace
workspace = rf.workspace()
# Load a specific project
project = rf.project("PROJECT_ID")
# List all versions of the project
project.versions()
# Upload an image to the dataset
project.upload("UPLOAD_IMAGE.jpg")
# Retrieve the model of a specific project version
model = project.version("1").model
# Predict on a local image
prediction = model.predict("YOUR_IMAGE.jpg")
# Predict on a hosted image
prediction = model.predict("https://www.yourimageurl.com", hosted=True)
# Plot the prediction
prediction.plot()
# Convert predictions to JSON
prediction.json()
# Save the prediction as an image
prediction.save(output_path="predictions.jpg")
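Patterns like the object counting mentioned above build on the dictionary that prediction.json() returns. As a sketch, the sample below imitates the general shape of a detection result (a "predictions" list of dicts with "class" and "confidence" keys); treat the exact field names as illustrative rather than a guaranteed schema:

```python
from collections import Counter

# Illustrative stand-in for prediction.json(); field names are assumptions.
sample = {
    "predictions": [
        {"class": "helmet", "confidence": 0.91, "x": 120, "y": 80, "width": 40, "height": 40},
        {"class": "helmet", "confidence": 0.72, "x": 300, "y": 95, "width": 38, "height": 42},
        {"class": "person", "confidence": 0.88, "x": 210, "y": 160, "width": 90, "height": 180},
    ]
}

def count_by_class(result: dict) -> Counter:
    """Count detections per class label in a prediction result."""
    return Counter(p["class"] for p in result.get("predictions", []))

print(count_by_class(sample))  # Counter({'helmet': 2, 'person': 1})
```

In a real script you would pass prediction.json() to count_by_class() instead of the hand-written sample.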

API Reference

Methods follow the following cascading pattern:
  • Roboflow()
  • Roboflow().workspace()
  • Roboflow().workspace().project()

Roboflow()

Everything in the package works through the Roboflow() object. You must pass your private API key to the object; you can find the key under your workspace settings. Full instructions for obtaining your API key are in the Roboflow REST API documentation.
from roboflow import Roboflow
# obtaining your API key: https://docs.roboflow.com/rest-api#obtaining-your-api-key
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")

workspace()

Workspace details such as the name, URL, and a list of projects can be retrieved with the workspace() method:
workspace = rf.workspace()
# name
workspace.name
# URL
workspace.url
# Projects
workspace.projects()

project()

Workspace objects can provide a reference to a specific project using the project(projectID) method. See the "Instantiating a Project Object" section above for an easy way to find your project ID.
project = workspace.project("YOUR_PROJECT_ID")
You can upload a local image to this project with:
# default upload format
project.upload("UPLOAD_IMAGE.jpg")
## retry the upload on failure
# project.upload("UPLOAD_IMAGE.jpg", num_retry_uploads=3)
## upload to a specific batch
# project.upload("UPLOAD_IMAGE.jpg", batch_name="YOUR_BATCH_NAME")
## upload to a specific batch and split
# project.upload("UPLOAD_IMAGE.jpg", split="train", batch_name="YOUR_BATCH_NAME")
## retry a batch upload on failure
# project.upload("UPLOAD_IMAGE.jpg", batch_name="YOUR_BATCH_NAME", num_retry_uploads=3)
You can upload a hosted image to this project with:
# default upload format
project.upload("https://www.yourimageurl.com", hosted=True)
## retry the upload on failure
# project.upload("https://www.yourimageurl.com", hosted=True, num_retry_uploads=3)
## upload to a specific batch
# project.upload("https://www.yourimageurl.com", hosted=True, batch_name="YOUR_BATCH_NAME")
## upload to a specific batch and split
# project.upload("https://www.yourimageurl.com", hosted=True, split="train", batch_name="YOUR_BATCH_NAME")
## retry a batch upload on failure
# project.upload("https://www.yourimageurl.com", hosted=True, batch_name="YOUR_BATCH_NAME", num_retry_uploads=3)
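To upload a whole folder of local images, you can combine project.upload() with a simple directory scan. The find_images helper and the "dataset/" path below are illustrative, not part of the package:

```python
from pathlib import Path

def find_images(folder: str, exts=(".jpg", ".jpeg", ".png")) -> list:
    """Return image files in a folder, sorted for a deterministic upload order."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.is_file() and p.suffix.lower() in exts)

# for path in find_images("dataset/"):
#     project.upload(str(path), batch_name="bulk-upload", num_retry_uploads=3)
```

Sorting the paths makes repeated runs predictable, and num_retry_uploads guards each upload against transient network failures.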

version()

Next, if you want to access the versions of a specific project:
all_versions = project.versions()
# or
version = project.version(1)
If you've trained a model on this version, you can perform inference on the model with either a local or hosted image:
model = version.model
prediction = model.predict("YOUR_IMAGE.jpg", confidence=40, overlap=30)
# or
prediction_hosted = model.predict("https://www.yourimageurl.com", hosted=True, confidence=40, overlap=30)
You can save a visualization of the prediction.
prediction.plot()
prediction.save(output_path="predictions.jpg")
Or access your model's prediction as JSON.
prediction.json()
# or
prediction_hosted.json()
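Once you have the JSON, a common next step is to discard low-confidence detections before further processing. The result dictionary below is an illustrative stand-in for the JSON output, and its "predictions"/"confidence" keys are assumptions based on typical detection results:

```python
# Illustrative stand-in for a prediction's JSON output; keys are assumptions.
result = {
    "predictions": [
        {"class": "car", "confidence": 0.95},
        {"class": "car", "confidence": 0.41},
        {"class": "truck", "confidence": 0.63},
    ]
}

def filter_confident(result: dict, threshold: float = 0.5) -> list:
    """Keep only detections at or above the confidence threshold."""
    return [p for p in result.get("predictions", []) if p["confidence"] >= threshold]

kept = filter_confident(result, threshold=0.5)
print([p["class"] for p in kept])  # ['car', 'truck']
```

Note that model.predict() already accepts a confidence parameter; a post-hoc filter like this is useful when you want to apply different thresholds to one set of results without re-running inference.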

Use Images with a Locally Running Inference Server Container

If you have a Roboflow inference server running locally through any of our container deploys, such as the NVIDIA Jetson or Raspberry Pi containers, you can use version() to point to that local server instead of the remote endpoint by specifying the address of the locally running inference server.
The local inference server must be running and reachable before you execute the Python script. When using our Docker containers with the --net=host flag, we recommend referencing the server via localhost.
local_inference_server_address = "http://localhost:9001/"
version_number = 1
local_model = project.version(version_number=version_number, local=local_inference_server_address)
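Since the script fails if the local server is unreachable, it can help to check connectivity first. The server_is_reachable helper below is a hypothetical sketch using a plain TCP connection, not part of the Roboflow package:

```python
import socket
from urllib.parse import urlparse

def server_is_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the URL's host:port succeeds."""
    parsed = urlparse(url)
    port = parsed.port or 80
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False

local_inference_server_address = "http://localhost:9001/"
# if server_is_reachable(local_inference_server_address):
#     local_model = project.version(version_number=1, local=local_inference_server_address)
# else:
#     local_model = project.version(version_number=1)  # fall back to the remote endpoint
```

A TCP check only confirms that something is listening on the port; it does not verify that the listener is actually the inference server, so treat it as a quick sanity check rather than a guarantee.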