Box Prompting

Annotate images with our AI Labeling tool that improves with each example.

Box Prompting uses one or more example bounding boxes as prompts to generate annotations for similar objects. Each example fine-tunes a model behind the scenes, so predictions improve with every image you label. Box Prompting can save you hours of manually drawing bounding boxes around objects that appear many times in a dataset.

Step 1: Annotate at least one example of each class

Box Prompting requires at least one bounding box annotation to serve as an example for generating predictions.

Step 2: Activate the Box Prompting tool

Make sure the Box Prompting tool is active to see the magic happen! Box Prompting will generate predictions based on your annotations. Predictions will appear with dotted lines any time you save or delete an annotation.

Predictions are not annotations and will not be saved when navigating away from the image. See Step 4 for how to save your predictions.

Step 3: Fine-tune your predictions

From here, you can:

Adjust the confidence

Use the slider to change the confidence threshold, which controls how many predictions are displayed. A higher threshold means fewer predictions.
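
Conceptually, the slider is a filter over the model's scored predictions. Here is a minimal sketch in Python; the prediction structure and scores are hypothetical illustrations, not Roboflow's actual data format:

    # Hypothetical predictions: each has a label, a box, and a confidence score.
    predictions = [
        {"label": "screw", "box": (34, 50, 80, 96), "confidence": 0.91},
        {"label": "screw", "box": (120, 44, 165, 90), "confidence": 0.62},
        {"label": "screw", "box": (300, 210, 340, 250), "confidence": 0.38},
    ]

    def visible_predictions(preds, threshold):
        """Keep only predictions at or above the confidence threshold."""
        return [p for p in preds if p["confidence"] >= threshold]

    # Raising the threshold hides low-confidence predictions.
    print(len(visible_predictions(predictions, 0.30)))  # 3 shown
    print(len(visible_predictions(predictions, 0.60)))  # 2 shown
    print(len(visible_predictions(predictions, 0.90)))  # 1 shown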

Provide negative examples

If a prediction is incorrect, you can right-click the box and select "Convert to Negative". This teaches the model not to label that type of object in the future. Negative examples appear shaded in.

You can also convert existing annotations to negative through the same right-click menu.

Add additional examples

Any additional annotations you create with other labels help the model distinguish between the objects in the image. After adding more examples, click "Predict" to generate new predictions.

For best results, provide 1-2 examples of every unique object in your images.

You may find it easier to fine-tune predictions by lowering the confidence threshold and converting excess predictions to negative, rather than setting the confidence high.

Step 4: Approve predictions

Once the predictions are to your liking, click "Approve Predictions". This converts all predictions to annotations and ensures they are saved when you navigate away.

From here, you can edit and delete annotations as usual.

Step 5: Run on more images

As you annotate, images are added to the tool's training set. Box Prompting trains on any image with human-drawn or human-edited annotations; predictions that are approved without edits are not included.

This means you can click "Predict" on new images and still generate predictions without drawing a single box! You can check the number of images included in the training set in the tool menu.

Best Practices

Provide an example for each visually distinct object.

On images that contain several objects with similar appearances, it can be helpful to provide at least one example for each significant variation in color, size, or camera angle.

Annotate similar images in the same annotation session.

Box Prompting works best when your images have similar contents, allowing you to quickly reuse your training examples while generating predictions.

Tighten bounding boxes to avoid accumulating errors.

Often, the predicted bounding box is larger than it should be. Reduce the size to avoid erroneously including parts of the background.

Box Prompting works best on photographs or still frames.

Although we can provide predictions for documents or computer graphics, Box Prompting works best for identifying repetitive items in photos.

Provide negative examples to improve accuracy.

If you notice a particular annotation class produces false positive predictions, you can right-click and select "Convert to Negative" to provide a negative example to the Box Prompting model.

Limitations

The Box Prompting model downscales images before running inference, so you may get unsatisfactory results when trying to detect small items in a large image.

You get optimal results with images 1000px or smaller in either dimension. When an image is 2000px or larger and contains small bounding boxes (less than roughly 5% of the image's width or height), you'll see a warning that those boxes will not work well.
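
As a rough sketch of that heuristic, the check below uses the 2000px and ~5% thresholds described above; the function itself is illustrative, not Roboflow's implementation:

    def size_warning(image_w, image_h, boxes, large_px=2000, min_frac=0.05):
        """Return True when a large image contains boxes that are small
        relative to the image dimensions (boxes are x_min, y_min, x_max, y_max)."""
        if max(image_w, image_h) < large_px:
            return False
        return any(
            (x2 - x1) < min_frac * image_w or (y2 - y1) < min_frac * image_h
            for x1, y1, x2, y2 in boxes
        )

    # A 4000x3000 photo with a 60x40 box (~1.5% of the width) triggers the warning.
    print(size_warning(4000, 3000, [(100, 100, 160, 140)]))  # True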

These limitations only apply to Box Prompting. When training a model, you can apply Tiling as a preprocessing step during version generation to prevent these effects for trained models.
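
For intuition, tiling splits a large image into smaller pieces so that small objects occupy a larger fraction of each tile. A minimal sketch using Pillow; the tile size and overlap here are arbitrary examples, not Roboflow's defaults:

    from PIL import Image

    def tile_image(path, tile=1000, overlap=100):
        """Split an image into overlapping tiles so small objects occupy
        a larger fraction of each tile than of the full image."""
        img = Image.open(path)
        w, h = img.size
        step = tile - overlap
        tiles = []
        for top in range(0, max(h - overlap, 1), step):
            for left in range(0, max(w - overlap, 1), step):
                tiles.append(img.crop((left, top, min(left + tile, w), min(top + tile, h))))
        return tiles

    # A 4000x3000 image yields 20 tiles of at most 1000x1000 pixels each.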

Screenshots referenced above: activating Box Prompting in the annotation toolbar; adjusting the number of predictions displayed by changing the confidence threshold; right-clicking an incorrect prediction and selecting Convert to Negative; approving predictions to save them to the image.