Create Augmented Images

Create augmented images to improve model performance.

Image augmentation is a step where augmentations are applied to existing images in your dataset. This process can help improve the ability of your model to generalize and thus perform more effectively on unseen images.

Roboflow supports the following augmentations:

  • Flip

  • 90 degree rotation

  • Random rotation

  • Random crop

  • Random shear

  • Blur

  • Exposure

  • Random noise

  • Cutout (paid plans only)

  • Mosaic (paid plans only)

We recommend starting a project with no augmentations. This allows you to evaluate the quality of your raw dataset. If you add augmentations from the start and your model doesn't perform as well as expected, you will not have a baseline against which to compare performance.

If your model doesn't perform well without augmentations, you may need to investigate class balance, data representation, and dataset size. When you have a dataset on which you have successfully trained a model without augmentations, you can add augmentations to further improve model performance.

Creating Augmented Images prior to Training

Applying your augmentations when you generate a dataset version ("offline augmentation"), rather than at the time of training, has a few key benefits.

  1. Model reproducibility is increased. With Roboflow, you have a record of exactly how each image was augmented. For example, you may find your model performs better on bright images than on dark images, so you know to collect more low-light training data.

  2. Training time is decreased. Augmentations are CPU-constrained operations. When you’re training on your GPU and conducting augmentations on-the-fly, your GPU is often waiting for your CPU to provide augmented data at each epoch. That adds up!

  3. Training costs are decreased. Because augmentations are CPU-constrained operations, your expensive, rented GPU is often waiting to be fed images for training.
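
If you script version creation, the Roboflow Python package can bake augmentations into a version for you. Below is a minimal sketch, assuming illustrative workspace/project IDs and settings keys (see the Create a Project Version API reference for the exact option names):

```python
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")

# Generate a version with preprocessing and augmentations applied offline.
# The settings keys below are illustrative assumptions, not a complete list.
version_number = project.generate_version(settings={
    "preprocessing": {
        "resize": {"width": 640, "height": 640, "format": "Stretch to"},
    },
    "augmentation": {
        "flip": {"horizontal": True, "vertical": False},
        "blur": {"pixels": 3},
    },
})
print(f"Generated version {version_number}")
```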

Add Augmentations

To add augmentations, go to the Versions tab associated with your project in the Roboflow dashboard. Then, click "Augmentations" to set up augmentations for your project.

You can select how many augmented versions of each source image to create. For example, sliding to 3x means each source image yields 3 images: 1 created with only the preprocessing settings you have applied, and 2 that additionally receive random augmentations based on the settings you select.

How Augmentations Are Applied

Augmentations are chained together: each augmented image receives all of the augmentations you selected, with the settings and values for each randomized per image. Any duplicate images that appear during this process are filtered out of the created version.

For example, if you select “flip horizontally” and “salt and pepper noise,” a given image will be randomly flipped horizontally and receive random salt and pepper noise.
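
To make the chaining concrete, here is an illustrative sketch (not Roboflow's internal implementation) of how augmented copies might be produced, with each augmentation's settings re-randomized per image:

```python
import random
from PIL import Image, ImageEnhance, ImageOps

def flip_horizontal(img: Image.Image) -> Image.Image:
    # Each image independently has a 50% chance of being mirrored.
    return ImageOps.mirror(img) if random.random() < 0.5 else img

def jitter_exposure(img: Image.Image, max_percent: float = 25) -> Image.Image:
    # The brightness factor is re-drawn for every image.
    factor = 1 + random.uniform(-max_percent, max_percent) / 100
    return ImageEnhance.Brightness(img).enhance(factor)

def augment(img: Image.Image) -> Image.Image:
    # Augmentations are chained: every output receives all selected steps.
    for step in (flip_horizontal, jitter_exposure):
        img = step(img)
    return img

source = Image.open("example.jpg")  # hypothetical input image
augmented_copies = [augment(source.copy()) for _ in range(2)]
```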

Augmentation Options

Below are the augmentations supported by Roboflow. The parameters you can customize are in bullet points.

Flip

Randomly flip (reflect) an image vertically or horizontally. Annotations are correctly mirrored.

  • Horizontal: Flips the image’s NumPy array in the left/right direction.

  • Vertical: Flips the image’s NumPy array in the up/down direction.
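
A sketch of what a horizontal flip does to the image’s NumPy array, with the bounding box mirrored to match (x coordinates reflected across the image width):

```python
import numpy as np

def flip_horizontal(image: np.ndarray, box: tuple) -> tuple:
    """image: HxWxC array; box: (x_min, y_min, x_max, y_max) in pixels."""
    flipped = image[:, ::-1, :]  # reverse the column (x) axis
    w = image.shape[1]
    x_min, y_min, x_max, y_max = box
    mirrored_box = (w - x_max, y_min, w - x_min, y_max)
    return flipped, mirrored_box

image = np.zeros((100, 200, 3), dtype=np.uint8)
print(flip_horizontal(image, (10, 20, 50, 60))[1])  # (150, 20, 190, 60)
```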

90 Degree Rotations

Randomly rotate an image 90 degrees or 180 degrees.

  • Clockwise: Rotates an image 90 degrees clockwise.

  • Counter Clockwise: Rotates an image 90 degrees counter clockwise.

  • Upside Down: Rotates an image 180 degrees (upside down).
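
Because these are exact 90 degree steps, they can be expressed as lossless array operations with no resampling. An illustrative sketch:

```python
import numpy as np

image = np.zeros((100, 200, 3), dtype=np.uint8)

clockwise = np.rot90(image, k=-1)         # 90 degrees clockwise
counter_clockwise = np.rot90(image, k=1)  # 90 degrees counter clockwise
upside_down = np.rot90(image, k=2)        # 180 degrees
```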

Random Rotation

Randomly rotate an image clockwise or counter clockwise up to the degree amount the user selects.

  • Degrees: Select the highest amount an image will be randomly rotated clockwise or counter clockwise.
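
A minimal sketch of the idea with Pillow, drawing a random angle up to the selected maximum (positive angles rotate counter clockwise in Pillow):

```python
import random
from PIL import Image

def random_rotation(img: Image.Image, max_degrees: float) -> Image.Image:
    angle = random.uniform(-max_degrees, max_degrees)
    # expand=True grows the canvas so the rotated image is not clipped
    return img.rotate(angle, expand=True, fillcolor=(0, 0, 0))

img = Image.new("RGB", (200, 100))
rotated = random_rotation(img, max_degrees=15)
```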

Random Crop

Randomly create a subset of an image. This can be used to improve your model's generalizability.

  • Percent: The percent area of the original image to drop. (e.g. a higher percentage keeps a smaller amount of the original image.)

Note: annotations are affected. At present, our implementation drops any annotations that are completely out of frame and crops any annotations that are partially out of frame to align with the edge of the image. For these kept annotations, we currently keep any amount of the original object detection area. We will soon provide the ability to select the minimum percentage of annotation area to keep -- for example, keeping only annotations that retain at least 80% of their original bounding box area.
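
The annotation handling described above can be sketched as follows (an illustration of the rule, not Roboflow's implementation):

```python
def adjust_boxes(boxes: list, crop: tuple) -> list:
    """boxes: [(x_min, y_min, x_max, y_max)]; crop: window in the same format."""
    cx0, cy0, cx1, cy1 = crop
    kept = []
    for x0, y0, x1, y1 in boxes:
        # Drop annotations completely out of frame.
        if x1 <= cx0 or x0 >= cx1 or y1 <= cy0 or y0 >= cy1:
            continue
        # Clip partially visible annotations to the crop edges, then shift
        # them into the cropped image's coordinate frame.
        kept.append((max(x0, cx0) - cx0, max(y0, cy0) - cy0,
                     min(x1, cx1) - cx0, min(y1, cy1) - cy0))
    return kept

print(adjust_boxes([(10, 10, 50, 50), (300, 300, 340, 340)], (0, 0, 200, 200)))
# [(10, 10, 50, 50)] -- the second box falls fully outside the crop
```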

Random Shear

Randomly distort an image across its horizontal or vertical axis.

  • Horizontal: Select the highest amount an image will be randomly sheared across its x-axis.

  • Vertical: Select the highest amount an image will be randomly sheared across its y-axis.
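
A sketch of a horizontal shear as an affine transform with Pillow (9.1+), with the shear factor drawn up to the selected maximum angle:

```python
import math
import random
from PIL import Image

def random_horizontal_shear(img: Image.Image, max_degrees: float) -> Image.Image:
    factor = math.tan(math.radians(random.uniform(-max_degrees, max_degrees)))
    # The 6-tuple maps each output pixel (x, y) back to the input pixel
    # (a*x + b*y + c, d*x + e*y + f).
    return img.transform(img.size, Image.Transform.AFFINE,
                         (1, factor, 0, 0, 1, 0))

img = Image.new("RGB", (200, 100))
sheared = random_horizontal_shear(img, max_degrees=15)
```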

Exposure

Adjust the gamma exposure of an image to be brighter or darker.

  • Percent: Select the maximum percent by which an image will be randomly brightened or darkened, up to 100 percent brighter (completely white) or 100 percent darker (completely black).
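
An illustrative gamma-style curve that matches these endpoints (Roboflow's exact exposure mapping may differ):

```python
import numpy as np

def adjust_exposure(image: np.ndarray, percent: float) -> np.ndarray:
    """image: uint8 array; percent in [-100, 100] (positive brightens)."""
    if percent >= 100:
        return np.full_like(image, 255)  # completely white
    if percent <= -100:
        return np.zeros_like(image)      # completely black
    # gamma < 1 brightens, gamma > 1 darkens
    gamma = 1 - percent / 100 if percent >= 0 else 1 / (1 + percent / 100)
    normalized = image.astype(np.float32) / 255.0
    return (np.clip(normalized ** gamma, 0, 1) * 255).astype(np.uint8)
```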

Blur

Introduces Gaussian blur to an image.

  • Pixels: Determines the amount of blur applied to an image (i.e. the kernel size of the blurring process; all kernel sizes are odd). 25 pixels is max blur.
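
A sketch of how the pixel setting can map to an odd Gaussian kernel size, here with OpenCV (which requires odd kernels):

```python
import random

import cv2
import numpy as np

def random_blur(image: np.ndarray, max_pixels: int = 25) -> np.ndarray:
    # Draw an odd kernel size in [1, max_pixels]; 1 leaves the image as-is.
    k = random.randint(0, max_pixels // 2) * 2 + 1
    return cv2.GaussianBlur(image, (k, k), 0) if k > 1 else image

image = np.zeros((100, 100, 3), dtype=np.uint8)
blurred = random_blur(image)
```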

Random Noise

Injects random salt and pepper noise into an image.

  • Percent: Selects the percent of an image’s pixels that are affected, up to 25 percent.
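
A minimal salt and pepper sketch: a random subset of pixels, up to the chosen percent, is set to pure white ("salt") or pure black ("pepper"):

```python
import numpy as np

def salt_and_pepper(image: np.ndarray, percent: float) -> np.ndarray:
    noisy = image.copy()
    h, w = image.shape[:2]
    n = int(h * w * percent / 100)  # number of affected pixels
    ys = np.random.randint(0, h, n)
    xs = np.random.randint(0, w, n)
    noisy[ys[: n // 2], xs[: n // 2]] = 255  # salt
    noisy[ys[n // 2 :], xs[n // 2 :]] = 0    # pepper
    return noisy

image = np.full((100, 100, 3), 128, dtype=np.uint8)
noisy = salt_and_pepper(image, percent=5)
```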

Bounding Box Augmentation

Bounding box level augmentation creates new training data by altering only the content of a source image’s bounding boxes. In doing so, developers have greater control over creating training data that is better suited to their problem’s conditions.

A 2019 paper from Google researchers introduced the idea of using bounding box only augmentation to create optimal data for their models. In this paper, researchers showed that bounding box only modifications create systemic improvements, especially for models fit on small datasets.

See our blog post on how bounding box augmentation improves computer vision model fit for more.
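
A toy sketch of the idea: only the pixels inside each annotated box are altered (here, mirrored in place), while the rest of the image is untouched:

```python
import numpy as np

def flip_box_contents(image: np.ndarray, boxes: list) -> np.ndarray:
    """boxes: [(x_min, y_min, x_max, y_max)] in pixel coordinates."""
    out = image.copy()
    for x0, y0, x1, y1 in boxes:
        # Horizontally flip only the region inside the bounding box.
        out[y0:y1, x0:x1] = out[y0:y1, x0:x1][:, ::-1]
    return out

image = np.arange(100 * 200 * 3, dtype=np.uint8).reshape(100, 200, 3)
augmented = flip_box_contents(image, [(10, 20, 50, 60)])
```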

See Also

  • How do image flip augmentations work?

