Image augmentation can improve your model's ability to generalize by increasing the diversity of training examples it sees.
In Roboflow, select how many augmented versions of each image to generate. For example, sliding to 3 means each of your images will receive 3 random augmentations based on the settings you select.
In Roboflow, augmentations are chained together. For example, if you select “flip horizontally” and “salt and pepper noise,” a given image will randomly be reflected as a horizontal flip and receive random salt and pepper noise.
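Chaining simply means each selected augmentation is applied in sequence to the same image. A minimal NumPy sketch of the idea (the function names and noise implementation here are illustrative, not Roboflow's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_horizontal(img):
    # Reflect the image left/right.
    return np.fliplr(img)

def salt_and_pepper(img, percent=0.05):
    # Minimal stand-in: set a random fraction of pixels to pure black or white.
    out = img.copy()
    mask = rng.random(out.shape) < percent
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out

def chain(img, ops):
    # Apply each augmentation in order; the output of one feeds the next.
    for op in ops:
        img = op(img)
    return img

augmented = chain(np.full((8, 8), 128, dtype=np.uint8),
                  [flip_horizontal, salt_and_pepper])
```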
Doing your augmentations through Roboflow ("offline augmentation") rather than at the time of training has a few key benefits.
Model reproducibility is increased. With Roboflow, you have a record of how each image was augmented. For example, you may find your model performs better on bright images than dark images, suggesting you should collect more low-light training data.
Training time is decreased. Augmentations are CPU-constrained operations. When you’re training on your GPU and conducting augmentations on-the-fly, your GPU is often waiting for your CPU to provide augmented data at each epoch. That adds up!
Training costs are decreased. Because augmentations are CPU-constrained operations, your expensive, rented GPU is often waiting to be fed images for training. That’s wasted dollars.
Randomly flip (reflect) an image vertically or horizontally. Annotations are correctly mirrored. How does this work?
Horizontal: Flip the image’s NumPy array in the left/right direction.
Vertical: Flip the image’s NumPy array in the up/down direction.
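A sketch of both flips, including the annotation mirroring, assuming boxes in `[x_min, y_min, x_max, y_max]` pixel coordinates (the helper name is ours, not Roboflow's API):

```python
import numpy as np

def flip_image_and_boxes(img, boxes, direction="horizontal"):
    """Flip an image and mirror its [x_min, y_min, x_max, y_max] boxes."""
    h, w = img.shape[:2]
    boxes = np.asarray(boxes, dtype=float).copy()
    if direction == "horizontal":
        flipped = np.fliplr(img)                  # left/right flip
        boxes[:, [0, 2]] = w - boxes[:, [2, 0]]   # mirror x, keep x_min < x_max
    else:
        flipped = np.flipud(img)                  # up/down flip
        boxes[:, [1, 3]] = h - boxes[:, [3, 1]]   # mirror y, keep y_min < y_max
    return flipped, boxes
```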
Randomly rotate an image 90 degrees or 180 degrees.
Clockwise: Rotates an image 90 degrees clockwise.
Counter Clockwise: Rotates an image 90 degrees counter clockwise.
Upside Down: Rotates an image 180 degrees (upside down).
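All three options are exact array rotations, which NumPy provides directly via `np.rot90` (note its positive `k` rotates counter clockwise):

```python
import numpy as np

def rotate90(img, direction="cw"):
    # np.rot90 rotates counter clockwise for positive k.
    if direction == "cw":
        return np.rot90(img, k=-1)   # 90 degrees clockwise
    if direction == "ccw":
        return np.rot90(img, k=1)    # 90 degrees counter clockwise
    return np.rot90(img, k=2)        # "upside down": 180 degrees
```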
Randomly rotate an image clockwise or counter clockwise, up to the number of degrees the user selects. Learn when this is recommended.
Degrees: Select the highest amount an image will be randomly rotated clockwise or counter clockwise.
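A sketch of the sampling step using SciPy for the interpolation (an assumption on our part; Roboflow's implementation may differ):

```python
import numpy as np
from scipy import ndimage

def random_rotate(img, max_degrees=15, rng=None):
    # Draw an angle uniformly in [-max_degrees, +max_degrees];
    # negative = clockwise, positive = counter clockwise.
    rng = rng or np.random.default_rng()
    angle = rng.uniform(-max_degrees, max_degrees)
    # reshape=False keeps the output the same size as the input.
    return ndimage.rotate(img, angle, reshape=False, mode="nearest")
```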
Randomly create a subset of an image. This can be used to improve your model's generalizability!
Percent: The percentage of the original image area to crop away (a higher percentage keeps a smaller amount of the original image).
Note: annotations are affected. At present, our implementation drops any annotations that are completely out of frame, and clips any annotations that are partially out of frame to the edge of the image. For these kept annotations, we currently keep any amount of the original object detection area. We will soon let you select the minimum percentage of annotation area to maintain -- for example, keeping only annotations that retain at least 80% of their original bounding box area -- that will be supported.
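The crop-and-clip behavior described above can be sketched as follows (the helper name, `keep_percent` parameter, and box format are our assumptions for illustration):

```python
import numpy as np

def random_crop(img, boxes, keep_percent=0.81, rng=None):
    """Crop a random window covering ~keep_percent of the image area;
    clip boxes to the window and drop boxes fully out of frame."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    scale = keep_percent ** 0.5            # scale each side so area ~= keep_percent
    ch, cw = int(h * scale), int(w * scale)
    y0 = rng.integers(0, h - ch + 1)       # random top-left corner
    x0 = rng.integers(0, w - cw + 1)
    crop = img[y0:y0 + ch, x0:x0 + cw]

    kept = []
    for x_min, y_min, x_max, y_max in boxes:
        # Shift into crop coordinates and clip to the crop edges.
        nx0 = np.clip(x_min - x0, 0, cw)
        ny0 = np.clip(y_min - y0, 0, ch)
        nx1 = np.clip(x_max - x0, 0, cw)
        ny1 = np.clip(y_max - y0, 0, ch)
        if nx1 > nx0 and ny1 > ny0:        # drop boxes with no area left in frame
            kept.append([nx0, ny0, nx1, ny1])
    return crop, kept
```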
Randomly distort an image across its horizontal or vertical axis. Why does this matter?
Horizontal: Select the highest amount an image will be randomly sheared across its x-axis.
Vertical: Select the highest amount an image will be randomly sheared across its y-axis.
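A shear is an affine warp that shifts pixels along one axis in proportion to their position on the other. A minimal SciPy sketch (our assumed implementation, not Roboflow's):

```python
import numpy as np
from scipy import ndimage

def random_shear(img, max_shear_deg=15, axis="horizontal", rng=None):
    # Sample a shear angle, build the 2x2 affine matrix, and warp the image.
    rng = rng or np.random.default_rng()
    s = np.tan(np.radians(rng.uniform(-max_shear_deg, max_shear_deg)))
    if axis == "horizontal":
        # Sample input columns shifted in proportion to the row: x-axis shear.
        matrix = np.array([[1.0, 0.0], [s, 1.0]])
    else:
        # Sample input rows shifted in proportion to the column: y-axis shear.
        matrix = np.array([[1.0, s], [0.0, 1.0]])
    return ndimage.affine_transform(img, matrix, mode="nearest")
```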
Adjust the gamma exposure of an image to be brighter or darker.
Percent: Select the maximum percentage by which an image will be randomly brightened or darkened, up to 100 percent bright (completely white) or 100 percent dark (completely black).
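One simple way to realize these endpoints is a linear blend toward white or black (a sketch of ours; a gamma-based variant would instead raise normalized pixel values to an exponent):

```python
import numpy as np

def apply_brightness(img, p):
    """p in [-1, 1]: +1 -> completely white, -1 -> completely black, 0 -> unchanged."""
    img = img.astype(float)
    out = img + (255.0 - img) * p if p >= 0 else img * (1.0 + p)
    return np.clip(out, 0, 255).astype(np.uint8)

def random_brightness(img, max_percent=25, rng=None):
    # Draw a percent in [-max_percent, +max_percent] and apply it.
    rng = rng or np.random.default_rng()
    return apply_brightness(img, rng.uniform(-max_percent, max_percent) / 100.0)
```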
Introduces Gaussian blur to an image. We walk through the details of Gaussian blur here.
Pixels: Determines the amount of blur applied to an image (i.e. the kernel size of the blurring process; all kernel sizes are odd). A 25-pixel kernel is the maximum blur.
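Gaussian blur convolves the image with a normalized Gaussian kernel of the chosen (odd) size. A sketch, assuming grayscale input and a sigma derived from the kernel size:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma=None):
    # Build a normalized (size x size) Gaussian kernel; size must be odd.
    assert size % 2 == 1, "kernel size must be odd"
    sigma = sigma or size / 6.0
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()          # normalize so overall brightness is preserved

def gaussian_blur(img, kernel_size=5):
    k = gaussian_kernel(kernel_size)
    # "symm" boundary avoids darkening at the image edges.
    return convolve2d(img.astype(float), k, mode="same", boundary="symm")
```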
Injects random salt and pepper noise to an image. You can find details here.
Percent: Selects the percent of an image’s pixels that are affected, up to 25 percent.
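Salt and pepper noise picks a random subset of pixels and sets each one to pure white (salt) or pure black (pepper). A sketch, with the percent parameter capped as described (our implementation, for illustration):

```python
import numpy as np

def salt_and_pepper(img, percent=10, rng=None):
    """Set `percent`% of pixels (max 25) to pure black or pure white."""
    percent = min(percent, 25)
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(out.shape[:2]) < percent / 100.0
    noise = rng.choice([0, 255], size=int(mask.sum())).astype(out.dtype)
    if out.ndim == 3:
        out[mask] = noise[:, None]  # the whole pixel goes black or white
    else:
        out[mask] = noise
    return out
```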