
A Future Method for Data Augmentation? AI-Powered Inpainting

Published at 03:30 AM

The State of Data Annotation for Computer Vision

Real data continues to be hard to get. Here is the typical annotation pipeline for a computer vision project. First, researchers look to see whether existing labeled datasets can be applied to their use case. Big names here include Common Objects in Context (MS-COCO), ImageNet, and Cityscapes. If there is sufficient data for the use case, researchers will use models pre-trained on these datasets for transfer learning. This approach has the massive benefit of shrinking, or even eliminating, the project's annotation pipeline. For more niche projects, this is rarely the case.

For example, in the first research project of my PhD, I need to develop an object detection model for police vehicles in a specific area of the United States. A general detector for the term “police vehicle” isn't feasible right now, since vehicle colors, makes, and other characteristics vary from area to area. So, even if labeled police vehicle data were publicly available (which, to my best knowledge, it is not), I would likely have trouble utilizing transfer learning.

So, enter the second, less glamorous approach: acquiring human-labeled data. If you are particularly ambitious (or have lots of time on your hands), you can do this yourself. However, for most research projects, it is more efficient to use project funds and outsource this work to skilled labelers on platforms like Amazon Mechanical Turk or Scale AI. Most modern models need at least 1,500 images or so to learn a custom object well. Once we have a sufficient dataset, we can train our own model specific to our use case.

As a project scales, so does its annotation pipeline. Annotation can end up costing either thousands of dollars or hundreds of hours, neither of which is particularly accessible. So, new innovation targets either making labeled data easier to get or making existing data go further.

Data Augmentation

Data augmentation is essentially the “making existing data go further” front of current research. Most popular object detection models leverage data augmentation in some way. For example, here are some of the data augmentations that the YOLO (You Only Look Once) real-time detection model incorporates:

[Figure: YOLO Data Augmentations]

In YOLO, data augmentation is controlled simply by adjusting hyperparameters. No manual coding or development is required to enable a host of augmentation types, and these techniques offer a proven performance boost. Data augmentation can make a smaller number of images go further during training, and it is still very much a growing field.
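To make that concrete, here is a rough sketch of what those hyperparameters look like. The keys mirror the hyperparameter YAML files that ship with YOLOv5; the exact names and default values vary between YOLO versions, so treat the numbers below as illustrative rather than authoritative.

```python
# Illustrative augmentation hyperparameters, modeled on YOLOv5's hyp.*.yaml files.
# Each entry is a probability or a magnitude; setting a value to 0 disables that
# augmentation. Exact keys and defaults differ between YOLO versions.
yolo_augmentation_hyp = {
    "hsv_h": 0.015,      # hue jitter (fraction)
    "hsv_s": 0.7,        # saturation jitter (fraction)
    "hsv_v": 0.4,        # value/brightness jitter (fraction)
    "degrees": 0.0,      # random rotation (+/- degrees)
    "translate": 0.1,    # random translation (fraction of image size)
    "scale": 0.5,        # random scale gain
    "shear": 0.0,        # random shear (+/- degrees)
    "perspective": 0.0,  # perspective warp
    "flipud": 0.0,       # probability of a vertical flip
    "fliplr": 0.5,       # probability of a horizontal flip
    "mosaic": 1.0,       # probability of mosaic (4-image stitch)
    "mixup": 0.0,        # probability of mixup blending
}
```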

A New Approach: AI-Powered Inpainting

With the advent of image synthesis models like OpenAI's DALL·E 2 and Stability AI's Stable Diffusion, I see a potential new method for data augmentation: AI-powered inpainting. This type of context-aware editing could provide synthetic data of a quality previously unheard of. For example, one could take a manually-labeled image of a Chevy Camaro coupe on the highway and replace it with a Honda Accord sedan. Of course, this type of augmentation has restrictions: the generated object has to fit the scene and occupy roughly the same footprint as the object it replaces, so that the existing label stays valid.
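A nice property of this idea is that the existing annotation can drive the edit itself: the labeled bounding box tells the inpainting model where to paint, and because the new vehicle lands in the same spot, the label can be reused as-is. Below is a minimal sketch of that step; `bbox_to_mask` is a hypothetical helper of my own naming, taking a YOLO-style normalized box.

```python
# Hypothetical helper: turn an existing YOLO-format bounding box into a binary
# mask telling the inpainting model where to paint the replacement vehicle.
# Pixels outside the box are preserved, so the original label still applies.
from PIL import Image, ImageDraw

def bbox_to_mask(image_size, yolo_bbox):
    """image_size: (width, height); yolo_bbox: (x_center, y_center, w, h), normalized 0-1."""
    width, height = image_size
    xc, yc, w, h = yolo_bbox
    left = int((xc - w / 2) * width)
    top = int((yc - h / 2) * height)
    right = int((xc + w / 2) * width)
    bottom = int((yc + h / 2) * height)

    mask = Image.new("L", image_size, 0)  # black = keep the original pixels
    ImageDraw.Draw(mask).rectangle([left, top, right, bottom], fill=255)  # white = repaint
    return mask

# Example: a box roughly centered in a 1280x720 frame.
# mask = bbox_to_mask((1280, 720), (0.45, 0.60, 0.30, 0.25))
```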

Proof-Of-Concept

To see whether AI-powered inpainting was even feasible right now, I set out to generate some inpainted training images using Stable Diffusion version 2. I used Stability AI's inpainting version of the model, found here. My goal was to do exactly what I described above: painting a Honda Accord over a Chevy Camaro. I wrote and debugged code in a Jupyter Notebook until I achieved my goal! Here are the results. The initial photo is on the left; the inpainted image is on the right.

[Left: Chevy Camaro - original photo | Right: Honda Accord - inpainted]
Inpainting - “black, new honda accord driving on the road, high resolution”
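Roughly, the inpainting step with Hugging Face's diffusers library looks like the sketch below. The file paths and generation settings are illustrative, not the exact ones from my notebook, but the model ID and prompt are the ones used for the image above.

```python
# Sketch of Stable Diffusion 2 inpainting with Hugging Face diffusers.
# File paths, image size, and generation settings are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")  # a recent NVIDIA GPU keeps this in the tens-of-seconds range

# The original photo and a mask marking the Camaro (white = repaint, black = keep).
init_image = Image.open("camaro.jpg").convert("RGB").resize((512, 512))
mask_image = Image.open("camaro_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="black, new honda accord driving on the road, high resolution",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]

result.save("accord_inpainted.png")
```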

Pretty realistic! Upon close inspection, the inpainted image isn't quite photorealistic, but I wonder whether even in this state it could be useful to a computer vision model. As I gain more bandwidth next semester, I might explore this idea further.

Discussion

I think this approach could be useful, especially as Stable Diffusion and other generative models continue to become more robust, fast, and realistic. I had the great luxury of using one of my advisor's top-of-the-line RTX A6000 GPUs for this endeavor, and images still took a good 10-15 seconds to generate. Even so, I'm excited to see how these models continue to change the landscape of machine learning, especially as quality labeled data remains elusive and expensive. To my mind, this approach essentially leverages transfer learning to insert realistic representations of an object into a setting where that object usually occurs. There is plenty of future work to explore here.

I hope this code and article are useful, and feel free to use them as a baseline for your own projects and explorations! Until next time.

Downloadables