One of the most common problems in Computer Vision is the lack of images when training ML models. In deep learning, a large amount of data is required for neural networks to learn the relevant characteristics of the inputs and then perform inference correctly; when models are trained on limited samples, they are not able to generalise to unseen data. Even when pre-trained models (transfer learning) are used, the images available for the particular use case are often insufficient and the model does not train correctly.
At Keepler we have faced this challenge in projects involving object detection in images, more specifically in anomaly detection projects. Given this situation, we have looked for methods to generate synthetic images (data augmentation), with the aim of making projects with a reduced image dataset viable. Specifically, we have researched two techniques:
- Generation of images using classical data augmentation procedures: distortions, rotations, colour changes, etc., applied to the original images (see the first sketch after this list).
- Generation of images with GANs (Generative Adversarial Networks); specifically, the use of Cycle GANs to perform a context change (style transfer) on the original images and generate new ones (see the second sketch after this list).
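As an illustration of the first technique, here is a minimal sketch of a classical augmentation pipeline. It uses torchvision as an example library; the specific transforms and parameter values are illustrative assumptions on our part, not necessarily the exact pipeline described in the white paper.

```python
# Minimal sketch of classical data augmentation with torchvision
# (transform choices and parameter values are illustrative assumptions).
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),            # small random rotations
    transforms.RandomHorizontalFlip(p=0.5),           # mirror half of the images
    transforms.ColorJitter(brightness=0.2,
                           contrast=0.2,
                           saturation=0.2),           # colour changes
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1),
                            shear=10),                 # geometric distortions
])

image = Image.open("sample.jpg")  # hypothetical input image
# Each call produces a new, slightly different synthetic variant
synthetic_variants = [augment(image) for _ in range(5)]
```

Because the transforms are applied randomly, each pass over the original dataset yields new variants, which increases variability at essentially no labelling cost.

For the second technique, the sketch below shows only the inference step: applying an already-trained Cycle GAN generator to translate original images into a new context. The generator file, its name and the expected value range are hypothetical assumptions; training the generator itself is outside the scope of this snippet.

```python
import torch

# Hypothetical: `generator_ab.pt` contains a Cycle GAN generator already
# trained to map images from domain A (the original context) to domain B
# (the new context). Path and value range are assumptions for illustration.
generator_ab = torch.load("generator_ab.pt", map_location="cpu")
generator_ab.eval()

def style_transfer(batch: torch.Tensor) -> torch.Tensor:
    """Translate a batch of domain-A images (N, 3, H, W, values in [-1, 1])
    into domain B, producing new synthetic training images."""
    with torch.no_grad():
        return generator_ab(batch)
```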
The generation of images, or any other type of data, is very common in a large number of projects where data is limited. Increasing the variability of the training data allows models to generalize better; it can also reduce the cost of data collection and labelling.
In the following white paper, available for download, we describe in detail the methods used, some simple and some more complex, to produce the synthetic images needed to train computer vision models.
Download this white paper about Data Augmentation for free 👇

Title: How to use Data Augmentation when you have limited data
Authors: Ángela García, Data Scientist at Keepler & Adriana A. Bogdan, Data Scientist at Keepler