Abstract
The following pages explore the use of generative models for realistic image anonymization. In summary, this thesis addresses two primary objectives: first, to develop generative models for synthesizing human figures for the purpose of anonymization; second, to evaluate the impact of anonymization on the development of computer vision algorithms. The thesis culminates in four key contributions. First, it introduces Deep Privacy, an open-source framework for realistic anonymization of human faces and bodies. Deep Privacy is the first framework to effectively handle the challenges of in-the-wild image anonymization, such as overlapping objects, partial bodies, and extreme poses. Second, a variety of Generative Adversarial Networks (GANs) are proposed for synthesizing realistic human figures. To the best of our knowledge, the proposed GANs are the first to effectively synthesize human figures in the wild. The third contribution comprises two open-source datasets, namely Flickr Diverse Faces (FDF) and Flickr Diverse Humans (FDH). Unlike previous datasets, FDF and FDH are large-scale, diverse datasets of unfiltered images that capture the complexities of realistic image anonymization. Finally, the thesis presents an empirical evaluation of Deep Privacy and compares it to traditional anonymization. Specifically, the impact of anonymization is evaluated for training computer vision models, with a focus on autonomous vehicle settings. This thesis demonstrates that realistic anonymization is a superior alternative to traditional methods and a promising approach to replacing privacy-sensitive data with artificial data. We are confident that our open-source framework and datasets will be highly useful for practitioners and researchers seeking to anonymize their visual data.