AI Image Generators Replicate Training Data, Pose Privacy Concerns

If you’ve been following the news, you’ve heard that AI image generators have captured the public imagination. While concerns about “digital forgery” and copyright infringement have received much attention, given the technology’s ability to mimic artists’ styles and subjects, concerns about privacy are also being raised.

Training images are pulled from publicly available internet sources regardless of whether they are in the public domain, copyrighted, or leaked without consent [1]. And although image properties and text-based descriptions are reduced to statistical weights, a paper from researchers at Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich (Nicholas Carlini et al. [2]) demonstrates that a very small percentage of training images can be “memorized” (a situation where a machine learning model retains the specific examples it was trained on rather than learning only the underlying patterns and features in the data). This means there is a non-zero chance that an AI-generated image is effectively a duplicate of a training image, with copyright-infringement or privacy-invasion implications.

What constitutes legal “forgery” or “copyright infringement” is difficult to ascertain: how much change is enough change? A more immediate concern is these generators’ ability to recreate an image of a person or of personal information closely enough that, despite being lower fidelity (more noise and grain), a human would view it as essentially the same image. Think over-compressed JPEG rather than an artist’s sketch of a subject they know well.
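To make the “essentially the same image” idea concrete, here is a minimal sketch of a near-duplicate check. It uses a simple pixel-space distance and an arbitrary threshold; the function names and the 0.1 cutoff are illustrative assumptions, not the metric used in the cited paper, which relies on a more nuanced patch-based comparison.

```python
# Illustrative sketch only: flag a generated image that is "essentially the
# same" as a candidate training image, even if noisier or more compressed.
# The threshold and helper names are assumptions for demonstration.
import numpy as np
from PIL import Image


def normalized_l2(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square pixel distance between two images, scaled to [0, 1]."""
    a = a.astype(np.float32) / 255.0
    b = b.astype(np.float32) / 255.0
    return float(np.sqrt(np.mean((a - b) ** 2)))


def is_near_duplicate(generated_path: str, training_path: str,
                      threshold: float = 0.1) -> bool:
    """Compare at a common resolution; a small distance suggests the
    generated image reproduces the training image rather than a new scene."""
    size = (256, 256)
    gen = np.asarray(Image.open(generated_path).convert("RGB").resize(size))
    train = np.asarray(Image.open(training_path).convert("RGB").resize(size))
    return normalized_l2(gen, train) < threshold
```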

How closely this aligns with human notions of memory is debated, but such “memorization” can be revealed through “adversarial attacks” whose explicit purpose is to locate and reveal one of these “memorized” images. With an understanding of how the most popular AI image generators work, and with a significant amount of computing resources and time, Carlini et al. were able to build and test small-scale models that mimicked the behavior of those popular generators (while ensuring that all of the training image data was truly public) and demonstrated that approximately 0.03% of image data is “memorized.”
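The black-box version of such an attack boils down to a simple heuristic: generate many images for the same caption and check whether a large group of them are near-identical to one another, which suggests the model is reproducing a single memorized training image rather than sampling diverse outputs. The sketch below illustrates that idea; the `generate(prompt)` callable stands in for any text-to-image API (it is not a real library call), and the sample counts and thresholds are assumptions, not the paper’s exact parameters.

```python
# Rough sketch of the cluster-of-near-duplicates heuristic described above.
# `generate` is a placeholder for a text-to-image model; thresholds are
# illustrative assumptions only.
import numpy as np
from itertools import combinations


def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Simple normalized pixel distance between two same-sized images."""
    return float(np.sqrt(np.mean((a / 255.0 - b / 255.0) ** 2)))


def likely_memorized(prompt: str, generate, n_samples: int = 100,
                     dist_threshold: float = 0.1, cluster_size: int = 10) -> bool:
    """Generate many images for one caption; if many of them nearly duplicate
    each other, the output is likely a memorized training image."""
    samples = [np.asarray(generate(prompt)) for _ in range(n_samples)]

    # Count, for each sample, how many other samples it nearly duplicates.
    close_counts = [0] * n_samples
    for i, j in combinations(range(n_samples), 2):
        if distance(samples[i], samples[j]) < dist_threshold:
            close_counts[i] += 1
            close_counts[j] += 1

    # A tight cluster of near-duplicates is the signal of memorization.
    return max(close_counts) + 1 >= cluster_size
```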

This rate is currently statistically insignificant, and extraction was only accomplished after a significant amount of time and resources were put into the effort: the researchers acted as attackers who knew some of the image captions for the images they sought to reveal, had access to partial copies of those images, and/or had insider access to the generator’s internal code and training data set.

Yet we are still at the very beginning of what AI image generators will eventually become, and if mitigation strategies and safeguards are not put in place now, “memorization” rates could become far higher, posing widespread threats to privacy.

References

[1] “Artist finds private medical record photos in popular AI training data set,” Ars Technica.

[2] N. Carlini et al., “Extracting Training Data from Diffusion Models,” arXiv:2301.13188.