ITIF - The Information Technology and Innovation Foundation

15/08/2024 | News release

Watermarking in Images Will Not Solve AI-Generated Content Abuse

Advances in generative AI have made it easy to create digital images that closely resemble those made by humans, such as photos, illustrations, and paintings. While these technological advances offer significant creative potential, they also present risks of misuse. Activities like spreading misinformation, creating fake explicit images of individuals, and producing images that infringe copyright were possible before but are now even easier with generative AI. In response to these concerns, some policymakers have mandated, or proposed mandating, labeling all AI-generated content with a watermark: a distinct and unique signal embedded in the AI-generated content. However, watermarking images faces significant technical limitations, and it is not a foolproof solution for stopping misinformation, deepfakes, or copyright violations.

Policymakers are turning to watermarking as a quick technological fix. For instance, China has already banned AI-generated media without watermarks. In Europe, Article 50(2) of the EU AI Act requires providers of AI systems to enable the tracking and detection of AI-generated content. In the United States, a group of senators introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), which would direct the National Institute of Standards and Technology "to create standards and guidelines that help prove the origin of content and detect synthetic content, likely through watermarking."

Watermarking in images can be done in two ways, visible and invisible, and both are fraught with significant challenges and limitations. Visible watermarks are usually overlaid on the image and can take the form of brand logos, artists' signatures, or other visual markers. Humans can easily spot them with the naked eye, but they have significant downsides. Unobtrusive watermarks, such as a small logo or other marker placed in the corner of an image, can be easily removed by cropping, as the sketch below illustrates. Making a visible watermark harder to remove, for instance by tiling a repeating logo across the image, degrades the image's overall appearance.
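As a concrete illustration of the first approach, the following sketch overlays a semi-transparent logo in the corner of a photo using the Pillow imaging library. The file names, the 15 percent scaling, and the opacity value are arbitrary choices for this example, not part of any standard.

```python
from PIL import Image

def add_visible_watermark(photo_path, logo_path, out_path, opacity=128):
    """Paste a semi-transparent logo into the bottom-right corner of a photo."""
    photo = Image.open(photo_path).convert("RGBA")
    logo = Image.open(logo_path).convert("RGBA")

    # Scale the logo to roughly 15% of the photo's width.
    scale = photo.width * 0.15 / logo.width
    logo = logo.resize((int(logo.width * scale), int(logo.height * scale)))

    # Reduce the logo's alpha channel so the mark is unobtrusive.
    alpha = logo.getchannel("A").point(lambda a: a * opacity // 255)
    logo.putalpha(alpha)

    # Place the logo in the corner; cropping that corner removes it entirely.
    margin = 10
    position = (photo.width - logo.width - margin,
                photo.height - logo.height - margin)
    photo.alpha_composite(logo, dest=position)
    photo.convert("RGB").save(out_path)
```

Because the mark occupies a single known corner, cropping a few dozen pixels off that edge removes it completely, which is exactly the weakness described above.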

Invisible watermarks embed hidden data in the pixels of an image. Notably, they do not store information in the image file's metadata, but in the image itself. Since invisible watermarks alter the appearance of an AI-generated image imperceptibly, detecting them requires software-based tools. AI systems can add these watermarks during or after image creation. Although some invisible watermarks can survive basic changes like cropping or rotating, none has been able to resist all persistent removal attempts. Whenever an image watermarking technique is developed to withstand certain attacks, researchers eventually find ways to bypass it. Additionally, since many watermarking methods are proprietary and different techniques exist, there is no reliable way to detect all types of invisible watermarks.
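To make the idea concrete, the sketch below hides a short message in the least significant bit of each pixel's blue channel, the simplest form of invisible watermarking. This is an illustration only, not how production systems such as Google's SynthID encode their signals, and the function names are invented for this example. Its fragility is the point: re-saving the output as a JPEG or rescaling it destroys the hidden message.

```python
import numpy as np
from PIL import Image

def embed_lsb(image_path, message, out_path):
    """Hide a text message in the least significant bits of the blue channel."""
    img = np.array(Image.open(image_path).convert("RGB"))
    # Encode the message as bits, prefixed by a 4-byte length header for decoding.
    data = message.encode("utf-8")
    header = len(data).to_bytes(4, "big")
    bits = np.unpackbits(np.frombuffer(header + data, dtype=np.uint8))
    flat = img[:, :, 2].flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for message")
    # Overwrite the lowest bit of each blue value; the change is imperceptible.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    img[:, :, 2] = flat.reshape(img[:, :, 2].shape)
    # Must save losslessly: JPEG compression alone would wipe out the watermark.
    Image.fromarray(img).save(out_path, format="PNG")

def extract_lsb(image_path):
    """Recover the hidden message from the blue channel's low bits."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = img[:, :, 2].flatten() & 1
    length = int.from_bytes(np.packbits(bits[:32]).tobytes(), "big")
    payload = np.packbits(bits[32:32 + length * 8]).tobytes()
    return payload.decode("utf-8")
```

Robust schemes instead spread a learned signal across many pixels so that it survives compression and moderate editing, but as noted above, every published technique has eventually been defeated by determined removal attacks.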

Even if watermarking images worked reliably, it would not solve the problems associated with misuse of AI-generated content. For instance, AI-generated nude images can still cause harm and distress even if they are clearly labeled as fake. Similarly, simply marking an image as AI-generated will not counteract confirmation bias, where people believe misinformation that aligns with their existing views. Furthermore, over-reliance on watermarks as a signal of authenticity could cause people to overlook other forms of misinformation, such as manipulated real images. Finally, in cases where AI-generated images infringe copyrighted material, labeling the content as "AI-generated" does not negate the unlawful activity.

Given these significant limitations, even a technical breakthrough in watermarking would not address the full scope of concerns about AI misuse. Instead, policymakers should focus on enhancing media literacy, enforcing existing intellectual property rights, and exploring methods that allow people to trace and verify the history and origin of digital content, whether AI-generated or not, as sketched below.
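Content provenance standards such as C2PA define full manifest formats for that kind of verification; their core cryptographic primitive, a signed hash of the content, can be sketched as follows. The file name and key handling are placeholders assumed for illustration, not a description of any particular standard's implementation.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A creator (a camera, an editor, or an AI provider) signs a hash of the
# file at creation time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("photo.png", "rb") as f:
    digest = hashlib.sha256(f.read()).digest()
signature = private_key.sign(digest)

# Anyone with the public key can later confirm the file is unmodified since
# signing; verify() raises InvalidSignature if even one byte has changed.
with open("photo.png", "rb") as f:
    public_key.verify(signature, hashlib.sha256(f.read()).digest())
```

In a real deployment, the signature travels with the file and the signer's public key is vouched for by a certificate authority, so verification establishes who produced the content and that it has not been altered, regardless of whether it was AI-generated.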

Image Credit: Shutterstock / whiteMocca