Highlights:

  • DeepMind researchers have introduced SynthID, a beta tool that embeds an imperceptible digital “watermark” into images generated by Google’s Imagen model within Vertex AI, the company’s dedicated platform for AI development.
  • Image-generation models like Imagen can produce photorealistic, lifelike images as easily as they produce imaginative, fanciful artwork.

Google DeepMind, Alphabet Inc.’s AI research arm, has partnered with Google Cloud to unveil a watermarking tool that helps identify AI-generated images and distinguish them from conventional artwork or graphics.

Recently, DeepMind researchers introduced SynthID, a beta tool that embeds an imperceptible digital “watermark” into images generated by Google’s Imagen model within Vertex AI, the company’s dedicated platform for AI development. A select group of users can now test the tool to mark their images as AI-generated.

The system relies on two distinct AI models. The first embeds a carefully generated mark that alters the original image imperceptibly to the human eye; the second is trained to “see” that mark and detect it.
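
To make the two-model split concrete, here is a deliberately simplified sketch in Python: a toy “embedder” that adds a low-amplitude pseudorandom pattern, and a toy “detector” that checks for it by correlation. SynthID’s actual models are trained deep networks whose details Google has not published, so everything below (the seed, the strength, the scoring) is illustrative only.

```python
import numpy as np

SEED = 42        # shared secret between embedder and detector (illustrative)
STRENGTH = 2.0   # perturbation amplitude in 0-255 pixel units

def _pattern(shape):
    # Deterministic +/-1 pattern derived from the shared seed.
    return np.random.default_rng(SEED).choice([-1.0, 1.0], size=shape)

def embed(image):
    # "Model 1": add a perturbation far too faint for the eye to notice.
    marked = image.astype(np.float64) + STRENGTH * _pattern(image.shape)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(image):
    # "Model 2": correlate the image against the known pattern.
    # Scores near 1 suggest the watermark is present; near 0, absent.
    centered = image.astype(np.float64) - image.mean()
    return float((centered * _pattern(image.shape)).mean() / STRENGTH)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
watermarked = embed(original)
print("plain image:      ", round(detect(original), 3))     # close to 0
print("watermarked image:", round(detect(watermarked), 3))  # close to 1
```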

In practice, the modification is designed to be resilient: it survives filters like those applied by Instagram or other image-editing software, and it withstands resizing and cropping.
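
Continuing the sketch above, a quick check applies a pointwise “filter” (a contrast and brightness shift, standing in for an Instagram-style edit) and re-runs detection. The toy pattern survives such edits because detection centers the image first; surviving resizing and cropping, as SynthID reportedly does, is precisely what requires the trained-network approach and is beyond a toy like this.

```python
def filtered(image, contrast=1.1, brightness=12):
    # Stand-in for an Instagram-style filter: pointwise contrast/brightness.
    out = image.astype(np.float64) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# The score remains far above the plain-image baseline after filtering.
print("filtered watermarked:", round(detect(filtered(watermarked)), 3))
```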

Users can submit an image to the tool to check whether it originated from Imagen and carries the watermark, and the tool responds with a confidence level. There are three possible outcomes: the watermark was detected and the image was likely produced by AI; no watermark was detected; or a watermark may be present, in which case the image should be treated with caution.
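
In terms of the sketch above, those three outcomes could map onto detector scores roughly as follows; the thresholds are arbitrary illustration values, not anything Google has published.

```python
def verdict(score):
    # Arbitrary illustrative thresholds -- SynthID's real bands are unpublished.
    if score > 0.5:
        return "watermark detected: image likely generated by Imagen"
    if score < 0.25:
        return "no watermark detected"
    return "possible watermark: treat the image with caution"

print(verdict(detect(watermarked)))  # watermark detected
print(verdict(detect(original)))     # no watermark detected
```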

Watermarking is one of several approaches to establishing an image’s origin and authenticity. Another is metadata: supplementary data attached to an image, typically added by camera hardware or editing software. Metadata, however, can be removed or altered, making it an unreliable basis for trusting an image’s authenticity.
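
To see why metadata is such a weak signal, the short Pillow sketch below (“photo.jpg” is a placeholder path) reads an image’s EXIF tags and then shows that a plain re-save discards them entirely.

```python
from PIL import Image  # pip install Pillow

img = Image.open("photo.jpg")  # placeholder path
print(dict(img.getexif()))     # whatever the camera or software wrote

# Re-saving without explicitly passing exif= silently drops the tags.
img.save("stripped.jpg")
print(dict(Image.open("stripped.jpg").getexif()))  # {}
```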

While generative AI has garnered immense popularity for its creative and innovative capabilities, it also harbors potential for misuse and harm. Image-generation models like Imagen can produce photorealistic, lifelike images as easily as they produce imaginative, fanciful artwork.

Fabricated AI-generated images of political figures are proliferating on social media, making authenticity ever harder to verify. In March, an AI-generated image of Pope Francis in a white puffer jacket, created with Midjourney, sowed confusion as it circulated. Though largely innocuous, the incident underscored the potency of such systems.

In the announcement, Google DeepMind researchers Sven Gowal and Pushmeet Kohli wrote, “Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”

According to the researchers, the two models were trained and tested on a wide variety of image types and optimized for a range of scenarios, such as correctly identifying the watermark after an image has been modified, and aligning the mark with the original content so that it is less noticeable.

Arun Chandrasekaran, Distinguished Vice President Analyst at Gartner, told a prominent media outlet, “This is a significant announcement by Google. Clients that use Google’s text to image diffusion model, Imagen, now have a choice of adding watermark. Given the rise of deepfakes and increasing regulations across the globe, watermarking is an important step in combating deepfakes.”

As part of a White House initiative in July, Google was among seven major tech firms that voluntarily committed to AI safety measures. Furthermore, alongside Microsoft Corp., OpenAI LP (developer of ChatGPT), and AI research startup Anthropic, the company co-founded the Frontier Model Forum, a collaborative platform committed to driving secure and ethically responsible progress in AI models. The technology’s introduction also coincides with the European Union’s efforts to formalize the “AI Act,” a legislative framework to ensure safer, regulated use of AI across its member nations.

Chandrasekaran noted that the efficacy of DeepMind’s watermarking technology remains uncertain and will require observation; the researchers themselves cautioned that it is not impervious to every form of image manipulation, making it a “wait and see” scenario. He added, “Also, the watermark is specific to Google’s model and hopefully the technology companies will collaborate on standards that work across AI models.”

The researchers added, “We hope our SynthID technology can work together with a broad range of solutions for creators and users across society, and we’re continuing to evolve SynthID by gathering feedback from users, enhancing its capabilities, and exploring new features.”