Highlights:

  • Meta says it is developing classifiers that determine whether an image was produced using artificial intelligence (AI) based on its C2PA or IPTC metadata.
  • The company is working on identifying AI-generated content for audio and video files in addition to photos.

Meta Platforms Inc. has revealed plans to launch labels that will indicate whether generative artificial intelligence was used to create images posted to Facebook, Instagram, and Threads.

The company will begin appending the labels to user posts in the coming months. According to Meta, the update is part of a broader initiative to better moderate AI-generated content posted to its platforms. As part of that effort, the company will build automated software tools for detecting such content.

The development work will focus on two open technical standards, IPTC and C2PA. They allow metadata, or contextual information, to be embedded in an image, describing its creation date and other pertinent attributes. That metadata can also indicate whether the image was created with an AI tool.
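As a rough illustration of how such metadata can signal AI provenance, the sketch below checks an image's XMP packet for IPTC's published "Digital Source Type" term for algorithmically generated media. The URI is IPTC's real vocabulary term, but the substring check is a deliberately simplified stand-in for a full XMP parser, and the sample packet is invented for illustration.

```python
# IPTC's NewsCodes term for media created entirely by a generative AI model.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(xmp: str) -> bool:
    """Return True if the XMP packet declares the AI digital source type.

    Simplified sketch: a real implementation would parse the RDF/XML
    structure rather than scan for a substring.
    """
    return AI_SOURCE_TYPE in xmp

# Hypothetical XMP packet of the kind an AI image generator might embed.
sample_xmp = (
    '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
    '<rdf:Description '
    'xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/" '
    f'Iptc4xmpExt:DigitalSourceType="{AI_SOURCE_TYPE}"/>'
    '</rdf:RDF></x:xmpmeta>'
)

print(looks_ai_generated(sample_xmp))  # True for this sample packet
```

In practice the XMP packet would be extracted from the image file itself (for example, from a JPEG's APP1 segment) before being checked.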

Meta says it is developing classifiers that determine whether an image was produced using artificial intelligence (AI) based on its C2PA or IPTC metadata. According to the company, the classifiers will enable it to identify files created by widely used image generators. The aim is to "label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools," wrote Meta President of Global Affairs Nick Clegg.
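The labeling step described above can be imagined as a mapping from provenance metadata to a vendor label. The sketch below is a hypothetical illustration, not Meta's classifier: the field name (`claim_generator`, loosely modeled on C2PA's claim-generator field) and the vendor keyword table are assumptions made for the example.

```python
# Hypothetical mapping from a C2PA-style "claim generator" string to a
# provenance label. The keyword table below is illustrative only.
KNOWN_GENERATORS = {
    "openai": "OpenAI",
    "adobe": "Adobe",
    "midjourney": "Midjourney",
    "google": "Google",
}

def label_from_claim_generator(claim_generator):
    """Return a label string for a recognized AI generator, else None."""
    if not claim_generator:
        return None
    lowered = claim_generator.lower()
    for keyword, vendor in KNOWN_GENERATORS.items():
        if keyword in lowered:
            return f"AI-generated ({vendor})"
    return None

print(label_from_claim_generator("Adobe Firefly 2.0"))  # AI-generated (Adobe)
print(label_from_claim_generator("iPhone camera"))      # None
```

A production classifier would combine several such signals (C2PA manifests, IPTC fields, invisible watermarks) rather than relying on a single string match.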

Some AI image generators, such as Meta's recently launched Imagine with Meta AI service, embed both invisible watermarks and metadata in the files they produce. The company stated that it is developing methods to make these watermarks harder to remove or alter. Furthermore, Meta is developing software that can identify AI-generated content even in the absence of invisible identifiers.

The company is working on identifying AI-generated audio and video files in addition to photos. According to Meta, AI developers' audio and video models have not yet been set up to embed such markers in their output. To work around that limitation, users will be required to disclose when they upload such content to Meta's platforms.

Clegg wrote, “We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label.”

Other tech giants are also building tools to help identify AI-generated content. SynthID, a machine learning technique unveiled by Google LLC's DeepMind unit in August of last year, can embed an invisible watermark in AI-generated images. The watermark remains intact even if the image is edited or compressed.