Adobe envisions a web littered with photos and videos labelled with information about where they came from. The company’s primary goal is to reduce the spread of visual misinformation, but the system could also benefit content creators who want to keep their names associated with their work.
Adobe’s Content Authenticity Initiative (CAI) project, first announced in 2019, has since released a whitepaper on a technology to do just that, integrated the system into its own software, and partnered with newsrooms and hardware makers to help universalize its vision.
In the same way that EXIF data stores information about aperture and shutter speed, the new standard records information about a file's origin, such as how it was created and edited. And if the company's vision comes true, that metadata, which Adobe refers to as "content credentials," will be widely viewable across social media platforms, image editors, and search engines.
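The EXIF half of that analogy is easy to see in code. The sketch below uses Pillow (a real Python imaging library) to attach and read back creator and tool metadata via standard EXIF tags; the tag choices and function names are illustrative, and real content credentials go further by cryptographically signing a full edit history rather than storing plain key-value pairs.

```python
from PIL import Image

# Standard EXIF/TIFF tag IDs (from the EXIF specification)
TAG_ARTIST = 315    # who created the image
TAG_SOFTWARE = 305  # which tool produced it

def write_provenance(src_path: str, dst_path: str,
                     artist: str, software: str) -> None:
    """Attach simple provenance metadata to an image file."""
    img = Image.open(src_path)
    exif = img.getexif()
    exif[TAG_ARTIST] = artist
    exif[TAG_SOFTWARE] = software
    img.save(dst_path, exif=exif)

def read_provenance(path: str) -> dict:
    """Read the same metadata back out of the saved file."""
    exif = Image.open(path).getexif()
    return {
        "artist": exif.get(TAG_ARTIST),
        "software": exif.get(TAG_SOFTWARE),
    }
```

Unlike these plain EXIF fields, which any editor can silently rewrite, a C2PA-style credential is designed to make tampering detectable.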
The Coalition for Content Provenance and Authenticity (C2PA) is the result of a collaboration between Adobe's CAI and partners such as Microsoft, Sony, Intel, Twitter, and the BBC. The Wall Street Journal, Nikon, and the Associated Press have recently joined Adobe's pledge to make content authentication more widely available.
The CAI's primary goal is to combat visual misinformation on the internet, such as old images recirculated to distort the war in Ukraine or the infamous Nancy Pelosi "cheapfake." However, a digital chain of custody could also benefit content creators who have had their work stolen or sold without permission, a problem that has plagued visual artists for years and is now causing trouble in NFT markets.
According to Andy Parsons, who leads the CAI at Adobe, the initiative is also attracting a surprising amount of interest from companies that create synthetic images and videos. By embedding origin metadata into the kind of AI creations we're seeing from models like DALL-E, these companies can ensure that generative images aren't easily mistaken for the real thing.