Adobe Launches Open Source Toolkit To Contain Visual Misinformation

Adobe envisions a web littered with photos and videos labelled with information about where they came from. The company’s primary goal is to reduce the spread of visual misinformation, but the system could also benefit content creators who want to keep their names associated with their work.

Adobe’s Content Authenticity Initiative (CAI), first announced in 2019, has since released a whitepaper describing just such a technology, integrated the system into Adobe’s own software, and partnered with newsrooms and hardware makers to help universalize its vision.

The company is now announcing the release of a three-part open source toolkit to get the technology into the hands of developers and out into the wild. The toolkit includes a JavaScript SDK for displaying content credentials in the browser, a command line utility, and a Rust SDK for building desktop apps, mobile apps, and other experiences that create, view, and verify embedded content credentials.
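For a sense of what the JavaScript SDK looks like in practice, here is a rough sketch of reading credentials in the browser with the `c2pa` npm package. The `createC2pa` entry point and the wasm/worker asset paths follow the package’s documented bundler setup, but treat the exact names and paths as assumptions to verify against the current release.

```typescript
// Sketch: reading content credentials in the browser with the c2pa npm
// package. The asset imports use bundler-style "?url" paths as shown in
// the package docs; verify both paths and API names against the release
// you install.
import { createC2pa } from 'c2pa';
import wasmSrc from 'c2pa/dist/assets/wasm/toolkit_bg.wasm?url';
import workerSrc from 'c2pa/dist/c2pa.worker.min.js?url';

async function inspectImage(url: string): Promise<void> {
  // Initialize the SDK once; verification runs in a web worker.
  const c2pa = await createC2pa({ wasmSrc, workerSrc });

  // Fetch the asset and parse any embedded manifest store.
  const { manifestStore } = await c2pa.read(url);
  if (!manifestStore) {
    console.log('No content credentials found.');
    return;
  }

  // The active manifest describes the most recent signed claim.
  const manifest = manifestStore.activeManifest;
  console.log('Claim generator:', manifest.claimGenerator);
  console.log('Signed by:', manifest.signatureInfo?.issuer);
}

inspectImage('https://example.com/photo.jpg').catch(console.error);
```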

In the same way that EXIF data stores information about aperture and shutter speed, the new standard records information about a file’s origin, such as how it was created and edited. And if the company’s shared vision comes true, that metadata, which Adobe refers to as “content credentials,” will be widely viewable across social media platforms, image search services, image editors, and search engines.
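Conceptually, each credential is a signed manifest that bundles a record of the tool that made the claim, the edit actions taken, and any source assets involved. The TypeScript interface below is an illustrative simplification of that structure, not the actual C2PA schema; the field names are invented for clarity.

```typescript
// Illustrative simplification of what a content credential records; the
// real schema is defined by the C2PA specification, not by these names.
interface ContentCredentialSketch {
  claimGenerator: string;   // tool that produced the claim, e.g. an editor
  actions: Array<{
    action: string;         // e.g. 'c2pa.created' or 'c2pa.edited'
    when?: string;          // ISO 8601 timestamp, if recorded
  }>;
  ingredients: string[];    // source assets this file was derived from
  signature: string;        // cryptographic signature binding it all together
}
```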

C2PA, the Coalition for Content Provenance and Authenticity, is the result of a collaboration between Adobe’s CAI and partners such as Microsoft, Sony, Intel, Twitter, and the BBC. The Wall Street Journal, Nikon, and the Associated Press have recently joined Adobe’s pledge to make content authentication more widely available.

With the new tools, a social media platform could use Adobe’s JavaScript SDK to quickly have all of its images and videos display content credentials, which appear as an icon in the upper-right corner when a viewer mouses over the media. Instead of requiring a dedicated team and a larger software buildout, that implementation could be completed in a few weeks by a couple of developers.
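As a sketch of that display path, the snippet below checks each tagged image for credentials and attaches a simple corner badge. The overlay code is hand-rolled purely for illustration (as is the `data-check-credentials` attribute); the SDK ships prebuilt UI components that a real platform would more likely use.

```typescript
// Sketch of the display side: scan tagged images for credentials and, if
// present, attach a small badge in the corner. The badge styling and the
// data-check-credentials attribute are invented for this illustration.
async function badgeImages(
  c2pa: { read(src: string): Promise<{ manifestStore: unknown }> },
): Promise<void> {
  const images = document.querySelectorAll<HTMLImageElement>(
    'img[data-check-credentials]',
  );
  for (const img of images) {
    const { manifestStore } = await c2pa.read(img.src);
    if (!manifestStore) continue; // no credentials embedded in this asset

    // Wrap the image so the badge can be positioned in its corner.
    const wrapper = document.createElement('span');
    wrapper.style.position = 'relative';
    wrapper.style.display = 'inline-block';
    img.replaceWith(wrapper);
    wrapper.append(img);

    const badge = document.createElement('span');
    badge.textContent = 'CR';
    badge.title = 'Content credentials available';
    badge.style.cssText =
      'position:absolute;top:4px;right:4px;padding:2px 4px;' +
      'background:rgba(0,0,0,.6);color:#fff;font-size:11px;border-radius:3px';
    wrapper.append(badge);
  }
}
```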

The CAI’s primary goal is to combat visual misinformation on the internet, such as old images recirculated to distort the war in Ukraine or the infamous Nancy Pelosi “cheapfake.” However, a digital chain of custody could also benefit content creators who have had their work stolen or resold, a problem that has plagued visual artists for years and is now causing problems in NFT markets.

According to Andy Parsons, who leads the CAI at Adobe, the initiative is also attracting a surprising amount of interest from companies that create synthetic images and videos. By embedding origin metadata into the kind of AI creations we’re seeing from models like DALL-E, companies can ensure that generative images aren’t easily mistaken for the real thing.
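A generator could attach that label at export time. The sketch below is hypothetical wiring for such a service: `signAndEmbedManifest` is a stand-in for whatever signing entry point a C2PA SDK provides, not a real exported function, and the claim generator name is invented. The IPTC “trained algorithmic media” source type it records is the designation the C2PA specification adopts for model-generated output.

```typescript
// Hypothetical wiring for a generative-image service: embed a credential
// marking the output as synthetic before it leaves the server.
// signAndEmbedManifest() is a stand-in, NOT a real SDK function; a real
// implementation would come from a C2PA SDK plus a signing certificate.
import { readFile, writeFile } from 'node:fs/promises';

async function labelGeneratedImage(
  inPath: string,
  outPath: string,
): Promise<void> {
  const pixels = await readFile(inPath);

  // The manifest claims the image was created by a model, using the IPTC
  // "trained algorithmic media" source type for AI-generated content.
  const signed = await signAndEmbedManifest(pixels, {
    claimGenerator: 'example-image-model/1.0', // illustrative name
    assertions: [
      {
        label: 'c2pa.actions',
        data: {
          actions: [
            {
              action: 'c2pa.created',
              digitalSourceType:
                'http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia',
            },
          ],
        },
      },
    ],
  });

  await writeFile(outPath, signed);
}

// Stand-in declaration so the sketch type-checks.
declare function signAndEmbedManifest(
  asset: Buffer,
  manifest: unknown,
): Promise<Buffer>;
```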
