
OpenAI Among Big Tech Companies Collaborating To Protect Elections From AI Deception


The signatories of the tech accord include OpenAI, Microsoft, Adobe, and social media platforms like Meta Platforms, TikTok, and X.

On February 16, a coalition of 20 technology companies announced a joint commitment to work together to prevent deceptive artificial intelligence (AI) content from interfering with elections around the world this year. The announcement, made at the Munich Security Conference, comes amid growing concern that generative AI, which can create text, images, and video in seconds, could be misused to sway major elections.

The signatories of the tech accord include companies involved in developing generative AI models, such as OpenAI, Microsoft, and Adobe, as well as social media platforms like Meta Platforms, TikTok, and X (formerly known as Twitter). These platforms are particularly focused on the challenge of keeping harmful content off their sites.

The agreement outlines commitments to develop tools for detecting misleading AI-generated content, launch public awareness campaigns to educate voters about deceptive content, and take action against such content on their services. Possible technologies for identifying AI-generated content or certifying its origin could involve watermarking or embedding metadata.
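As a rough illustration of the metadata idea (a sketch only, not any signatory's actual scheme, and much simpler than real provenance standards such as C2PA), a provenance record can bind an asset to its declared origin by storing a cryptographic hash of its bytes; any later alteration then fails verification. All names and fields below are hypothetical.

```python
# Toy "content credential": a JSON record binding an asset's bytes to its
# declared origin via a SHA-256 hash. Illustrative only; field names are
# hypothetical and real schemes (e.g. C2PA) also cryptographically sign
# the record.
import hashlib
import json

def make_credential(asset_bytes: bytes, generator: str) -> str:
    """Return a JSON metadata record describing the asset's origin."""
    return json.dumps({
        "generator": generator,  # e.g. the name of the AI model that made it
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
    })

def verify_credential(asset_bytes: bytes, credential: str) -> bool:
    """Check that the asset is unaltered since the credential was created."""
    record = json.loads(credential)
    return record["sha256"] == hashlib.sha256(asset_bytes).hexdigest()

image = b"\x89PNG...fake image bytes"  # stand-in for real image data
cred = make_credential(image, "example-image-model")
assert verify_credential(image, cred)             # untouched asset passes
assert not verify_credential(image + b"!", cred)  # tampered asset fails
```

The limitation this sketch shares with real metadata approaches is that the record only helps if platforms check it: stripping the metadata removes the provenance signal, which is one reason the accord also covers detection tools.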

The accord does not specify a timeline for meeting these commitments or detail how each company will implement them. Nick Clegg, Meta Platforms' president of global affairs, said he favoured a unified, interoperable approach over a fragmented set of individual commitments.

Generative AI has already been used to influence politics, as demonstrated by a robocall in January that used fake audio of U.S. President Joe Biden to discourage New Hampshire voters from participating in the state’s presidential primary election. 

Despite the focus on AI-generated text by tools like OpenAI’s ChatGPT, the tech companies are prioritising efforts to mitigate the harmful effects of AI-generated photos, videos, and audio. According to Dana Rao, Adobe’s chief trust officer, this is because people tend to be more sceptical of text and have a stronger emotional connection to audio and visual media, which the brain is more inclined to believe.
