
OpenAI, Microsoft launch new tools to combat AI election disinformation


Tools like OpenAI's Sora have increased alarm about the potential for AI-powered election misinformation, a threat the AI startup and partner Microsoft are now tackling. (Image: OpenAI)

OpenAI and Microsoft are rolling out tools to help researchers detect AI-generated content, alongside a new fund supporting media literacy efforts, in a bid to combat election-related disinformation.

They have earmarked $2 million for voter education programs to boost AI literacy through what they're calling the "Societal Resilience Fund."

The fund will provide grants to organizations including AARP, the Coalition for Content Provenance and Authenticity (also known as C2PA), the International Institute for Democracy and Electoral Assistance and Partnership on AI.  

The goal is to "further AI education and literacy among voters and vulnerable communities," Microsoft said in a press release announcing the fund.

Microsoft and OpenAI are members of the Partnership on AI, a nonprofit based in San Francisco that was founded in 2016.

OpenAI also announced on Tuesday that it was joining the C2PA's steering committee, whose members already include Microsoft, Google, Intel, Adobe and the BBC. C2PA sets standards for tracking the provenance of digital content such as images, videos and audio.

OpenAI has already started adopting C2PA standards into its products.

"Earlier this year we began adding C2PA metadata to all images created and edited by DALL-E 3, our latest image model, in ChatGPT and the OpenAI API. We will be integrating C2PA metadata for Sora, our video generation model, when the model is launched broadly as well," OpenAI said in a blog post on Tuesday.

The inclusion of metadata isn't foolproof, but it's an important tool for tracking and identifying AI-generated content.

"People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust. As adoption of the standard increases, this information can accompany content through its lifecycle of sharing, modification, and reuse," OpenAI wrote.

OpenAI is also making a new image detection tool available to researchers that predicts whether an image was produced with DALL-E 3, the latest version of the company's image generator. The tool correctly identifies images produced by DALL-E 3 around 98% of the time, the company said, though it is not yet optimized for detecting images created by other companies' software. A tool for audio watermarking is also in the works.

