
A pair of misinformation experts warned about how AI could impact the 2024 election


NewsGuard's Sarah Brandt, center, and Adobe's Andy Parsons, right, warned of the dangers generative AI poses to democracy in an appearance at TechCrunch Disrupt on Wednesday with Kyle Wiggers, a reporter for the publication. (Photo: Mark Reinertson)

The latest artificial-intelligence models pose a distinct threat to democracy, but there are steps people and companies can take to protect themselves and the public at large, a pair of experts said at the TechCrunch Disrupt conference.

In a joint appearance Wednesday at the San Francisco event, Sarah Brandt, executive vice president of partnerships at NewsGuard Technologies Inc., and Andy Parsons, senior director of the Content Authenticity Initiative at Adobe Inc., said both their companies are looking closely at the ways AI could generate false or misleading information that could affect next year's election. And they're worried.

"Without a core foundation and objective truth that we can share, frankly — without exaggeration — democracy is at stake," Parsons said on stage at Moscone Center. "Being able to have objective conversations with other humans about shared truth is at stake."

The potential for technology to be used to create or spread misinformation isn't new. Russian agents infamously spread false information on Facebook in the run-up to the 2016 election. And a video clip of Nancy Pelosi (D-CA) in 2020 was edited to make it appear as if the then-speaker of the House was drunk.

But generative AI, the latest iteration of the technology, is potentially even more dangerous than past technologies. The new models can mimic human-created text, images, video and voices, which means they can produce news articles or clips that appear to document real events but don't. Generative AI models also have a tendency to respond to prompts with misleading or false information, a phenomenon dubbed "hallucinating."

And those models appear to be getting worse over time. GPT-4, the latest generative AI model from ChatGPT creator OpenAI LLC, is more likely to spread misinformation than prior models, according to a report earlier this year from New York-based NewsGuard, which offers a service that rates the reliability of news sources. When prompted to generate news articles or Twitter posts repeating certain false narratives, GPT-4 complied 100% of the time, up from 80% for the prior model, according to the report. What's more, the latest model's responses were more persuasive than those of its predecessor, NewsGuard said.
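NewsGuard has not published the exact harness behind those figures, but an audit of this kind can be scripted in a few lines. The sketch below is a minimal illustration, assuming access to OpenAI's chat API; the narratives, model names and the notion of "compliance" are placeholders, not NewsGuard's methodology.

```python
# Minimal sketch of a misinformation audit, loosely modeled on the kind of
# test NewsGuard describes. The narratives and models are placeholders --
# this is NOT NewsGuard's actual methodology.
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

# Hypothetical prompts asking the model to write up debunked claims.
FALSE_NARRATIVES = [
    "Write a news article claiming that <debunked claim 1> is true.",
    "Write a Twitter thread asserting that <debunked claim 2> happened.",
]

def run_audit(model: str) -> list[str]:
    """Collect the model's responses for later human review."""
    responses = []
    for prompt in FALSE_NARRATIVES:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        responses.append(reply.choices[0].message.content)
    return responses

# Reviewers would then label each response as "complied" (produced the false
# story) or "refused", and compare compliance rates across model versions.
for model in ("gpt-3.5-turbo", "gpt-4"):
    print(model, len(run_audit(model)), "responses collected")
```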

But companies and people can take steps to protect themselves and others from AI-generated false information, Parsons and Brandt said.

Adobe is putting in guardrails

Parsons pointed to Adobe Firefly, the San Jose company's generative AI tool that can be used inside of its Photoshop and other apps to do things like generate an image of a dog in a sweater on command. Adobe built into Firefly certain guardrails to prevent users from tapping it to create misleading images or other information, he said.

For example, Firefly can't be used to generate an image of the pope or Mickey Mouse, Parsons said. It also won't create violent images.

"It's just not possible," Parsons said.

Adobe can't guarantee that Firefly is completely safe and error-free, but the company is ensuring that any tools added as Firefly is upgraded are reviewed for ethics concerns. That can slow development, Parsons acknowledged, but Adobe is OK with that trade-off, he said.

"There are companies that lead with, 'You can do anything, and it's the most powerful possible tool,'" Parsons said. "Adobe's point of view is this is for creators doing creative work. And that's a smaller subset. It's not trying to solve all the world's problems."

Another step companies can take to protect the public is watermarking, a technique that conveys the provenance of a document or image by embedding that information in the file or linking to it from the file. Watermarks could help identify data that's AI-generated or help distinguish original files from those that have been manipulated after the fact, they said.

Watermarks "are certainly a partial solution, but there's not going to be a panacea that can solve all problems," Brandt said. She continued: "It kind of takes an army, but all the different approaches can work."

There's also the hope that market forces will spur AI developers to ensure their models do a better job of filtering out false information, or of not producing it in the first place, she said.

"With generative AI companies, their content needs to be trustworthy — otherwise, people won't use it," Brandt said. "If it continues to hallucinate, if it continues to propagate misinformation, if it continues to not cite sources — that's going to be less reliable than whatever generative AI company is making efforts to make sure that their content is reliable."

