
Will AI destroy humanity? 42% of CEOs surveyed think so.


Among CEOs who took part in a recent survey about artificial intelligence, 8% said AI could lead to the end of humanity within the next five years while another 34% said that kind of outcome could happen within 10 years.
Oscar Wong/Getty Images

The idea that artificial intelligence is an existential threat to humanity has many of the nation's business leaders more than a little spooked.

According to a survey of 119 chief executive officers who attended a Yale CEO Summit this week, 42% believe AI has the potential to destroy humanity within the next decade.

The survey was conducted at a virtual event held by Yale Prof. Jeffrey Sonnenfeld at his Chief Executive Leadership Institute.

"It's pretty dark and alarming," Sonnenfeld said in an interview with CNN. The network had received exclusive access to the polling.

Sonnenfeld told CNN that participants in the new survey included Walmart CEO Doug McMillon, Coca-Cola CEO James Quincey, and leaders from the information technology, pharmaceutical, media and manufacturing industries.

In the survey, 8% of CEOs said the potential end of humanity could happen in the next five years while another 34% put the time frame at within 10 years. Still, a majority — the remaining 58% — said they were not worried about AI in this way.

Among those who have come to AI's defense recently is Marc Andreessen, a general partner and cofounder of the venture capital firm Andreessen Horowitz. In a blog post, Andreessen said those warning about AI were "freaking out" and described them as part of a "cult." His firm has made several AI-related investments.

Sonnenfeld told CNN that when it comes to AI, business leaders fall into five groups:

  • "Curious creators" — "They are like Robert Oppenheimer," Sonnenfeld said, referring to the famed theoretical physicist who is often called the "father" of the nuclear bomb.
  • "Euphoric true believers," who only see technology as a force of good.
  • "Commercial profiteers," who want to cash in on the new technology. “They don’t know what they’re doing, but they’re racing into it,” Sonnenfeld said.
  •  And, finally, "alarmist activists" and "global governance advocates," who are looking to regulate or clamp down on AI development.

The CEOs' concerns echo those voiced by prominent tech executives, including many in Silicon Valley and the broader tech sector who are themselves investing in and working on AI-related projects. In May, Sam Altman, CEO and cofounder of OpenAI LLC, the San Francisco company behind the artificial intelligence chatbot ChatGPT, was among the signatories of a letter warning of the risks of letting AI development proceed without guardrails.

The simple, one-sentence letter read: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Others who signed it were Demis Hassabis, chief executive of Google DeepMind; Dario Amodei, chief executive of San Francisco-based Anthropic PBC; and Microsoft Corp. co-founder Bill Gates.

