
AI is as Risky as Pandemics and Nuclear War, Top CEOs Warn

Medical professionals are debating the potential benefits and risks of using artificial intelligence in the healthcare sector. Credit: mikemacmarketing / Wikimedia Commons CC BY 2.0

Artificial intelligence (AI) could lead to the extinction of humanity, CEOs – including the heads of OpenAI and Google DeepMind – warned in a statement published on Tuesday.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement, released by California-based non-profit the Center for AI Safety, says.

Dozens of CEOs of the world’s leading artificial intelligence companies, along with hundreds of other AI scientists and experts, have signed the statement.

The CEOs of what are widely seen as the three most cutting-edge AI labs—Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Dario Amodei of Anthropic—are all signatories to the letter.

Earlier in May, Altman warned the US Congress that artificial intelligence development could go badly wrong and called on the government to regulate the sector. “If this technology goes wrong, it can go quite wrong, and we want to work with the government to prevent that from happening,” said Altman.

Among the signatories is Geoffrey Hinton, widely acknowledged as the “godfather of AI,” who made headlines last month when he stepped down from his position at Google and warned of the risks AI posed to humanity.

AI dangers highlighted by top CEOs

The letter is the latest effort by those within the tech industry to urge caution on AI. In March, a separate open letter called for a six-month pause on AI development.

That letter was signed by prominent tech industry figures including Elon Musk, but it lacked sign-on from the most powerful people at the top of AI companies, and drew criticism for presenting a solution that many said was implausible.

Tuesday’s letter is different because many of its top signatories occupy powerful positions within the C-suite, research, and policy teams at AI labs and the big tech companies that pay their bills.

Kevin Scott, the CTO of Microsoft, and James Manyika, a vice president at Google, are also signatories to the letter. (Microsoft is OpenAI’s biggest investor, and Google is the parent company of DeepMind.)

Widely revered figures on the technical side of AI including Ilya Sutskever, OpenAI’s chief scientist, and Yoshua Bengio, winner of the Association for Computing Machinery’s Turing Award, are also signatories.

“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks,” the statement reads, noting that its purpose is to “overcome this obstacle and open up discussion.”

The statement adds: “It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
