AI could make humans extinct, say top experts and CEOs


There have been conflicting views on the risks to humanity posed by artificial intelligence, with some even going as far as suggesting that AI could make humans extinct. Surprisingly, however, that latter view is shared by many leading experts in artificial intelligence – including the CEOs of both OpenAI and Google DeepMind …

It’s the sort of statement you’d normally expect from conspiracy theorists living in their mom’s basement, but this one couldn’t have more impeccable credentials. The list of signatories reads like a Who’s Who of tech generally, and of AI science in particular.

Signatories include renowned academics, and – tellingly – both Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of Google DeepMind.

The warning comes in the form of a single sentence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

A preamble states that the statement is intended to open discussion, but also notes that a growing number of experts genuinely believe the stakes could be this high.

AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

The NY Times notes that the signatories also include two of the biggest names in AI.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)

It follows an earlier open letter calling for a six-month pause on the development of more advanced generative AI models. Signatories included Apple cofounder Steve Wozniak.

That earlier letter said that current AI development is out of control, and may pose “profound risks to society and humanity.”

The latest statement has been described as a “coming out” for AI experts who have been expressing their concerns privately, but who until now have been afraid to do so publicly for fear of ridicule. The statement provides safety in numbers and in reputation: it would now take an even braver person to dismiss fears voiced by so many luminaries.

What’s your view? Could AI represent an existential threat to humanity? Please share your thoughts in the comments.

Image: Google DeepMind/Unsplash


