According to the Center for AI Safety, mitigating the risk of extinction from artificial intelligence should be "a global priority" alongside pandemics and nuclear war.
A group of artificial intelligence leaders has claimed that the technology could be as risky as nuclear war itself.
The statement was published by the Center for AI Safety, an organization dedicated to "reducing risks on a societal scale from artificial intelligence." Among the signatories to the letter are prominent figures such as Sam Altman, CEO of OpenAI, the company behind ChatGPT.
The risks of AI
In this second open letter from specialists, it is stated that "mitigating the risk of extinction from artificial intelligence should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."
"Artificial intelligence experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent AI risks," the statement reads. "Even so, it can be difficult to voice concerns about some of the most serious risks of advanced artificial intelligence. This statement therefore aims to overcome that obstacle and open up the debate. It is also intended to create common knowledge of the growing number of experts and public figures who take some of the most serious risks of this technology seriously."
The short letter was signed by OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, as well as Turing Award winners Geoffrey Hinton and Yoshua Bengio.
This is the second letter from experts calling for awareness about the development of artificial intelligence.
In March, Elon Musk, Steve Wozniak, and more than 1,000 other scientists and industry figures called for a six-month pause in the development of the technology in order to draft legislation regulating it and to inform the public.
U.S. President Joe Biden himself has been meeting with leading figures in the field to "make sure it is safe before it becomes massive."