Artificial Intelligence (AI) experts have expressed fear that the technology could put the future of humanity at risk.
Some of the biggest names in AI development have therefore warned world leaders to act to "mitigate the risk of extinction," according to a report by ITV.
In a short statement, business and academic leaders said the risks from AI should be treated with the same urgency as “pandemics or nuclear war.”
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they said.
The statement was organised by the Center for AI Safety, a San Francisco-based non-profit that aims "to reduce societal-scale risks from AI."
The organisation said the use of AI in warfare could be "extremely harmful," as it could be used to develop new chemical weapons and enhance aerial combat.
The list of signatories included dozens of academics, senior bosses at companies such as Google DeepMind, the co-founder of Skype, and the founders of AI company Anthropic.
AI is now in global consciousness after several firms released new tools, such as ChatGPT, allowing users to generate text, images, and even computer code by just asking for what they want.
The statement was signed by tech leaders including Geoffrey Hinton, who is sometimes nicknamed the "Godfather of AI".
Earlier this year, he quit Google and warned of how dangerous the future of the technology could be.
For more than a decade he helped develop the software that paved the way for AI systems such as ChatGPT.
He previously told the New York Times that he regretted his work, saying "bad actors" would use new AI technologies to harm others and that the technology could spell the end of humanity.
Sam Altman and Ilya Sutskever, the chief executive and co-founder respectively of ChatGPT developer OpenAI, also signed the statement.
Just weeks ago, Mr. Altman told US politicians that government intervention “will be critical to mitigating the risks of increasingly powerful” AI systems.
Speaking before Congress, Mr. Altman said: "As this technology advances, we understand that people are anxious about how it could change the way we live. We are too."
He proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”