According to Rishi Sunak’s AI adviser, artificial intelligence (AI) could become powerful enough to “kill many humans” in just two years.
Even short-term hazards, according to Matt Clifford, are “pretty scary,” with AI having the capacity to construct cyber and biological weapons that may kill many people.
It comes as Mr Sunak prepares to travel to the United States to persuade President Joe Biden of his “grand plan” for the United Kingdom to be at the forefront of international AI legislation.
The Prime Minister wants Britain to host an AI watchdog comparable to the International Atomic Energy Agency, and he will also propose a new international research organisation.
According to the No. 10 adviser, unless AI developers are regulated on a global scale, there may be “very powerful” systems that humans will struggle to control.
Mr Clifford’s intervention follows a statement signed by hundreds of leading experts, including many AI pioneers, warning that the technology’s threats should be tackled with the same seriousness as pandemics or nuclear weapons.
Senior executives from Google DeepMind and Anthropic, as well as the so-called “godfather of AI,” Geoffrey Hinton, signed the letter. Mr Hinton left Google earlier this month, saying that AI in the wrong hands might herald the end of humanity.
When asked what probability he placed on AI wiping out humanity, Mr Clifford answered: “I think it is not zero.”
“If we go back to things like bioweapons or cyber [attacks], you can have really very dangerous threats to humans that could kill many humans – not all humans – simply based on where we expect models to be in two years,” he said.
“I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t.”