The AI task force advisor to the prime minister in the United Kingdom said humans have roughly two years to control and regulate artificial intelligence (AI) before it becomes too powerful.
In an interview with a local UK media outlet, Matt Clifford, who also serves as the chair of the government’s Advanced Research and Invention Agency (ARIA), stressed that current systems are getting “more and more capable at an ever-increasing rate.”
He went on to say that if officials don’t start considering safety and regulations now, the systems will become “very powerful” in two years’ time.
“We’ve got two years to get in place a framework that makes both controlling and regulating these very large models much more possible than it is today.”
Clifford warned that there are “a lot of different types of risks” when it comes to AI, both near-term and long-term ones, which he called “pretty scary.”
The interview followed a letter published the previous week by the Center for AI Safety (CAIS), signed by 350 AI experts, including the CEO of OpenAI, which said AI should be treated as an existential threat on par with nuclear weapons and pandemics.
“They’re talking about what happens once we effectively create a new species, sort of an intelligence that’s greater than humans.”
The AI task force advisor said that these threats posed by AI could be “very dangerous” ones that could “kill many humans, not all humans, simply from where we’d expect models to be in two years’ time.”
According to Clifford, the main priority for regulators and developers should be understanding how to control the models and then implementing regulations on a global scale.
For now, he said his greatest fear is the lack of understanding of why AI models behave the way they do.
“The people who are building the most capable systems freely admit that they don’t understand exactly how [AI systems] exhibit the behaviors that they do.”
Clifford highlighted that many of the leaders of organizations building AI also agree that powerful AI models must undergo some type of audit and evaluation process prior to their deployment.
Currently, regulators around the world are scrambling to understand the technology and its ramifications while trying to create regulations that protect users and still allow for innovation.
On June 5, officials in the European Union went so far as to suggest mandates that all AI-generated content should be labeled as such in order to prevent disinformation.
In the UK, a minister in the opposition party echoed the sentiments of the CAIS letter, saying the technology should be regulated as medicine and nuclear power are.