The government has unveiled plans for how it will regulate AI technology, saying it will “accelerate” the development of AI in the UK while tackling the potential risks to society of rapidly emerging computer intelligence.
These regulations will apply to all applications of artificial intelligence, including powerful “language models” such as the eye-catching ChatGPT and image-generation software such as Midjourney.
The ability of these algorithms to pass exams and write poems, as well as to generate misinformation and false images, arouses awe and anxiety in equal measure.
“We do not deny that there are risks,” said Science, Innovation and Technology Minister Michelle Donelan. “That’s why we have a commensurate framework for this approach to regulation, a framework that can help the UK seize opportunities.”
Ms Donelan spoke to Sky News during a visit to DeepMind, a British artificial intelligence company now owned by Google, which used its AlphaFold AI to solve the structure of nearly every known protein last year. The development is a landmark moment in understanding biology and could lead to faster, safer drug development.
AI has enormous potential to increase business productivity, improve learning and access to public services, and revolutionize healthcare. The government claims the industry was worth £3.7bn to the UK economy last year.
It hopes to give artificial intelligence companies a regulatory environment with less legal and administrative red tape than competing economies.
So, it didn’t propose new laws. Instead, it wants existing regulators, such as the Health and Safety Executive and the Competition and Markets Authority, to apply key principles of safety, transparency and accountability to emerging AI.
In a very Silicon Valley-sounding move, the government is even offering a £2m “sandbox” for AI developers to test how regulation can be applied to AI before bringing it to market.
But is a “light-touch” approach to regulation a mistake in the face of looming fears that AI could spin out of control or be misused?
There have been examples of text- and image-based AI generating misinformation, such as entirely fabricated images of Donald Trump being arrested and then triumphantly escaping, or of the Pope in a white puffer jacket.
Not to mention AI being used by hackers or crooks to write computer viruses or to peddle more convincing online scams.
Faced with this situation, the EU has proposed strong AI legislation and a “risk-based” approach to AI regulation.
‘If we legislate now it will be obsolete’
The UK government has made the not unreasonable point that it is difficult to know what AI law should say because we don’t know what tomorrow’s AI will look like.
“If we legislate now, it will be obsolete,” Ms Donelan said. “We want a process that can be nimble, agile, accountable, that can prioritize security, that can prioritize transparency, but that can keep pace with the changes that are happening in the industry.”
The government said it had not ruled out future legislation to regulate AI, even as Ms Donelan made an unabashed pitch for the UK as an attractive home for AI companies.
“Shouldn’t the UK take the lead? Shouldn’t we be securing benefits for our NHS, our public services, our transport network, our education system?” she said.
But governments have struggled to protect the privacy and safety of children online. When it comes to artificial intelligence, their regulatory battle with Big Tech may just be beginning.
“Many [Big Tech companies] seem to me to have a genuine desire to do what’s best for humanity,” said Professor Anil Seth, a cognitive scientist at the University of Sussex. “Unfortunately, that’s not how markets work, and companies will be rewarded for their share price.”
Many experts point to the fierce competition now under way between Google, which is eager to release its AI chatbot Bard, and Microsoft, which has built OpenAI’s GPT-4 language model into its Bing search engine.
These tools can simulate and interpret natural human language, or “understand” images, so well that even their developers seem unsure exactly how they do it. Yet they have been released publicly for us all to try. Is that a laudably open and transparent way of introducing AI to the world, or a recipe for disaster?
“Good intentions are not enough,” Professor Seth said. “We really need good intentions and sensible and enforceable regulation.”