Elon Musk and a group of artificial intelligence experts have called for a pause in the training of powerful AI systems, citing potential risks to society and humanity.
The letter, published by the nonprofit Future of Life Institute and signed by more than 1,000 people, warns that AI systems that compete with humans could cause economic and political disruption to society and civilization.
“AI systems with human-competitive intelligence could pose profound risks to society and humanity,” the letter warns.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
It calls for a six-month pause in the “dangerous race” to develop systems more powerful than OpenAI’s newly launched GPT-4.
If such a pause cannot be enacted quickly, the letter says, governments should step in and institute a moratorium.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for the design and development of advanced AI, subject to rigorous audit and oversight by independent outside experts,” the letter said.
“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
The letter was also signed by Apple co-founder Steve Wozniak; Yoshua Bengio, often called one of the “godfathers of artificial intelligence”; Stuart Russell, a pioneer of research in the field; and researchers at Alphabet-owned DeepMind.
The Future of Life Institute is largely funded by the Musk Foundation, the London-based effective altruism group Founders Pledge, and the Silicon Valley Community Foundation, according to the EU’s Transparency Register.
Musk has been outspoken about his concerns about artificial intelligence. His automaker, Tesla, uses artificial intelligence for its self-driving system.
Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted competitors to accelerate the development of similarly large language models and encouraged companies to integrate generative AI models into their products.
UK unveils proposals for ‘light touch’ regulation around AI
The letter comes after the British government announced proposals for a “light touch” regulatory framework around artificial intelligence.
A policy paper outlining the government’s approach would distribute responsibility for governing AI among its human rights, health and safety, and competition watchdogs, rather than creating a new agency dedicated to the technology.
Meanwhile, earlier this week, Europol joined the chorus of ethical and legal concerns over advanced artificial intelligence such as ChatGPT, warning that such systems could be misused for phishing, disinformation and cybercrime.