
Elon Musk and others urge AI pause, citing ‘risks to society’

By Reuters
Updated: Apr 5, 2023, 12:30 GMT+00:00


(This March 29 story has been corrected to show that the Musk Foundation is a major, not the primary, donor to FLI, in paragraph 4)

By Jyoti Narayan, Krystal Hu, Martin Coulter and Supantha Mukherjee

(Reuters) – Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.

The Musk Foundation is a major donor to the non-profit, as well as London-based group Founders Pledge, and Silicon Valley Community Foundation, according to the European Union’s transparency register.

“AI stresses me out,” Musk said earlier this month. He is one of the co-founders of industry leader OpenAI and his carmaker Tesla uses AI for an autopilot system.

Musk, who has expressed frustration over regulators’ efforts to regulate the autopilot system, has sought a regulatory authority to ensure that development of AI serves the public interest.

“It is … deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars,” said James Grimmelmann, a professor of digital and information law at Cornell University.

“A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously.”

Tesla last month had to recall more than 362,000 U.S. vehicles to update software after U.S. regulators said the driver assistance system could cause crashes, prompting Musk to tweet that the word “recall” for an over-the-air software update is “anachronistic and just flat wrong!”

‘OUTNUMBER, OUTSMART, OBSOLETE’

OpenAI did not immediately respond to a request for comment on the open letter, which urged a pause on advanced AI development until shared safety protocols were developed by independent experts, and called on developers to work with policymakers on governance.

“Should we let machines flood our information channels with propaganda and untruth? … Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asked, saying “such decisions must not be delegated to unelected tech leaders.”

The letter was signed by more than 1,000 people, including Musk. Sam Altman, chief executive at OpenAI, was not a signatory, nor were Sundar Pichai and Satya Nadella, the CEOs of Alphabet and Microsoft.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI”, and Stuart Russell, a pioneer of research in the field.

The concerns come as ChatGPT attracts U.S. lawmakers’ attention with questions about its impact on national security and education. EU police force Europol warned on Monday about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the UK government unveiled proposals for an “adaptable” regulatory framework around AI.


“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,” said Gary Marcus, a professor at New York University who signed the letter.

“The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”

Since its release last year, OpenAI’s ChatGPT has prompted rivals to accelerate the development of similar large language models, and companies including Alphabet Inc are racing to build AI into their products.

Investors, wary of relying on a single company, are embracing competitors to OpenAI.

Microsoft declined to comment on the letter, and Alphabet did not respond to calls and emails seeking comment.

“A lot of the power to develop these systems has been constantly in the hands of a few companies that have the resources to do it,” said Suresh Venkatasubramanian, a professor at Brown University and former assistant director in the White House Office of Science and Technology Policy.

“That’s how these models are, they’re hard to build and they’re hard to democratize.”

(Reporting by Jyoti Narayan in Bengaluru, Krystal Hu in New York, Martin Coulter in London, and Supantha Mukherjee in Stockholm; Additional reporting by Aditya Soni and Jeffrey Dastin; Writing by Sayantani Ghosh; Editing by Gerry Doyle, Elaine Hardcastle and Deepa Babington)

