Elon Musk and a group of tech experts are calling for a six-month pause in the training of advanced artificial intelligence models following ChatGPT’s rise – arguing the systems could pose “profound risks to society and humanity.”
The CEO of Twitter and Tesla joined more than 1,000 experts in signing an open letter organized by the nonprofit Future of Life Institute, which is primarily funded by the Musk Foundation.
The group also gets funds from the Silicon Valley Community Foundation and the effective altruism group Founders Pledge, the European Union’s transparency register shows.
The letter details potential risks that advanced AI poses without proper oversight and calls for an industrywide pause until proper safety protocols have been developed and vetted by independent experts.
Risks include the spread of “propaganda and untruth,” job losses, the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us,” and the risk of “loss of control of our civilization.”
The experts pointed out that OpenAI itself recently acknowledged it may soon be necessary to “get independent review before starting to train future systems.”
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter says. “This pause should be public and verifiable, and include all key actors.”
Musk co-founded and was an early investor in OpenAI, the firm behind ChatGPT. He has since left the company’s board of directors and no longer has any involvement in its operations.
Shivon Zilis, an AI expert who gave birth to twins fathered by Musk via in vitro fertilization, also recently stepped down from OpenAI’s board. She had served as an adviser to OpenAI since 2016. Zilis, 37, is an executive at Neuralink, Musk’s brain chip company.
Despite his self-proclaimed fears about AI, Musk is exploring the possibility of developing a rival to ChatGPT, according to The Post.
Microsoft-backed OpenAI’s GPT-4, the latest version of the model that powers its ChatGPT chatbot, has shocked the public with its ability to generate lifelike responses to a huge variety of prompts – and stoked fears that AI will put many jobs at risk and ease the spread of misinformation.
Other notable signers of the letter include Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp and at least three employees affiliated with DeepMind, an AI research lab owned by Google parent Alphabet.
OpenAI CEO Sam Altman has not signed the letter.
Active AI labs and experts “should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” to ensure the systems are “safe beyond a reasonable doubt,” the letter adds.
“Such decisions must not be delegated to unelected tech leaders,” the letter says. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter adds.
Musk has repeatedly warned about the danger posed by the unrestrained development of AI technology – describing it last month as “one of the biggest risks to the future of civilization.”
Musk likened AI to the discovery of nuclear physics, which led to “nuclear power generation but also nuclear bombs.”
“I think we need to regulate AI safety, frankly,” Musk said. “Think of any technology which is potentially a risk to people, like if it’s aircraft or cars or medicine, we have regulatory bodies that oversee the public safety of cars and planes and medicine.”
“I think we should have a similar set of regulatory oversight for artificial intelligence, because I think it is actually a bigger risk to society,” he added.