Elon Musk, Steve Wozniak, and other tech innovators and AI experts have signed an open letter asking for a temporary pause of six months in the development of AI systems more advanced and powerful than OpenAI's GPT-4 (Generative Pre-trained Transformer 4).
The letter cites risks to society and civilization, including propaganda and misinformation spread through AI-generated articles and the prospect of AI programs outperforming human workers and making jobs obsolete.
- More than 1,000 tech leaders, including Tesla CEO Elon Musk, and AI experts are asking for a six-month pause on the development of AI systems more advanced and powerful than OpenAI's recently launched GPT-4, warning of risks to society and civilization.
- The letter warns that propaganda and misinformation spread through AI-generated articles pose a risk to society.
- OpenAI CEO Sam Altman's name is missing from the letter, and his comments suggest no plans to halt AI development.
Elon Musk, Steve Wozniak, and other artificial intelligence experts have issued an open letter calling for a pause in the development of new AI systems.
The letter was issued by the Future of Life Institute just a few weeks after OpenAI released the fourth edition of its GPT program, GPT-4, which created considerable buzz in the market, especially with its ability to deliver human-level performance on professional and academic benchmarks.
OpenAI's GPT-4 is capable of passing several Advanced Placement exams as well as the Uniform Bar Exam, LSAT, GRE, and more; it scored in the 90th percentile on the Uniform Bar Exam for aspiring lawyers.
A six-month pause in the development of AI systems more powerful than GPT-4
The open letter asks AI developers to immediately pause for six months the training of AI systems more powerful than the recently launched GPT-4. More than 1,000 people have already signed the letter, including Elon Musk.
The letter states that AI systems more powerful than GPT-4 should be developed only once developers are confident the outcomes will be positive and the potential risks manageable.
In addition, the letter warns that no one can "predict, understand or reliably control" the powerful new tools being developed in AI labs. The signatories highlight the risks of inaccuracy, lies, and propaganda spreading through AI-generated articles that can appear entirely real.
AI-generated content also raises the possibility that AI programs will begin to outperform human workers, potentially making some jobs obsolete.
Independent experts and artificial intelligence labs should use the six-month pause to develop a set of shared safety protocols for advanced AI design and development, rigorously audited and supervised by independent outside experts.
The letter also suggests that AI developers work with policymakers to accelerate the development of robust AI governance systems.
The signatories include Stability AI CEO Emad Mostaque, AI heavyweights Stuart Russell and Yoshua Bengio, and researchers at Alphabet-owned DeepMind, who describe a six-month pause as a step back from a dangerous race toward ever-larger, unpredictable black-box models with emergent abilities.
New York University professor Gary Marcus, who signed the letter, stated, "the letter isn't exactly perfect, however, the spirit is correct: AI development should be slowed down until we gain a better understanding of the ramifications."
Since ChatGPT's release last year, OpenAI has prompted rivals to accelerate the development of large language models, and businesses including Google's parent Alphabet are racing to add AI to their products.
OpenAI CEO Sam Altman's name was still missing from the letter as of Wednesday. Altman has said the company takes essential measures for AI safety: "I suppose we have been speaking about the safety and security issues the loudest, for the longest."