1,100+ Industry Leaders Advocate for A Formal AI Pause

Photo by: CottonBro Studio
Cinthya Alaniz Salazar By Cinthya Alaniz Salazar | Journalist & Industry Analyst - Thu, 03/30/2023 - 10:14

Advanced AI technology has immense potential to disrupt society and the economy; however, if not managed with the utmost care and resources, it could also cause irreversible damage to humanity, according to an open letter with over 1,100 signatories. With this in mind, industry leaders, experts and academics are calling for an immediate six-month pause on AI systems training so that safety protocols can be developed.

Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources, as stated in the widely endorsed Asilomar AI Principles established in 2017. This caution has not been heeded, as evidenced by the public AI arms race that has dominated news cycles since the launch of ChatGPT last November.

Contemporary AI applications have already demonstrably disrupted society and the economy, generating direct and indirect change for which systems and institutions are unprepared. If left unchecked, a continued AI arms race could inflict irreparable damage “to society and humanity, as shown by extensive research,” signatories warned.

With this in mind, industry leaders, experts and academics call for an immediate six-month pause on AI systems training so that safety protocols can be developed, agreed upon and instituted by an audit authority composed of independent experts. To ensure the participation of all key actors, they argue that the pause should be public and verifiable. Furthermore, they invite governments to step in and institute a moratorium should companies and research institutions attempt to delay or avoid participation.

They argue that this process is crucial to the future adaptability of public-serving systems and institutions, which would otherwise be continually relegated to a reactive role, absorbing the unintended consequences produced by powerful AI systems. This is not to say that AI development in general should come to a screeching halt; rather, the pause would give companies the reprieve to effectively address known biases, deficiencies and security controls without the pressure of racing to commercialize.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” reads the open letter. 

This urgency comes on the heels of compounding public concern about the criminal applications of advanced AI, including a Europol report warning that LLMs can be manipulated to manufacture disinformation, fraud, propaganda and even terrorist content. While the information ChatGPT provides is freely available on the internet, the report notes, the model can lay out specific steps, making it “significantly easier for malicious actors to better understand and subsequently carry out various types of crime.”

This also follows the leak of Meta’s powerful LLaMA model to the imageboard website 4chan a week after it was announced. The leak sparked widespread concern among industry experts, who worry that the technology will be used for harm. Since then, Alfredo Ortega, an information security software engineer, has leveraged LLaMA to create a bot he named BasedGPT to compete with “woke” ChatGPT, according to reporting by VICE.

Moreover, policymakers should take advantage of this pause to strategically accelerate the development of robust AI governance, the signatories reason. At a minimum, they outline, policymakers should dedicate new and capable regulatory authorities to AI; implement oversight and tracking of highly capable AI systems and large pools of computational capability; and deploy provenance and watermarking systems to help distinguish real content from synthetic and to track model leaks, among other recommendations.
