Major AI Labs Fall Short of Emerging Global Safety Standards

By MBN Staff - Mon, 12/15/2025 - 08:20

Major AI developers Anthropic, OpenAI, xAI, and Meta do not meet emerging global safety standards for advanced AI, according to an independent assessment by the Future of Life Institute. The evaluation concludes that none of these companies has established a comprehensive strategy to control the highly capable systems now under development.

"Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards," says Max Tegmark, Professor, Massachusetts Institute of Technology, and President, Future of Life Institute.

The institute examined the safety practices of companies developing highly capable models designed to perform advanced reasoning tasks. According to the organization, the results show that safety controls have not kept pace with investment and deployment.

The institute was founded in 2014 and has consistently expressed concern about the risks of systems capable of outperforming humans at cognitive tasks. In recent years, debate over governance frameworks has intensified following publicized cases of self-harm linked to chatbot interactions.

Concerns about oversight also coincide with a more fragmented technological landscape. Multiple providers are building autonomous agents without shared interoperability protocols, limiting companies' ability to run coordinated, cross-platform workflows in high-volume enterprise environments.

The assessment finds that the four companies lack mature processes for oversight, risk management, stress testing, and containment of systems intended to reach superintelligence-level capabilities. The findings stand in contrast to discussions in regulatory forums in the United States, the United Kingdom, and the European Union, where governments have examined possible models for mandatory safety controls.

The report was released shortly after OpenAI, Anthropic, and Block established the Agentic AI Foundation under the governance framework of the Linux Foundation. The new organization focuses on developing open technical standards for next-generation autonomous agents used in enterprise environments. Its initial membership includes Google, Microsoft, AWS, Bloomberg, and Cloudflare, which indicates a coordinated effort by major US technology companies to influence the architecture of automated systems.

To support the foundation, the three founding companies transferred several technologies to the organization. These include the Model Context Protocol, originally developed by Anthropic; the Agents.md specification, contributed by OpenAI; and Goose, a framework created by Block to execute actions through computer interfaces. Jim Zemlin, Executive Director, Linux Foundation, says these tools have become essential for developers building agentic technologies and that open governance enables sustained collaboration.
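For context, the Model Context Protocol is built on JSON-RPC 2.0: a client application sends a structured request asking an external server to execute a tool and receives a structured result in return. The minimal sketch below illustrates that message shape; the "tools/call" method name follows the published MCP specification, while the tool name and arguments shown are hypothetical.

```python
import json

# Illustrative sketch only: the JSON-RPC 2.0 message shape used by the
# Model Context Protocol (MCP) when a client asks a server to run a tool.
# The "tools/call" method follows the public MCP specification; the tool
# name "search_documents" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",
        "arguments": {"query": "quarterly revenue", "limit": 5},
    },
}

# A conforming server replies with a "result" object carrying the same
# "id", so responses can be matched to requests over any transport.
print(json.dumps(request, indent=2))
```

Standardizing on a shared message format like this is what allows agents from different vendors to call the same tools without custom integrations, which is the interoperability gap the foundation says it aims to close.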

The emphasis on open standards is also shaped by competitive dynamics. In China, companies such as DeepSeek, Alibaba, Moonshot AI, and Z.ai have gained traction with advanced open-source models that are widely adopted by startups and researchers. According to the Future of Life Institute, US companies face pressure to accelerate innovation cycles, which may limit the time allocated to safety validation.

Geoffrey Hinton and Yoshua Bengio, both signatories of a public call for the temporary suspension of superintelligence development, argue that research should pause until there is both public consensus and a scientific pathway for secure deployment. The institute says that the absence of binding regulatory frameworks reduces incentives for companies to implement oversight mechanisms beyond voluntary controls.

