Microsoft Unveils AI Principles to Balance Innovation, Security

By Diego Valverde | Journalist & Industry Analyst - Wed, 03/05/2025 - 11:10

Microsoft introduced 11 guiding principles for access to and governance of AI, with the aim of democratizing the technology and reinforcing security in its deployment. The guidelines were presented at Mobile World Congress 2024 in Barcelona.

"We are committed to continuous and innovative steps to make the AI models we host and the development tools we create widely available to AI software application developers around the world in a way that is consistent with the principles of responsible AI," writes Brad Smith, President, Microsoft, in the announcement shared by the company.

2024 was a year marked by the exponential growth of AI and a rise in sophisticated cyberattacks. As the Microsoft Digital Defense Report 2024 points out, the company's customers have been targeted by more than 600 nation-state threat actor groups, 300 cybercrime groups, and 200 influence operation groups. Against this backdrop, technology giants have had to strengthen their strategies to balance access to AI with security and compliance measures.

Microsoft has taken several steps to address these needs. It has partnered with OpenAI to develop advanced models and invested US$5.6 billion in data centers and AI training programs in the European Union. With these commitments, the company seeks to ensure that AI innovation is accessible to developers worldwide, allowing each nation to build its own digital economy on the technology.

Microsoft's AI Access Principles

The 11 principles set out by Microsoft, which seek to ensure the sound governance of AI, cover key aspects such as infrastructure, accessibility, interoperability, security, and accountability. First, the company will expand its cloud computing infrastructure to support both proprietary and open source AI models. This will enable developers and enterprises to access advanced tools more easily, promoting innovation across different industries.

"By providing a unified platform for managing AI models, our goal is to reduce the barriers and costs of developing AI models worldwide for both open source and proprietary development," reads Microsoft's statement.

Microsoft will facilitate global access to its AI models and development tools, giving programmers the ability to create AI-based applications without geographic restrictions. As part of this commitment, the company will continue to offer public APIs through Microsoft Azure, ensuring cross-platform compatibility and promoting an open and interoperable ecosystem.
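
As an illustration only, the following minimal sketch shows how a developer might call a model hosted on Azure through one of these public APIs, using the openai Python SDK's AzureOpenAI client. The endpoint, API key, API version, and deployment name are placeholders for this example and are not taken from Microsoft's announcement.

# Minimal sketch: calling a model hosted on Azure through its public API.
# All values below are placeholders, not details from Microsoft's announcement.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # hypothetical Azure resource
    api_key="<your-api-key>",                                   # hypothetical credential
    api_version="2024-02-01",
)

# "model" is the name of the deployment created in the Azure resource,
# not the public name of the underlying model.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Summarize Microsoft's AI Access Principles."}],
)
print(response.choices[0].message.content)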

A key aspect of this strategy is interoperability with mobile networks. Microsoft supports the GSMA's Open Gateway initiative, which enables mobile network operators to offer advanced capabilities to software developers. This integration will foster the creation of new applications and services that take full advantage of the capabilities of mobile networks and AI.

Microsoft will also give developers flexibility to commercialize their AI models. Creators will be able to sell their solutions in the Azure Marketplace or establish direct agreements with their customers, ensuring a business model more adaptable to their needs. In addition, the company has committed not to use non-public information that developers generate in Azure to compete against them.

Another key element is data portability. Microsoft will ensure that customers can export and transfer their information to other cloud providers without restrictions, facilitating data mobility and avoiding dependence on a single ecosystem.

Microsoft also committed to ensuring security throughout the development of the AI models and applications hosted in its data centers. "The infrastructure that underpins these solutions requires rigorous protection against physical and digital threats," says Microsoft.

The company says it employs advanced encryption, authentication, and authorization mechanisms to protect data both in transit and at rest, ensuring the integrity and confidentiality of AI models. It also leverages AI to strengthen its own cybersecurity, using advanced models that detect and mitigate cyberattacks, identify vulnerabilities, and strengthen the resilience of its systems.

To consolidate these measures, Microsoft has launched the Secure Future Initiative (SFI), a comprehensive strategy that encompasses three key pillars: AI-based cyber defenses, advances in fundamental software engineering, and the promotion of stricter international standards to protect civilians against cyberthreats.

Microsoft will support responsible AI development by applying standards based on principles of fairness, trustworthiness, security, privacy, transparency, and inclusion. These guidelines seek to mitigate risks and ensure that the technology is used ethically and in ways that benefit society. The company will also drive AI skills programs globally, offering training opportunities for workers in various industries and fostering a workforce prepared for digital transformation.

Finally, Microsoft will allocate resources to initiatives that promote responsible innovation in AI. This investment will enable the development of new solutions that meet high ethical standards and contribute to the sustainable growth of the technology across sectors.

"We know that the principles that govern our approach are only a first step. We expect that we will need to evolve these principles and our approach as technology and the AI industry advance and applicable laws and regulations change," writes Smith. "We look forward to continuing the dialogue with the many stakeholders who are now playing critical roles in building the new AI economy. If experience teaches us anything, it is that we all need to succeed together."
