Health Systems Need to Build Trust in AI Technologies: WEF
As AI continues to reshape global healthcare, its responsible adoption will hinge on regulatory adaptation, capacity-building, and collaborative frameworks, according to a new report by the World Economic Forum and Boston Consulting Group.
The report, Earning Trust for AI in Health: A Collaborative Path Forward, identifies three priorities for scaling AI responsibly across health systems: strengthening technical literacy among stakeholders, adapting regulatory frameworks, and fostering public-private partnerships. These measures are aimed at embedding trust and ensuring equitable access to innovation.
AI technologies offer the potential to improve outcomes and address challenges such as rising costs, workforce shortages, and systemic inefficiencies. However, traditional regulatory models, originally developed for static medical products, struggle to accommodate AI's evolving nature. Oversight typically emphasizes pre-market validation, yet many AI applications continue to learn and change after deployment, creating a mismatch between innovation and oversight.
To address this, the report recommends investing in the technical capacity of regulators, healthcare leaders, and developers. A deeper understanding of AI’s design and behavior is necessary for effective oversight and integration into health systems.
The report also calls for the adoption of more flexible regulatory models. These may include guidelines that supplement legislation, regulatory sandboxes for real-world testing, and life-cycle monitoring to assess AI systems after implementation. Programs like the Testing and Experimentation Facility for Health AI and Robotics (TEF-Health) are cited as examples of dynamic, independent environments that can help evaluate AI technologies beyond the pre-market phase.
Public-private collaboration is also critical, the report says. Rather than treating partnerships as informal consultation spaces, governments and industry should co-develop standards, monitoring tools, and quality assurance mechanisms to accelerate regulatory responsiveness while ensuring patient safety.
The report warns that without coordinated global action, fragmented approaches may hinder AI's impact and exacerbate health inequities, particularly between higher- and lower-resourced regions. The authors stress that capacity-building efforts in underfunded health systems must accompany innovation to avoid widening the digital divide. Ultimately, the report advocates for regulatory systems that are not only robust but also adaptable.