
Ethical AI in Healthcare: Designing for Impact in Mexico

By Cristina Campero Peredo | CEO, Prosperia

Mon, 06/09/2025 - 07:00

Artificial intelligence in healthcare is more than a technological shift; it is an ethical responsibility. In countries like Mexico, where structural inequalities shape access to care, AI must be designed not just to be efficient but to be fair. At retinIA, we focus on preventing blindness through AI-driven early detection of eye diseases such as diabetic retinopathy. Our experience screening over 100,000 patients in Mexico has shown that the biggest challenge isn't technical. It's ethical: how to build tools that reflect the realities and needs of those most often left behind by traditional healthcare systems.

To meet that challenge, AI must be developed and deployed with a commitment to three foundational principles: equity, transparency, and privacy. These pillars are essential to ensure that AI systems address — not reinforce — the inequities in our healthcare system. Companies must design with these values at the core, not as afterthoughts, if we want AI to be part of the solution in Mexico and beyond.

  • Justice and Equity

AI should reduce gaps, not widen them. This begins by identifying where health disparities are most acute and ensuring those realities are reflected in the data and design. In our case, we deliberately source our data from a range of geographic, age, and socioeconomic backgrounds across Mexico. We audit model performance by subgroups to ensure we aren’t inadvertently failing the very populations we aim to serve. If we find higher false-negative rates in a group, we act.
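To make that concrete, the sketch below shows one way a subgroup audit like this could look in code. It is an illustrative example rather than our production pipeline: the column names, subgroup labels, and the 5% false-negative ceiling are assumptions made for the purpose of the illustration.

```python
# Illustrative sketch of a subgroup audit, not Prosperia's actual pipeline.
# Assumes a pandas DataFrame of screening results with hypothetical columns:
# "y_true" (ground truth: referable disease), "y_pred" (model flag), and a
# subgroup column such as "region" or "age_band".
import pandas as pd


def false_negative_rate(group: pd.DataFrame) -> float:
    """FN / (FN + TP): the share of true cases the model missed in this group."""
    positives = group[group["y_true"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["y_pred"] == 0).mean())


def audit_by_subgroup(df: pd.DataFrame, subgroup_col: str, max_fnr: float = 0.05) -> pd.DataFrame:
    """Report the false-negative rate per subgroup, flagging groups above a chosen ceiling."""
    rows = []
    for name, group in df.groupby(subgroup_col):
        fnr = false_negative_rate(group)
        rows.append({"subgroup": name, "n": len(group), "fnr": fnr,
                     "needs_review": fnr > max_fnr})
    return pd.DataFrame(rows).sort_values("fnr", ascending=False)


# Example: audit_by_subgroup(results, "region") surfaces any region where missed
# cases exceed the ceiling, triggering review, retraining, or targeted data collection.
```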

This work is slow, but essential. Because underserved populations are often underrepresented in medical datasets, they require intentional inclusion. And because their needs are different, the product must adapt. Offering automated retinal screening in remote clinics doesn’t just fill a clinical gap; it is an ethical imperative.

  • Transparency, Responsibility, and Consent

Transparency isn’t a feature; it’s a foundation. Patients and doctors deserve to understand what an AI model does, how it does it, and where its limits lie. For example, even though our system outperforms first-year specialists in identifying risk, we deliberately limit its scope to preserve clinical judgment and patient trust. retinIA is not a diagnostic tool; it’s a screening solution designed to flag risk and refer patients to the appropriate specialist. Diagnosis and treatment decisions remain with clinicians. This not only supports regulatory compliance but also preserves the human layer of empathy, communication, and emotional care that is fundamental in healthcare.

In a country where formal guidance for AI in medicine is still evolving, we chose to self-regulate as an act of responsibility toward the users who trust our tools. Aligning with global standards — validating the tool rigorously, implementing traceability, and following frameworks such as the GDPR and FDA guidance — is our way of showing that user protection isn't negotiable. We also prioritize informed consent: patients and clinicians deserve to know not only how their data is handled and why they are being screened, but also how these tools fit into the larger care process. This level of transparency is critical for building the trust required to responsibly integrate AI into real-world healthcare.

This proactive approach builds trust, which must be earned through actions, not assumed through branding. In healthcare, trust stems from clarity, consistency, and humility. That’s why responsibility toward users must go beyond the technical. One way we do this is through the design of how we present our results, which are crafted not only for healthcare professionals but also include actionable messages for patients. This empowers individuals to better understand their own health and take part in decision-making, reinforcing trust in the system and ensuring the technology is truly serving those it’s meant to help.

  • Privacy and Security

Data protection is not just a compliance issue; it is a matter of dignity. Managing health data responsibly requires the highest levels of encryption, anonymization, and restricted access, with systems built to align fully with Mexico’s personal data protection law (LFPDPPP) and international standards.

In practice, this means building privacy into the design of our systems: separating identifiers, using secure cloud infrastructure, and creating audit trails. But it also means recognizing the social weight of medical data, particularly for marginalized groups who may have historical reasons to distrust institutions. Earning their trust requires going beyond legal checkboxes. It requires demonstrating, day after day, that their information is safe.
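As an illustration of what separating identifiers can look like in practice, the sketch below splits a patient record into an identity-table entry and a de-identified clinical record linked only by a pseudonym. It is a simplified example rather than our actual system; the field names and key handling are assumptions.

```python
# Illustrative sketch of identifier separation (pseudonymization), not a production design.
# Assumes patient records arrive as plain dicts; the key would live in a secrets manager,
# and the field names below are hypothetical.
import hashlib
import hmac

PSEUDONYM_KEY = b"replace-with-secret-from-a-secrets-manager"


def pseudonymize(record: dict) -> tuple[dict, dict]:
    """Split a record into an identity-table entry and a de-identified clinical record."""
    # A keyed hash yields a stable pseudonym, so clinical data can be linked across visits
    # without storing the patient's name or national ID alongside it.
    pseudonym = hmac.new(PSEUDONYM_KEY, record["patient_id"].encode(), hashlib.sha256).hexdigest()
    identity = {"pseudonym": pseudonym,
                "patient_id": record["patient_id"],
                "name": record["name"]}
    clinical = {"pseudonym": pseudonym,
                "age_band": record["age_band"],
                "retinal_image_ref": record["retinal_image_ref"],
                "screening_result": record["screening_result"]}
    return identity, clinical


# The identity and clinical tables would be stored and access-controlled separately,
# and every read of the identity table would be written to an append-only audit log.
```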

To truly uphold the principles of equity, transparency, and privacy in AI, we must also build the right teams. Responsible AI cannot be developed in a vacuum. Teams that bring together diverse perspectives — across disciplines, regions, and lived experiences — are better equipped to detect blind spots, challenge assumptions, and design technology that works for a broader population.

Diversity is not just a value, it’s a critical input for ethical decision-making. By including a wider range of voices in the development process, we can better anticipate unintended consequences, validate tools across real-world contexts, and stay grounded in the needs of those most often excluded from innovation.

At PROSPERiA, we prioritize building teams with this in mind. We actively seek individuals from different geographies, educational pathways, and professional backgrounds. This inclusive approach helps ensure that our AI tools are not just technically sound, but socially responsible and aligned with the lived realities of our communities. If we limit who gets to build AI, we limit who benefits from it. But if we empower diverse voices and design with the goal of closing systemic gaps, AI can become one of the most powerful tools we have to create a more just and equitable future.

At the same time, Mexico has an extraordinary pool of talent capable of leading the global conversation on ethical and impactful AI. What’s needed is the right scaffolding: clear regulation, inclusive education, strong public-private collaboration, and sustained support for mission-driven startups. This is especially critical when it comes to deploying AI to bridge, rather than deepen, the healthcare and social gaps that persist across the country.
