
AI in Healthcare: Airbag Path or Asbestos Risk?

By Waleed Mohsen - Verbal
Founder & CEO

Wed, 11/05/2025 - 07:00

AI in healthcare is coming. Actually, it's already here, and in more places than most people even realize.

And I'm convinced that's a win. AI tools are streamlining administrative work that wastes clinician time, improving care quality, lowering costs, and making more care accessible for more people.

Healthcare needs transformation, and AI is delivering it.

But even as the CEO of an AI healthcare company, I think we're moving fast. Very fast. 

And while the potential benefits are immense, we can't let our excitement about what AI can do blind us to its risks, especially in healthcare.

The question isn't whether AI belongs in healthcare. It’s which path we choose to take.

Will we take the “airbag path,” based on rigorous safeguards, testing, and independent oversight?

Or will we take the “asbestos path,” adding AI everywhere because it works, it’s cheap, and we can’t see anything going wrong (at least not yet)?

One path gives us all the benefits of AI in healthcare while keeping us vigilant about its risks and putting patient safety first. The other prioritizes speed, savings, and FOMO, and could leave us realizing too late that we've created systemic risks we can't easily extract.

The Perils of the Asbestos Path

For most of the 20th century, asbestos was widely seen as something of an industrial "miracle material." 

There was simply nothing like it: it was fireproof, resistant to chemical corrosion, and electrically non-conductive. It was strong enough to reinforce cement yet flexible enough to weave into fabrics. There was plenty to go around, and best of all, it was cheap.

So it was everywhere. Shingles and tiles, insulation, brake pads, and oven mitts. Even fake Christmas snow.

By the time we realized it was dangerous, asbestos was pervasive. Removing it was a huge challenge, and it continued to do damage in the meantime.

AI seems to be following a similar path. 

Since ChatGPT exploded onto the scene, AI features have been popping up in more and more domains, hooking more and more users and taking on more and more responsibility. AI tools can write, code, paint, chat with customers, optimize stock trades, and tutor our kids. And they can do it 24 hours a day, seven days a week, all at a fraction of the cost.

Now it's transforming healthcare, playing a role in everything from documentation and billing to scheduling, diagnostics, and clinical decision support. AI scribes listen to patient encounters and generate notes. Chatbots triage patient concerns. AI agents tackle prior authorizations and insurance denials, collaborate with providers on diagnoses and treatment plans, and even interface directly with patients during virtual visits.

These tools are genuinely useful. But that’s the problem. They’re so useful, it’s hard to resist adding AI everywhere.

Like asbestos, AI is versatile. Like asbestos, AI is cost-effective. And as was once true with asbestos, its risks aren't always immediate or obvious.

This creates a perfect storm of safety risk in healthcare:

We can't measure what we don't track. Few organizations are able to systematically identify and track negative outcomes, errors, or biases at scale. With only a fraction of medical documentation and interactions being audited, AI scribe hallucinations and AI agent medical errors can easily be missed, with risk compounding quietly across thousands of patients.

Low error rates mask the scale of risk. In a recent study, AI transcriptions were estimated to have a 1.4% hallucination rate, but nearly 40% of those hallucinations were harmful or concerning in some way. That sounds small, but in a large health system it could mean thousands of daily patient interactions where harmful misinformation reaches clinical records and influences care decisions (a rough back-of-envelope sketch follows below).

Once embedded, AI becomes difficult to extract. One of the main benefits of AI in healthcare is expanded access and efficiency: more clinician time, more patients getting care. But with more interactions comes more risk. Healthcare organizations are famously resistant to disrupting workflows, which means that once AI is folded in, removing it becomes all the more difficult, even if problems emerge later.
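To make the scale point concrete, here is a minimal back-of-envelope sketch in Python. The encounter volume and audit coverage are hypothetical figures chosen purely for illustration; only the roughly 1.4% hallucination rate and roughly 40% harmful fraction come from the study mentioned above.

# Back-of-envelope sketch of how a "small" error rate scales.
# Encounter volume and audit coverage are illustrative assumptions,
# not figures from any specific health system.
daily_encounters = 100_000     # hypothetical large health system or network
hallucination_rate = 0.014     # ~1.4% of AI transcriptions, per the study cited above
harmful_fraction = 0.40        # ~40% of those hallucinations judged harmful or concerning
audit_coverage = 0.05          # assume only 5% of notes ever get human review

hallucinated_per_day = daily_encounters * hallucination_rate   # 1,400
harmful_per_day = hallucinated_per_day * harmful_fraction      # 560
missed_per_day = harmful_per_day * (1 - audit_coverage)        # 532
harmful_per_year = harmful_per_day * 365                       # ~204,000

print(f"Hallucinated notes per day:    {hallucinated_per_day:,.0f}")
print(f"Harmful or concerning per day: {harmful_per_day:,.0f}")
print(f"Likely never reviewed per day: {missed_per_day:,.0f}")
print(f"Harmful notes per year:        {harmful_per_year:,.0f}")

Even with generous assumptions about human review, most of those harmful notes would reach the record unexamined, which is exactly the quiet compounding described above.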

So far, organizations have had to rely on AI vendors to self-regulate, attesting to their own accuracy and safety. Even if the AI companies try their best, the conflict of interest is obvious. We're essentially asking manufacturers to police themselves while we embed their products deeper into clinical workflows.

Obviously, AI and asbestos are not the same thing: asbestos’ danger was inherent and its upside limited. There’s no safe way to use it, and cheaper insulation isn’t worth people getting sick.

AI, on the other hand, not only keeps getting better but also offers unmatched potential benefits for patients and providers alike.

That’s why the answer isn’t “No more AI.” It’s thoughtful implementation, ongoing assessment, and a commitment to continuous improvement.

The Promise of the Airbag Path

AI is the way forward in healthcare, one way or the other. There’s simply too much potential for higher-quality, lower-cost care and too high a market incentive to stop the train now. 

But we need to keep in mind that healthcare is not a normal domain for AI deployment. Healthcare is where the stakes are highest. Errors don't just cost money, they cost lives. 

That’s why healthcare AI needs to meet the highest safety standards, even if that means slower adoption or somewhat higher deployment and maintenance costs.

It’s hard to believe now, but airbags and seatbelts weren’t always standard equipment in cars. And they weren’t added just because manufacturers voluntarily decided it was a good idea. It happened because regulators recognized that certain technologies, in high-risk contexts, need mandatory safety standards. Independent testing. Rigorous certification before deployment. Post-market surveillance. 

The same logic should apply to AI in healthcare.

Independent quality assessment is a must-have. Before an AI tool is deployed at scale in clinical settings, it should undergo rigorous, independent evaluation, not just self-attestation from vendors. This doesn't mean perfection. Airbags aren't perfect, but they're engineered with known failure modes and safety margins built in.

Continuous monitoring should be standard. Once deployed, AI tools need systematic oversight: tracking outcomes, flagging unexpected performance changes, monitoring for bias or failure patterns. Again, this monitoring should be independent.

Clinicians and organizations require transparency. Every healthcare provider using AI should understand what the system is doing, what its limitations are, and where it tends to fail. Users must also feel empowered to raise concerns and flag issues without being labeled “anti-AI” or dismissed for purely business reasons.

AI itself can scale oversight in novel ways. AI systems can be trained to detect anomalies in other AI systems. In other words, AI can “QA” AI. These systems can audit AI outputs and agents for care quality, regulatory compliance, scope of practice, and more, allowing oversight at a scale impossible with human auditing alone. But, again, independence is key.
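As a rough illustration of what that could look like, here is a minimal sketch of an "AI audits AI" review loop in Python. Everything in it is hypothetical: the function names, the finding categories, and especially call_audit_model, which stands in for an independently governed auditing model rather than any real API.

from dataclasses import dataclass

@dataclass
class AuditFinding:
    note_id: str
    category: str   # e.g. "hallucination", "scope_of_practice", "billing"
    severity: str   # e.g. "low", "moderate", "harmful"
    detail: str

def call_audit_model(note_text: str) -> list[dict]:
    """Placeholder for an independent auditing model.

    A real implementation would send the AI-generated note (ideally with the
    source transcript) to a separately governed model and get back structured
    findings. Returning an empty list keeps this sketch runnable.
    """
    return []

def audit_note(note_id: str, note_text: str) -> list[AuditFinding]:
    return [AuditFinding(note_id=note_id, **f) for f in call_audit_model(note_text)]

def audit_all(notes: dict[str, str]) -> list[AuditFinding]:
    # Review every AI-generated note rather than a sampled fraction;
    # that coverage is the scale advantage over human-only auditing.
    findings: list[AuditFinding] = []
    for note_id, text in notes.items():
        findings.extend(audit_note(note_id, text))
    # Anything judged harmful gets escalated to a human reviewer.
    return [f for f in findings if f.severity == "harmful"]

The point is not this particular structure. It is that independent, automated review of every interaction is feasible in a way human spot-checks never will be, provided the auditor is governed separately from the system it audits.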

The airbag path isn't about saying no to AI in healthcare. It's about saying yes, but with safeguards. With independent verification. With ongoing vigilance. It's about recognizing that the most powerful technologies need the most rigorous governance.

And like airbags in cars, we’ll one day look at such safeguards and see them as obvious. We’ll wonder: How could we ever deploy AI in a healthcare setting without them?

Bottom Line: AI Safety Isn’t Anti-AI

AI is already deeply embedded in healthcare. And on the whole, I’m convinced that’s a win-win-win for patients, providers, and healthcare organizations.

But AI isn’t perfect, and there’s no domain as fraught as healthcare. While the potential benefits are immense, we can’t let our excitement rush us into fitting AI in everywhere without weighing the long-term risks.

Healthcare is 100% not the place for "moving fast and breaking things." Because some things can’t be fixed so easily.

This tech can absolutely make care more effective and more accessible for more people, and that's undeniably a noble goal. But we can't let FOMO undermine safety, especially in healthcare. Let’s make sure “AI everywhere” in healthcare leads us down the airbag path, even if it takes a little more effort.
