Executive Insights
May 2, 2025

Closing the Digital Divide with Mark Sendak of Duke Institute for Health Innovation and Health AI Partnership

Bobby Guelich
CEO, Elion

This is part of our executive insights series where Elion CEO Bobby Guelich speaks with healthcare leaders about their tech priorities and learnings. For more, become a member and sign up for our email list here.

Role: Population Health & Data Science Lead at Duke Institute for Health Innovation and Co-Founder of the Health AI Partnership

Tell us a bit about your background and the Health AI Partnership (HAIP).
I’ve been at Duke for 15 years—originally as a medical student, then transitioning to innovation work. Over the years, I’ve focused on developing and implementing innovations, including AI, machine learning, care delivery models, telehealth, and more. In 2021, we launched Health AI Partnership (HAIP) with support from the Gordon and Betty Moore Foundation to scale AI implementation capabilities across healthcare systems.

What does HAIP do in practice?
We started by bringing together leading organizations to co-develop best practices for implementing AI—to help work through the technical problems so many of us face internally with developing, validating, and monitoring models, as well as all the change management, like how we measure ROI and train frontline clinicians. It started with the goal of just creating great content, but we quickly realized that wasn’t enough. So we launched a technical assistance program that we call the HAIP Practice Network. Organizations apply to the program with a prioritized use case, and if they're accepted, we work closely with them to navigate the AI product lifecycle. We meet with each site multiple times a month to help evaluate products, negotiate with vendors, design pilots, and more.

What kinds of organizations are you working with?
We focus on delivery organizations, and we very intentionally try to work across the board, including academic medical centers, county hospitals that are AI leaders in their own right, and vertically integrated systems. Then, through the HAIP Practice Network, we support FQHCs and community hospitals.

Within our partner organizations we focus on working with a highly interdisciplinary team. Healthcare is historically very siloed. AI specifically demands interdisciplinary collaboration in a way that defies some of those professional boundaries.

You’re in a unique position to see where organizations stand with AI adoption and what they’re struggling with most. What are the most common stumbling blocks you see?
The biggest gaps show up in implementing AI solutions and interpreting information about them. For example, take a widely known use case like sepsis modeling. At high-resource institutions, there’s a struggle with implementation and change management. At low-resource ones, they often lack the expertise to interpret what they're looking at. We shared a vendor disclosure form with our practice network sites, and one FQHC told us, “If I ask a vendor to fill this out, and they give me answers, I don't even know what the answers mean.” That’s the scale of the digital divide.

What you’re offering with HAIP is incredibly high-touch. How do you think about scale?
That’s the hardest part. Supporting five sites deeply is incredibly meaningful, but there are 1,600 FQHCs and 6,000 hospitals in the U.S. I’d confidently guess 90% are flying blind on AI procurement and governance.

So how do we get started with improving this?
There are analogies; we’ve done this before. A billion dollars was set aside for regional extension centers that supported community and rural organizations in implementing EHRs, with one regional extension center in every state. We've also done it with telemedicine: there are HRSA-funded telehealth centers of excellence and telehealth resource centers in many states. We have to start implementing hub-and-spoke models, where we centralize shared services that can be diffused to support organizations that can't take this on internally. In the current political environment, I think things are going to end up at the state level, where states will have to face this.

In your opinion, are you seeing real evaluation of AI happening, or is it more about CYA and optics?
It depends on the use case. As a developer of classification models that predict whether or not someone has a given condition, I feel very strongly about the need to validate, monitor over time, and locally evaluate in every new implementation. This is what we do with the vendors we license models to.

On the flip side, with something like AI scribes—my wife is a pediatrician, and she loves hers—there might not be a formal evaluation, but it reduces her pajama time and she’s happier. That counts. I'm not some purist who thinks you have to have multi-site validation and FDA clearance for every technology. You have to be pragmatic and recognize the constraints that organizations are facing.

Are there any other key challenges you commonly see when it comes to AI evaluation?
We have to make it easier to compare multiple technologies in the same category. I think it's very poor form that the most widely adopted AI solutions are oftentimes not the best in class. So how do we facilitate diffusion and adoption of best-in-class solutions, while also understanding that best in class may differ from site to site? For example, I don't think there's a single best sepsis model for everyone to adopt.