Building a Culture of Embracing AI in Healthcare with Keith Morse of Stanford Medicine Children’s Health
This is part of our weekly executive insights series where Elion CEO Bobby Guelich speaks with healthcare leaders about their tech priorities and learnings. For more, become a member and sign up for our email here.
Name: Keith Morse
Role: Clinical Associate Professor of Pediatrics; Medical Director of Clinical Informatics
Organization: Stanford Medicine Children’s Health
Speaking from the perspective of your role at the Children’s Hospital, what are your key tech-related priorities for the rest of 2024 and into 2025?
We are mostly focused on expanding access to, and use of, the core GenAI tools that we’ve launched in the last year:
A PHI-compliant chatbot
Epic’s in-basket drafting tools
We have had those in place, but we are now moving from the pilot phase into broader implementation, with all the learnings and organizational maturity that come along with that.
I’d be curious to dive into some of your experiences with each of those. Can you start by telling me more about the PHI-compliant chatbot?
Folks in our organization have been clamoring to experiment with GenAI in a way that’s approved from a security and compliance standpoint, so that’s what we’ve rolled out. It’s a general-purpose ChatGPT-style tool that we have structured to manage PHI. It’s built on OpenAI models provided via Microsoft Azure and branded as “AskDigi.” Internally, it’s managed by our IT group, blessed by our security team, and available to our whole organization.
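To make that concrete, here is a minimal sketch of what a wrapper like this might look like, using the standard Azure OpenAI Python client. The endpoint, deployment name, and system prompt are hypothetical, not the actual AskDigi implementation, which hasn’t been published; much of the PHI compliance comes from the deployment model (traffic stays inside the organization’s own Azure tenant under the appropriate agreements) rather than the code itself.

```python
# Minimal sketch of an internal chatbot wrapper over Azure OpenAI.
# All names here (endpoint, deployment, system prompt) are hypothetical;
# this is not the actual AskDigi implementation.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # org-managed Azure resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

SYSTEM_PROMPT = (
    "You are an internal assistant. You are not a substitute for clinical "
    "judgment, and you must not be used for medical decision-making."
)

def ask(question: str) -> str:
    """Send a single-turn question to the organization's model deployment."""
    response = client.chat.completions.create(
        model="gpt-4o",  # the name of the Azure *deployment*, configured by IT
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```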
How did you go about setting—or not—guardrails on how people could use it for their work?
We’ve made it broadly available, making it clear that the tool isn’t intended to support medical decision-making and that human oversight is always required. The best way to train people is to have them just use it and gain experience, and you can’t do that if you don’t have something to play with. So making it available is the priority.
AskDigi itself is not intended to support specific use cases; it’s intended to increase organizational insight. But we also recognize that there are probably pockets of value across the organization that can be unlocked just by having access to AskDigi. We don’t know what they are yet, but letting people use the tool will help us identify them.
Is this primarily scoped just to administrative use cases or can clinicians use it as part of their clinical workflows?
At this point we are very clear in our training and the disclaimers that this tool is not intended for medical decisions. We know it’s not appropriate; OpenAI, Microsoft, everybody says it’s not appropriate. Clinical use will be one of the main milestones of the next 10 years, but we’re certainly not there right now.
What about the other priorities you mentioned?
On the Epic in-basket tool, we and CHOP were the first two pediatric organizations to participate in this pilot with Epic. We’ve had it for about eight months now and we have it rolled out to several of our specialties as well as our general pediatrics group.
Our results so far are that the tool is less helpful for providers and more helpful for nurses, MAs, and other front-office staff. We’re finding that doctors don’t get as much benefit from it, because by the time an in-basket message ends up in a doctor’s inbox, there’s usually some nuance to it that a straightforward tool can’t process. But for front-office folks, it’s super helpful.
On the physician side, do you feel like you see a path toward the AI being able to handle more?
Yes, to an extent. I can see primary care being a relatively manageable arena. But there are a few problems:
First, the model has access to a certain amount of patient information, like problem lists, medications, and sometimes the most recent encounter note. But I don’t see a future in the next couple of years where we upload enough EHR data for it to actually get a sense of who this patient is.
The second issue is that many questions have to do with very routine, mundane things, like, “Hey, I have to reschedule my appointment. Do you guys have availability next Wednesday with Dr. So-and-so?” There’s no way GenAI knows that.
The third area is specialty care. When we first started the pilot, Epic actually wasn’t making it available for children, but we have an OB practice as well, so our initial pilot group was our maternal-fetal medicine specialists, such as in our IVF clinics. It did not do well with IVF medicine; it’s so specialized, so nuanced. That sort of thing is asking a lot.
What is your organization focusing on as you look to the future of AI?
We don’t think it makes sense to focus too much on the shortcomings of the tools today, since most of these will be resolved in the next several years. Instead, let’s aim our organizational planning toward 2026, when these tools will be able to do most of the things that we are wishing and hoping they could do now.
For example, how do we set up our organization to handle a tool that is legitimately high-functioning in the ways that we want it to be? The big turning point is going to be when we think a tool is capable enough to support medical decision-making in a non-trivial fashion. We’re going to get there at some point in the relatively near future.
Our ability to effectively take advantage of that technology is not going to be a technology problem. It’s going to be a people and process problem. Our organization has to get its governance structure in order so that when the tool can do a thing, we can bless it to do that thing within our walls.
We also recognize that our organization, like most organizations, takes a while to change. It’s probably going to take us two years to legitimately set up and iron out our administrative processes. So while we are doing that, let’s aim those processes to be capable of handling a tool that can do medical things in addition to administrative things.
Can you tell me on a practical level what you’re doing around people and processes to make sure your team is ready for the future of GenAI?
Take the case of the OpenAI chatbot. We have to get people up to speed on how to use this thing in general, and how to use it specifically for their business processes.
We recognize that everybody is coming to this from a different place. There are people in our organization who are very savvy and have been badgering our group for months about getting access for themselves and their teams. There are also folks who have never heard of ChatGPT. They are all part of our organization, and it’s our responsibility to upskill all of them to use this technology appropriately.
We’re tackling that in a couple of ways:
First, recognizing that this is a gap for everybody; nobody has more than a couple of years of experience with GPT. We, along with the school of medicine and the adult hospital, are compiling what we’re calling our AI Academy. It’s essentially a repository of trainings and learnings where folks who are starting from square one can go. We’re also putting together prompting workshops, both internally within the IT department and for broader audiences.
Second, we’re seeing the value of local super-users. For example, we just wrapped up a project where we used GPT to label safety incident reports for certain attributes. That work was led by one of our quality assurance managers. It’s getting published, and we’ll be moving on from the pilot there, but the takeaway is that this quality assurance manager is now a local expert and evangelist for what GPT can do in her specific area. Having those types of folks sprinkled throughout the organization is going to be key to getting the message across.
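For a concrete picture of what that kind of labeling project can look like, here is a minimal sketch using the same Azure OpenAI setup as above. The attribute names and prompt are invented for illustration; the team’s actual approach is described in their publication, not here.

```python
# Hypothetical sketch of labeling safety incident reports with GPT.
# The attributes and prompt are invented for illustration; the team's
# published approach may differ.
import json
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

ATTRIBUTES = ["medication_related", "equipment_related", "near_miss"]  # illustrative

def label_report(report_text: str) -> dict:
    """Tag one incident report with a true/false value for each attribute."""
    prompt = (
        "You label safety incident reports. For each attribute in "
        f"{ATTRIBUTES}, decide whether it applies to the report below. "
        "Respond with a JSON object mapping each attribute to true or false.\n\n"
        f"Report:\n{report_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # Azure deployment name, as in the earlier sketch
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # constrain the output to JSON
    )
    return json.loads(response.choices[0].message.content)
```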
It’s not just about the people who are putting prompts into AskDigi. People on the marketing team have never talked about GenAI as a concept, so when we put out these announcements or educational materials, it’s new for them. For our privacy and compliance folks, there are a ton of issues bubbling through the organization now that get at the nuances of what GenAI is and is not. So those teams are getting up to speed on these concepts, even if they aren’t actually the ones using the tools themselves.
Anything practically that you’ve learned either from your training efforts or governance and policy work that’s worth sharing?
I think it’s basic stuff, like not trying to do too much. Oftentimes, when we sit down to think about what our AI policy is going to be, we immediately get lost in this huge world of “what do we consider to be AI?” Picking a single example and focusing on “Okay, how would we process this one example?” has been helpful for getting our ducks in a row in a small area, and then we can build off of that.
We think about AI coming into our organization in three broad buckets:
We build a thing ourselves.
We buy a specific GenAI tool.
A tool that we already have tacks on a piece of AI functionality.
Managing each of those is, arguably, a separate process, and trying to come up with something that can handle all of them on day one is tough. We recognize that there are different dimensions of maturity: technological, training, governance, etc. We know where we are today and where we would like to be in the future; it’s probably not reasonable to expect that we’ll get there in the next six months. It’s going to take a ton of work to get from maturity stage one to maturity stage two, even if we’re shooting for maturity stage five in the next couple of years.