Building an AI Governance Strategy with Mila Orlovsky of Atlantic Health System
This is part of our executive insights series where Elion CEO Bobby Guelich speaks with healthcare leaders about their tech priorities and learnings.
Role: Digital Outcomes and Emerging Technologies Manager
Organization: Atlantic Health System
Can you share a bit about your role at Atlantic Health System?
My focus at Atlantic is on creating the frameworks, methodologies, and governance needed to manage our AI portfolio. In other words, I’m building out the processes we use to evaluate emerging technologies, monitor performance, and measure the impact of AI across the organization.
How would you describe the current state of AI governance at Atlantic and the approach you’re taking?
Like many health systems, Atlantic has historically explored AI solutions with a strong innovation mindset, often driven by promising vendor capabilities. As we mature in this space, we’re working to evolve from a technology-first to a problem-first mindset, starting with clearly defined clinical or operational challenges, and then identifying solutions that best address those needs.
My role is to help build a more structured and strategic approach to AI adoption, and to help surface the impact that AI capabilities deliver to the organization. That includes defining evaluation criteria, establishing success metrics upfront, and creating a centralized governance framework to ensure safe and measurable implementation. We’re also focused on identifying risks early and embedding monitoring practices to ensure long-term performance and alignment with organizational goals.
What have been your initial focus areas?
We began by formalizing a standardized intake and evaluation process for AI vendors, with a focus on real-world performance, risk assessment, and implementation readiness. In parallel, I’ve been mapping our existing AI capabilities to create a system-wide view of current use cases and ensure alignment with strategic priorities.
As we continue to evolve our governance approach, we’re enhancing the evaluation framework to include areas like monitoring, auditing, model operations, and ROI measurement, so that we can ensure our AI implementations are delivering measurable, sustainable impact.
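To make the monitoring piece concrete, here is a minimal sketch of the kind of check such a framework might automate: comparing audited weekly accuracy against the accuracy a model demonstrated at validation. Every name, threshold, and number below is a hypothetical placeholder, not Atlantic’s actual tooling.

```python
from dataclasses import dataclass


@dataclass
class WeeklyAudit:
    week: str
    accuracy: float       # share of audited model outputs judged correct
    cases_reviewed: int


def flag_performance_drift(audits: list[WeeklyAudit],
                           validated_accuracy: float,
                           tolerance: float = 0.05) -> list[str]:
    """Return the weeks where audited accuracy fell more than `tolerance`
    below the accuracy the model demonstrated during validation."""
    floor = validated_accuracy - tolerance
    return [a.week for a in audits if a.accuracy < floor]


if __name__ == "__main__":
    # Hypothetical audit results for a model validated at 91% accuracy.
    audits = [
        WeeklyAudit("2025-W01", 0.90, 120),
        WeeklyAudit("2025-W02", 0.88, 115),
        WeeklyAudit("2025-W03", 0.83, 130),  # below the 0.86 alert floor
    ]
    print(flag_performance_drift(audits, validated_accuracy=0.91))
    # -> ['2025-W03']
```

In practice the alert would feed a review process rather than an automatic rollback, but the point stands: the threshold and the response are defined upfront, not after a problem surfaces.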
What challenges have you run into when evaluating vendor technologies?
Evaluating AI tools requires both a technical and an operational lens, and one consistent challenge is achieving the level of transparency needed to make informed decisions, particularly with solutions built on foundation models. Vendors often present information in a high-level or sales-oriented way, which can make it difficult to assess critical details such as how a model was trained or customized for a specific use case or cohort.
We approach evaluations differently depending on whether the algorithm was built in-house or incorporates third-party or foundation models. For in-house models, we want to understand how the model was trained, how representative its training data is of our patient and user population, and how the model was validated. For third-party integrations, we focus on how the model is monitored over time, how accuracy is maintained, and what governance practices the vendor follows. A rough sketch of how these two tracks might be codified follows below.
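As an illustration only, here is one way the two evaluation tracks could be encoded as a structured intake checklist. The questions mirror the ones above, but the structure and names are hypothetical, not Atlantic’s actual intake form.

```python
# Illustrative sketch: the two evaluation tracks as a structured checklist.
# Keys, field names, and wording are hypothetical placeholders.
EVALUATION_TRACKS: dict[str, list[str]] = {
    "in_house": [
        "How was the model trained, and on what data?",
        "How representative is the training data of our patient and user population?",
        "How was the model validated?",
    ],
    "third_party": [
        "How is the model monitored over time?",
        "How is accuracy maintained as the underlying model evolves?",
        "What governance practices does the vendor follow?",
    ],
}


def checklist_for(model_origin: str) -> list[str]:
    """Pick the evaluation track based on where the model was built."""
    return EVALUATION_TRACKS[model_origin]
```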
We encourage vendors to view transparency as a long-term investment and to help us set the right expectations with our business and clinical stakeholders. Withholding risks may help close a deal, but if the solution doesn’t deliver a seamless user experience, underperforms, or breaks users’ trust, it won’t be adopted and will eventually be dropped.
On our side, we also work with clinical and operational stakeholders to set realistic expectations about what AI can and cannot do, and about how we can measure a product’s impact.
What advice would you give to others building an AI governance strategy in a health system?
Start by understanding the main clinical, operational, and business problems your organization is facing, not the technological solution. It’s essential to assess your current baseline, map out existing workflows and users, and identify where the real bottlenecks are. Then assess what alternatives exist to improve the current state and which solution fits your problem best.

Technology today is at a point where, if you know what you want to solve, there is technology to solve it. Operational readiness, monitoring and measurement capabilities, and long-term usability and workflow integration are the bigger hurdles. You need to know what you’re solving for, how to measure success, and how to build trust and acceptance among end users. These processes take time and require leadership commitment, stakeholder engagement, proper infrastructure investment, and a focus on process rigor.
Any final thoughts for health system leaders?
AI literacy is essential for everyone involved. Clinical, operational, and IT teams need a clear understanding of what AI can and can’t do. The AI used in the industry today is, at its core, based on statistical models that learn patterns from data. These are not deterministic systems, which means they can make mistakes or hallucinate. In a clinical setting, those mistakes can have serious consequences.
As much as we’re motivated to incorporate AI into daily work, and while the potential is significant, it’s critical to be deliberate about when and how it’s applied. Leaders need to ensure teams are not only equipped to use AI effectively, but also understand its limitations and risks and know how to apply critical thinking when evaluating and working with AI tools. AI education, hands-on training, simulations of AI-enhanced workflows, retrospective analyses, and a realistic mindset are key to ensuring AI is used responsibly and effectively across the organization.