
AI and Clinical Practice—Improving Health Care Quality and Equity

In this Q&A, JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, interviews Kedar S. Mate, MD, an internal medicine physician, President and Chief Executive Officer at the Institute for Healthcare Improvement, and faculty at Weill Cornell Medical College, to discuss AI’s role in health care quality and approaches to improving health equity.



[This transcript is auto-generated and unedited.]

- Advances in modern healthcare are often stymied by the inability to translate evidence-based care into improvements in health for all patients. Artificial intelligence could play a role in alleviating these current challenges in optimizing clinical care, but only if stewarded appropriately. And that's the tricky part. Like other technological innovations before it, artificial intelligence offers extraordinary promise for medicine, but the technology itself will introduce new challenges. I'm Dr. Kirsten Bibbins-Domingo, and I'm the editor-in-chief of JAMA and the JAMA Network. This conversation is part of a series of videos and podcasts hosted by JAMA that explore the issues surrounding the rapidly evolving world of artificial intelligence in medicine. I'm joined today by Dr. Kedar Mate. Dr. Mate is the president and CEO of the Institute for Healthcare Improvement, or IHI. He is also a general internist and faculty member at Weill Cornell School of Medicine. Dr. Mate's scholarly work has focused on healthcare quality, strategies for achieving large-scale change, and approaches to improving health equity and value. Dr. Mate, thank you for joining me today.

- It's a pleasure to be here. Thank you for having me.

- You and I know each other, and I hope we can do this on a first-name basis, if that's okay with you.

- Oh, absolutely, please. Thank you so much.

- Wonderful. Wonderful. Well, I'm really thrilled that you're joining me here today. So tell us, what is IHI?

- The Institute for Healthcare Improvement, the organization that I have the privilege and honor of leading, has been around for more than 30 years. It's an institution that has focused on improving quality, patient safety, experience, and value, and increasingly health equity, for populations all over the world. We started here in the US 30-plus years ago. It was founded by Dr. Donald Berwick, whom many in your audience may know as the former administrator of the Centers for Medicare & Medicaid Services. When Don started IHI, he was very focused on healthcare quality in the United States. But very quickly, IHI became an international organization, starting to work in the National Health Service in the UK. And now IHI works in over 20 countries around the globe, trying to improve the quality of the care that we receive at the point of care and building the systems and structures to help us get there.

- Wonderful. So when I think of IHI, I think of the triple aim: we need to focus on the experience of care, improving the health of populations, and doing that at a lower cost per person. But you've written about expanding that triple aim out, and I think most recently wrote in JAMA about the quintuple aim. So tell us about the framework for thinking about quality improvement that IHI takes, and particularly your push toward equity.

- Yeah, absolutely. So, yes, the triple aim. When we originally thought about it back in 2007, 2008, we were looking at what were broadly thought of as issues in American healthcare that were in tension: the idea that creating better outcomes or better quality often came at greater expense or greater cost, and might also compromise access to care. In fact, that was how I think American healthcare conventionally thought of it, this idea that quality, cost, and access were in competition with each other. And the real revelation was to say, no, these things are not necessarily competing with one another; in fact, we can make care better and safer, make the care experience better, and do so at lower cost. The general thesis was that better quality would result in lower cost of care, not the opposite. So that was the original contribution of the triple aim. Very quickly, people wanted to add additional aims. The first one that got added was the idea of workforce joy, wellbeing, meaning, purpose, and safety, this whole set of concepts around making sure that the people who give care, including the distributed caregiver experience, were also being considered as we thought about better care outcomes, better care experience, and lower cost. The next addition was equity: the idea that we could get to those big-picture goals around quality, safety, effectiveness, and better outcomes if we pursued them through an equitable path, which would be more sustainable, more durable, and in fact much more achievable. So those are the reasons why we added equity as a fifth dimension to the quadruple aim, and we now talk about this as the quintuple aim, or the five-part aim, for achieving better health system performance.

- Well, I like it. I like that you're reminding us that the original intention was that these goals are not in conflict with one another. People often say, well, we can do this, but equity is a harder piece to add to it. And I can see the rationale for incorporating it into the overall aim structure within that rubric: they're not in conflict; in fact, you need them all.

- And the other thing is that if we don't put equity right into the design of our health systems to begin with, we leave it as an afterthought, we leave it as a sidebar or a side consideration. And when we turn our attention to AI, I think that's gonna be a foundational question for us to think about as we start to see these new technologies enter our ecosystem, will we pay attention to the equity considerations right from the get go or are we gonna try to add them after? And I can promise you that if we try to do it later on, it's gonna be a lot harder than if we just start with the premise that in fact, we can actually build these systems to be more equitable to begin with.

- Yeah. Let's turn to AI a little bit. One of the things in these conversations I've been having that has been most compelling is people talking about AI's ability to really scale: that if we do it well, we're actually talking about increasing access and reducing variability. As somebody who's thought about scaling improvements, how do you think about this technology, which is clearly going to shape what we do in clinical practice, through your lens of improving health systems?

- Well, I think there's almost no question. There are lots of people who are not getting the kind of quality care that they could otherwise be receiving, and AI-enabled solutions or AI-enabled technologies might allow us to reach a number of places that we haven't fundamentally been able to reach before. So think about rural, remote, underserved communities, not only in the United States but around the world in middle- and low-income countries. AI-enabled image reading or pathology could do a lot of good in those kinds of contexts, even with what we already have. Anything that's highly dependent on imagery or repeatable large data sets can, or may ultimately, be done better, more efficiently, and in a more timely fashion by AI. There's also the notion of timeliness of care, the speed with which these technologies operate. You can experience that for yourself by using ChatGPT or one of the other chatbots out there: type in a series of questions, and within seconds you're getting volumes of information back. Imagine a clinical environment that hasn't had access to a specialist being able to ask tools like that for knowledge and information, just as a starting point to aid clinical decision making. I think that will really help us in the coming years to begin distributing the kinds of knowledge and expertise that we're looking for. IHI has a Leadership Alliance, and one of our radical redesign principles in the Leadership Alliance is this idea of moving knowledge, not people. AI allows us to move knowledge at massive scale, and it reduces the potential waste of moving people or infrastructure around, which would be far more cumbersome and far more costly. So this idea of moving knowledge, not people, is put on steroids with AI technologies and tools.

- So it sounds like you're optimistic. Tell me how you see that path forward, especially as somebody who's really championed the principles of quality and equity.

- Look, I think there are lots of opportunities around AI making care safer, better, and higher quality. And of course, there are some risks that we're gonna have to manage and mitigate. So one way of thinking about this is to consider the dimensions of quality. The IOM, now the National Academies, defined quality along six dimensions, which are probably well understood by the audience: safety, timeliness, efficiency, effectiveness, equity indeed, and patient-centeredness. We could go through each of those and talk about how AI is potentially gonna affect the quality of care that we might imagine receiving. For something like safety, I think the jury is very much still out. These are still pretty early days. Even though there's a lot in the news and a lot of literature coming out very rapidly around AI, there's still a lot of speculation and uncertainty around whether or not it will improve, for example, patient safety. In JAMA, there was an article by colleagues and friends Hardeep Singh and Prathit Kulkarni talking about the possibility of AI helping us with diagnostic challenges. Diagnostic error, diagnostic failure, or diagnostic delay is a massive problem in healthcare; I don't know that we even fully appreciate its scope. IHI has a safety think tank within our organization, the Lucian Leape Institute, named after Lucian Leape, one of the founding fathers of patient safety in the United States. That institute has decided to tackle this problem of trying to understand what the risks and opportunities of AI might be with regard to patient safety. On the opportunity side, I think people think of better handoffs, better communication, information not falling through the cracks, fewer missed details, better differential diagnoses.
Maybe, for the first time, the opportunity to fundamentally eliminate drug-drug interaction problems, or adverse drug events around drug-drug interactions. There's some really interesting work that's been going on for a while now on sepsis, still a leading cause of death inside hospitals. The work that Suchi Saria and Bayesian Health have done on early warning scores for sepsis has produced pretty compelling findings in prospective studies. In a large study of 600,000 patients, they saw reductions in mortality from sepsis by using an algorithm to help predict and anticipate sepsis earlier, with the predictions verified by clinicians. So the AI is not operating on its own; it's still working with clinicians to help power their work, but with reductions in mortality, length of stay, and other measures. So on the opportunity side, there's a lot of promise in AI. On the risk side, there's a lot there too. Probably one of the biggest risks is complacency. We may find that as the AI gets better and better, on the one hand we wanna trust it more; on the other hand, there's some risk of losing clinical acumen and skills as we increasingly trust the AI more and more. And then of course there's the risk of the AI getting it wrong, which, at this time, it does relatively often in common circumstances. The final one, and probably the biggest, is the possibility of introducing bias. Especially when it comes to this question around equity, the possibility of bias is enormous. If the training sets, the way we build these models, and how we train them are based on existing ways in which we work, existing ways in which our societies and our medical systems are structured, there's a great risk of introducing or perpetuating the biases that we've been experiencing as a system for generations now, in some ways for hundreds of years, biases that we have to deliberately design out of the AI.
And that's, I think, a really important part of how we're gonna succeed if we're actually gonna build AIs that are mindful of health equity in the future.

- I know when I've talked with radiologists who've been using systems enabled by artificial intelligence, they also describe areas where they feel they know better and will simply bypass the tool. From your vantage point, how is this technology substantially different, given those common ways clinicians get used to ignoring or avoiding technologies designed to help us do our jobs better? Is there some reason to believe we're in a different era now, or is this just more of the same?

- Well, this is a both/and answer to that question. I think it is a little bit more of the same, but there is something quite different about this technology. So you're right, we've had what might be described as analog algorithms for as long as I can remember, and they've actually made care less safe for specific populations over time. Even in the digital era, let's say pre-generative AI, pre-large language models, we've had algorithms that have been demonstrably biased. There's the paper by Ziad Obermeyer showing how digital algorithms, used on hundreds of millions of patients, systematically underestimated risk among medically complex and Black patients. Because the algorithms were trained on cost data, and because those populations used fewer health services, the algorithms underestimated the safety risks and complexity in those patients. The difference, I think, between what we're seeing with generative AI and large language models is the way in which it expresses itself. Many of our clinical decision support tools are explicitly designed to pop up in a window or provide an alert and say, hey, pay attention to me, there's a potential risk of something bad happening here. The way generative AI talks to you is qualitatively different. It speaks with authority, it reaches conclusions that are a bit more definitive, and the confidence with which it asserts its view, whatever that view may be, feels different; it also feels enormously useful because of that. And so you see this sort of bottom-up adoption of generative AI tools, the chatbots and ChatGPT and other things, in part because they feel immediately useful, because they're solving our immediate problems. I think it's probably best to recognize that the AI tools we have right now are, at least for now, adjunct tools.
At best, today, they're capable of perhaps supporting human clinical reasoning and the shared decision-making processes that occur between patients and clinicians, but they're not a replacement for human clinical reasoning and judgment, at least at this time. So I think that's the differentiator: this qualitative way in which AI positions itself. But again, I think there's a challenge around this. We're gonna see, as the training models get deeper and richer and smarter, as more training data is entered, as the AI gets more and more sophisticated and the error rate declines to fractions of a percentage, that our confidence could, and probably should, grow in the AI tools that are coming into our clinical practice environments.

- Yeah. So what I hear you saying is a reminder that we're in a rapidly evolving time where the improvements are real. I would say the authoritative voice used by many of these tools is what makes them useful cognitively, in the way clinicians think, but I think it's also the risk right now, as they are not quite as accurate as we might hope, and not really designed for all of the use cases where we've tried to apply them, right?

- That we're getting excited about. And therein lies one of the many potential ways of mitigating the risk: putting a clear signal on the AI conclusion that says, this is an AI conclusion, by the way; it was created in this way, with this set of inputs, transparently understood in a way that we haven't had for, for example, the VBAC algorithm or the GFR algorithm. It's not that that wasn't available to us; it's in the literature, you can dig through it and find it, but it was hard to find, and for the most part we forgot that history, candidly, unfortunately. We can be a lot better about this in the future. We can say: this is the algorithm, this is how it's composed, these are the elements in it, this is the training data that was utilized. With that level of transparency, the relative value of the AI tool becomes more apparent as it affects our clinical decision-making. And then we leave it to the human clinician and the patient, frankly, both parties in the therapeutic dyad, to navigate how useful that recommendation, suggestion, or idea set from the AI is. That's, I think, how we start to move forward with AI, calibrating toward a future in which AI is part of the encounter, but not making all the decisions on our behalf.

- So the National Academies convened a group, of which you are a part, to develop an AI code of conduct.

- It's not the only code of conduct or set of principles being written, so I don't wanna overstate what it's gonna end up being. I do think it's gonna try to establish or articulate a set of principles, guidelines, maybe some guardrails, depending on where we end up in our deliberations. The group is keenly aware of how fast-moving the field of AI really is. So anything we put out is out of date within weeks

- Yeah.

- of when we put it out. So I think there's this desire to move what we know out into the world sooner rather than later. AI is not yet in a position to, in any way, replace human clinical judgment or shared decision-making. But this idea was also referenced in our conversations: clinicians who use AI, and systems that use AI tools, will likely have a decisive advantage over those that don't. And I think those two things can live in tension. We're not ready to replace clinical reasoning, but as an adjunct or support, AI will almost surely present those who can use it successfully with advantages. And the other thing I'll just mention is that for patients who use AI, there are actually significant benefits too.

- I've heard that shorthand: AI is not going to replace a doctor, but a doctor using AI is likely going to replace doctors not using AI. But I do think it's an important point you're raising that, with a technology that allows scale and accessibility, this means the accessibility of knowledge for patients as well. And I think it's at that patient interface,

- That's right.

- that it'll also, I think, radically change what happens in the clinical encounter. And as with most things, if we can shape it, it will ideally be for the betterment of patients' health and the efficiency of the clinical encounter, but all of this is still a big work in progress. I wanted to know what you're thinking about as the most important thing in the next year. What are the things that are highest on your list, or that you're most looking forward to?

- One is that I'm really eager to see clinical application: more training of AI models, or algorithms, on clinically relevant information. I know there are efforts underway with Mayo Clinic, Google, Microsoft, and others to try to build clinically relevant AI tools. I'd like to see more of that, and then I'd like to see us actually putting those tools into practice to try to solve real clinical challenges. We have the capability to make sepsis, or at least morbidity and mortality from sepsis, a relatively rare event with the application of these tools. That would be exciting to see proven in the literature and through demonstration efforts. That's the kind of real application I'd like to start seeing in clinical practice. The other comment is not a short-term thing but more of a longer-term statement about choices, going back to the bias concern and the equity considerations. There's this notion of AI as a phenomenon; we think of it as sort of this moment, and it may very well be that when the history of this time is written, we'll think of it like the birth of the internet. This might be that moment, I'm not sure. But here's the thing: AI can't change history. It learns from, or studies, history, and from that creates new conclusions. But because it learns from history, it depends on choices about what we tell it history is. And this gets back to the bias and equity question, because we have choices about what we believe is a true history, a real history. What we feed the AI as training information is a deeply political or social choice. That's what it really is: a choice. And furthermore, I'm curious about whether history alone is really sufficient to describe our future.

- Interesting.

- An early-stage AI analyzing whether, for example, a woman can be president is trained on historical data, which suggests it's not possible, which is obviously not true for our future. So I think this notion that AI is neutral, or some sort of apolitical tool, is not true. It depends on choices we make about what we tell it to learn from, and that is intrinsically a political, social, or cultural choice. As a result, we have the opportunity to, in some ways, help AI create a better future for us, or to allow AI to perpetuate the challenges, problems, inequities, and structural failures of our past and present. So that's not a short-term consideration, but perhaps more of a philosophical consideration around what I hope AI will be thoughtful about, and how we will help build the AI tools of our future.

- You're reminding me of a piece I saw about image prompts in global health. Even when the prompts were specifically designed with a more equitable framing in mind, the generated images continued to show a white doctor in an African village. That was the image, on a global health stage, of a doctor at work.

- That's right.

- So, to your point, it recapitulates that image of history.

- That's right. If we do what we've always done, we're gonna get what we've always gotten, on some level. So we've gotta find a way, and this is not easy, to ensure training data are free of bias: doing the hard work of sampling the training data so that we understand what biases might be present and then eliminating them, maybe oversampling specific communities that are disproportionately affected by a particular issue or condition, in the case of healthcare. And also conducting post-market surveillance of algorithms. So when we deploy something,

- Yeah.

- when an algorithm becomes adopted, we should verify that it's producing the kind of equitable improvements we hope it might. And if it isn't, we should retool the algorithm so that it does, and not be afraid to do that, I think, as a community of scientists and scholars.

- Yeah, I think that's right. And certainly, I know that at JAMA we're interested in this, but we're trying to apply, in many ways, the same lens that we usually apply to new advances. We wanna see the science, see it in clinical settings, see the outcomes, and then interrogate it across the principles that are important, in this case, as we're talking about now, equity.

- Yeah. I was really excited to see the call for papers that JAMA issued earlier. I appreciate this balance of sensitivity to speed as the field evolves so fast, but also a focus on sound science. I think that's just a tension we're all gonna have to live with and hold, wanting to make sure that we actually produce good science, but also respecting that the field is just advancing at breakneck speed at the moment.

- Yeah, that's right. And I think part of that tension is also getting people together across disciplines. So much of this is moving fast in other areas and then being brought into healthcare: well, what are the applications here? And I think that's what's exciting. I'm happy you're at the table, and I'm happy you're gonna be part of the conversations.

- Thank you.

- I hope when the code of conduct is released, we can have another conversation. More of us in healthcare have to embrace, even if it's not technology we fully understand, the discussions about what it would take to really have this type of disruptive technology achieve the goals that we all have. So it's an exciting time.

- It sure is. Thank you.

- Thank you for watching and listening. To our audience, we welcome your comments. And remember to submit to JAMA's AI and medicine call for papers. Until next time, stay informed and stay inspired. For more multimedia content like this, subscribe to the JAMA Network YouTube channel and follow JAMA Network Podcasts available wherever you get your podcasts.

