
AI and Clinical Practice—Building Patient and Clinician Trust in a Health Care System

In this Q&A, JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, discusses AI implementation and the importance of building trust with Andrew Bindman, MD, an internist and the executive vice president and chief medical officer for Kaiser Permanente.



[This transcript is auto-generated and unedited.]

- Clinicians are learning more and more about the ways in which artificial intelligence technologies may improve clinical practice. We've heard about the possibility for AI to enhance disease detection, patient experience, and treatment outcomes. There may even be opportunities to reduce clinician burden and streamline processes for healthcare delivery. We also know that there are deepening concerns about areas that may indeed worsen with these technologies such as equity and patient privacy. I'm Dr. Kirsten Bibbins-Domingo, Editor-in-Chief of JAMA and the JAMA Network. This conversation is part of a series of videos and podcasts hosted by JAMA in which we explore the issues surrounding the rapidly evolving intersection of AI and medicine. Today, Dr. Andrew Bindman and I discuss the ways in which AI is already changing clinician experience and healthcare outcomes. As the Executive Vice President and Chief Medical Officer for Kaiser Permanente, as well as a general internist, Dr. Bindman has a unique perspective on the challenges and opportunities of AI within a large healthcare organization. Dr. Bindman, thank you for joining me here today.

- Oh, it's a real pleasure to be with you, Dr. Bibbins-Domingo.

- You and I know each other, and I hope you don't mind if we're on a first-name basis during this interview.

- It'd be a complete pleasure here. It's been great to see you.

- Nice to see you as well. All right, so we know each other from UCSF, where you were a clinician and had a long, illustrious career as a health services researcher, and now you're the chief medical officer for Kaiser Permanente, a large healthcare organization. One thing I think about with Kaiser is using data to continue to improve care for individuals as well as for the population. And of course we're here today to discuss AI technologies, and all of a sudden we have the ability to think about the use of data at scales we've not really imagined before. But you all have probably been thinking about this for a while.

- One of the really powerful examples is how we use information collected in our hospital settings, where we integrate information on vital signs, neurologic checks, and updates to lab results, and use that in an integrated way in a tool to start to predict which patients are most likely to be headed toward a decline that could result in their having to transfer to an ICU setting. And the AI tool is able to do this in a rapid fashion and to send an alert to the primary physician involved with that patient to say, hey, there's a pattern emerging here that you may not have seen on your own; you might now want to go back and look into that. We implement these tools in a learning context so that we're then studying, gee, compared to not using the tool, how do we do? And what we have found is that with these tools we are in fact better at predicting who is going to head toward a decline and might head to the ICU and have a worse outcome. Another example would be our ability to better identify patients coming to an emergency room, based on the natural language notes written when that patient perhaps visited an urgent care or saw their primary care physician and initially expressed some concerns about a symptom or some signs. We've been able to use AI tools to capture that information and help the emergency physician better predict who might in fact be at higher risk of having sepsis, so that they're identified earlier. And then finally, one other really interesting example is thinking about social risk factors. We've been using AI tools to look at things like missed appointments and missed pickups of medications that were prescribed, and saying, huh, I wonder if patients who are showing some of these behaviors are actually signaling to us other challenges that they're having with, say, transportation, or other things going on in their complex lives.
And so the AI tool identifies these kinds of gaps, if you will, or people who might be slipping through our system and, again, alerts the primary physician to say: this might be a good patient to reach out to, to identify whether they're having certain challenges related to social risk factors, and then to make an intervention. So these are ways that we're trying to use and develop these tools and learn about them to improve the health of our patients.

- Yeah, those are such great examples, and you wrote for JAMA, I believe last year, about the challenges around diagnosis and diagnostic excellence, and the particular challenge of sepsis. So it's a really great example, but let's talk a little bit more about the nitty gritty. You and I are both general internists, and we know these tools that are supposed to make our lives easier are often the things we click past and say, okay, stop telling me, I know what I'm doing. So how do you know these are really working, and what guards against something that's supposed to make our lives easier as doctors actually making them harder when we start to ignore it?

- Yeah, that's a great point. And we do have to be sensitive to that kind of alarm fatigue. So I do think it's critical that our evaluation of these tools takes into consideration both what they do to improve our patients' health and the experience of our clinicians in using them. We really still think it's important to pair the use of these predictive tools with a clinician who is actively involved in providing oversight. We haven't turned over responsibility for care to machines or AI decision making; it's all done in partnership with our clinicians. I think these tools can enhance predictive powers in ways our clinicians recognize are greater than what they can sometimes see on their own. When we're predicting who's likely to go into the ICU from our hospital wards, I think our physicians recognize that this is really a great adjunct. But in primary care practice, I think there's also going to be an important trade-off, where these tools can not only add predictive power but also take away work from some of our clinicians. We are experimenting, for example, with the use of these tools to essentially be a quiet listener in the room and to generate notes on the interaction between a primary care physician and a patient: to create an excellent first draft for our clinicians to review, so they can more quickly do the documentation component of their work rather than sitting at a keyboard and glancing occasionally at the patient, as you know sometimes happens in primary care when you're trying to document in real time. Because the tool can create this note in the background, this allows for the much fuller attention that the patient deserves and that we want to be able to provide, with real efficiencies.

- Right, so ideally we want our clinicians doing the things that they are best suited to do. And what I hear people who are excited about the promise of these technologies is making sure those other things that really add to clinician burden, that if a technology like this can do that more efficiently, then that's a win. Let's talk though a little bit, I know equity has been an important issue for you, and with new technologies there are several ways in which equity people have raised concerns. There's been very nice work from others showing that when you learn from patterns of care that are not equitable, you actually end up reproducing that and potentially even amplifying that, right, because of the ability of AI to scale. How do you guard against that?

- This is such an important issue, Kirsten, and we all need to really hone in on it and learn together how to do it. So step number one, I think, is making sure that we have broad representation in the development of these tools. We are of course very sensitive and want to respect any patients, any members, who say, please don't include my data. But many patients are actually extremely interested in having what has gone on with their care inform not only their future care but also the care of others; there's tremendous altruism among our members for that. So I think it's important, to create that trust, to make sure that all members feel that including their data is in fact going to be helpful to them and to others. We have to have a good communication loop back to our members, to anyone who's giving us this data, to help them see how their information was helpful, to make them feel like a participant in the process, and to be really mindful of getting a very broad cross section of our membership. That is a critical part of what I think is the first step. The second thing is what you've said: if there has been bias in the care that's been provided, we need to identify that and look at how these tools could potentially exacerbate it or lead to further examples of it. But I actually think that what you're putting your finger on is something where we all as a community have a lot to learn: to identify our unconscious bias and the ways it has trickled into the healthcare delivery system, and to develop really good ways of testing these models to look at the kinds of potential issues that may arise, issues that may be leading us to bad decision making because it was based on bad data from the past and so forth.
And I'm part of a group with the National Academy of Medicine where we're trying to develop an AI code of conduct, and this issue has surfaced right away as something that we want to be mindful of. Not only what are these tools trained on and what are the characteristics of the individuals, but also how do we do the surveillance post-implementation to make sure that we are not introducing these problems. Because you're right, you've called it out, we recognize it, this is critical. But I think there is a lot for us to learn about how to get better at examining our own data, both on the front end as we adopt these tools and also, once we implement them, about what would be signs to us that things are going down a track we don't want them to go down. We really are trying to drive toward equitable outcomes, and by focusing on those outcomes, we can hopefully eliminate the bias you're talking about.

- Yeah, it's really great to see the different disciplines and sectors represented on this National Academy committee, and that's going to be really important work. Another thing that strikes me, just related to equity, is that we have talked a lot about population health. And population health, of course, has to appreciate the many social factors that influence health in general and influence processes of healthcare delivery.

- We have launched a very concerted effort to understand the social risk factors for our members, because we know how incredibly important these are in influencing health outcomes. That's important to make sure we're not being biased in our treatment, but also so that we're focusing on outcomes and making sure we are maximizing them. So we're systematically collecting information about where people live and what challenges they have around transportation, food security, and housing stability. And we communicate with our members about why we're doing it, because we're not just collecting it in the hope of doing something with it from a research perspective; we want to take an action step. We have electronically connected and created a network of community-based organizations relevant to each of our geographic areas, so that if a clinician identifies that they have a patient with a need in one of these social areas, they're able to do the equivalent of writing a prescription to connect that member with a local resource in their community that is designed to help them address that need, and that organization is electronically connected with us and communicates with us. And we are particularly honing in on some of our members who, for example, are going through a very complex part of care, like cancer care, or who are demonstrating to us, as I mentioned earlier, that they're having trouble making appointments or picking up prescriptions. So we're using certain clues as a way to collect that information and to act responsibly by providing help.

- Whenever we have a new technology like this that we know is going to be disruptive, there will be lots of good coming from it, but there are other interests that we have to really protect. What do you think the role is for government, whether local, state, or federal, in protecting the interests of patients, or clinicians, whoever it might be?

- Yeah, this is a great question, and a really important one. We need to recognize the incredible promise and possibility of these tools. I mean, Kirsten, what we're seeing with AI and machine learning is incredibly powerful and exciting. But you and I have enough experience to know that almost any intervention or innovation comes with risks of untoward effects. And so we need to make sure we're building a safety mindset and an evaluative mindset into anything that we roll out. Now, I think what's tricky is that AI encompasses a wide range of things. As you are well aware, the FDA has already identified a role for itself related to certain specified kinds of software that use machine learning in well-defined areas. I think where it's more tricky is in the generative AI area, which has generated so much interest in the last few months as many people are starting to explore it. And what's hard there, as I understand it, and I'm not the technical expert here, is that it doesn't necessarily perform the same way each time. I think this creates a bit of a regulatory challenge, if you will, about what in fact is the entity being regulated here. And this has already come up in our conversations through this NAM AI code of conduct work: is there in fact an entity that already exists within government that has the right kind of multidisciplinary mindset to be able to do this? It seems like this is a broader and more complex task than we've historically given the FDA alone to do. The FDA may be good for narrow, prescribed uses of AI in which a tool is analytically bringing together certain data elements, as I described earlier, to predict a certain outcome, like: is a hospitalized patient declining and therefore headed toward the ICU?
That tool performs more or less the same way over and over again, and the FDA may be in a position to regulate that kind of tool, whereas generative AI, which functions differently across repeated uses, may be harder to regulate that way. But I think fundamentally your question raises a key point, which is that we don't want to introduce a tool that creates risk to people's sense of privacy or introduces errors that humans are not able to oversee in some way. So we need that in place. The final thing I'll say, which I think has been fascinating from our conversations in this NAM group thus far, is that it's been pointed out that the counterfactual, if you will, that is, care as we deliver it today, is fraught with all sorts of challenges in taking the best evidence and turning it into the best care. Lots of decisions in fact wander away from what might be the best actual advice to give our patients. And so the advantage of these tools in closing the gap between evidence and practice is really powerful. I guess what I would say is our group has said, let's not make perfection be the goal that has to be reached in how we regulate these things. Could it be that a tool is just way better than what we're currently doing? As a result, some have said, maybe there's an imperative to actually use these things because of how much better they could be. And I think a balance has to be found between improvements in care and the identifiable risks that we're willing to accept, or how we can control those risks in some way. This is a real challenge with things like some of the tools that are being put in place to support how people drive cars, right? These tools might in fact make for overall safer driving, but when we see accidents happen with those tools on, we all get very alarmed because, oh my god, did the machine lead to this problem?
And I think this is a difficult challenge: society has to figure out the right balance. We probably can't eliminate all risks, but these tools can perhaps help us eliminate some of the risks that are not even always visible to us today in how we practice medicine. That's really some of the discussion we're having at NAM: how to make those things visible and figure out what, then, are the right safeguards to put in place. What's the role of government around that, and what's the role of private actors as well? I brought up a model earlier with you, thinking about things like our National Transportation Safety Board, where there is federal oversight that brings together the information, but it also requires private entities, in that case private airlines, in our case it would be private health systems, sharing data in a way that allows everyone to learn from it but isn't done in a way that introduces new risks: legal risks, concerns that people would get sued, or being put at a competitive disadvantage because they're sharing the information. I would love to see thoughts about that kind of structure put forward, because this is such an important area that we all need to learn together. And I think there's an opportunity to think about that kind of a model.

- So it sounds like these technologies are gonna be disruptive of so many sectors, and the regulatory structures are going to be as well; we're gonna have to think a little bit differently about them. Very interesting. The last thing I wanna ask you: JAMA recently put out a call for papers on AI in clinical practice and AI in medicine. You have a long history as a researcher bringing data to bear on these important questions. What types of studies would you want to be looking for that would help us as we continue to move forward?

- Yeah, well first of all, I'm so glad you're doing this at JAMA. I think it is so timely, and I know I will be an avid reader of these kinds of articles. In fact, I just recently saw a great call related to rethinking things like risk adjustment and how the use of AI could be helpful in that space. And I just think that thinking creatively about all these applications and how they might get used is really an exciting thing for JAMA to create a forum around, for us all to be learning collectively. So it really runs the gamut, Kirsten, from some of the things that go on in what I sort of call the back office, right? Like, how do we more effectively do some of the documentation and some of the logistics, such as more effectively steering messages that might be coming in from our patients with different kinds of inquiries, or helping patients steer to the right level of care based on the kinds of problems or symptoms they describe while interacting with some of these tools, to help us more efficiently find the right resource. I mean, you and I as primary care physicians have personally provided that role for many of our patients over the years: okay, we're the first stop, and now we're gonna help you navigate this system. Wouldn't it be amazing to have these tools provide some of that navigation, given how complicated we all know our health systems are. So I think there's a tremendous amount of opportunity in that kind of space. And then clearly you could run all the way up to the diagnostic end and the treatment end, thinking about, gee, how do we create better alignment between evidence and putting it into practice? And to me that's where there's just so much promise. We're always striving for innovation in healthcare, and as you and I know, what's so unfortunate is that we have so many innovations that we don't even fully implement and give everyone the chance to benefit from.
So I'd love to see work related to creating better alignment between things that can help patients and actually getting patients those treatments. How can these tools be better at finding the patients who could benefit, offering that help, and then learning the difference it made in their health outcomes? So these prediction tools are super important. And then we've gotta learn about the experience too, right? We need to understand: how do patients feel about these things? How do our doctors feel about them? Everyone still has to be in the arena of care and understand how care is changing for them. And we need to listen really closely to our patients about their experience. I was so struck by a study, I think it was in JAMA Internal Medicine, about empathy and how patients responded to AI-generated answers to questions versus those that physicians provided. We got real insights there: wow, there could be real power in AI's ability to assist us in giving the kinds of answers that are really meaningful to patients. So I think that's an incredibly important area, communication, and how we can become more efficient and better at it. So, I mean, there are so many areas I could go into, but I just think this is a tremendous opportunity. And I hope we will continue to think about all the stakeholders in the environment: the patients, the clinicians, the nursing staff, all of our health professionals involved in healthcare delivery, and everyone who supports that work, and think about how these tools can be used to support their work, to make work better, and to ultimately lead to better care experiences and health outcomes for all of our patients.

- Thank you, Andy. This is such a rapidly evolving area, and it will be interesting to see how it evolves. Being able to bring people together to have these conversations and to continue to steer this technology, hopefully, toward better care for patients and better environments for healthcare teams to work in together is a goal I think we all share. Thank you so much for joining me today for this conversation.

- Oh, thanks so much.

- Thank you for watching and listening. We welcome comments on this series. We also welcome submissions in response to JAMA's AI in medicine call for papers. Until next time, stay informed and stay inspired. We hope you'll join us for future episodes of the AI in Clinical Practice series where we will continue to discuss the opportunities and challenges posed by AI. Subscribe to the JAMA Network YouTube channel, and follow JAMA Network Podcasts wherever you get your podcasts.

