
Decision Aids, Doorknob Moments, and Physician-Patient Solidarity in EDs

Learning Objectives
1. Explain a new or unfamiliar viewpoint on a topic of ethical or professional conduct
2. Evaluate the usefulness of this information for health care practice, teaching, or conduct
3. Decide whether and when to apply the new information to health care practice, teaching, or conduct
1 Credit CME
Abstract

Potential benefits of decision aids and technology, such as artificial intelligence, used at the bedside are many and significant. Like any tools, they must be used appropriately for specific tasks, since even validated decision aids have limited utility when they are misapplied, overly relied upon, or used as a substitute for thinking carefully about clinically and ethically relevant questions. Patients are more than data points in human form, as they come to emergency departments with stories. As technology casts ever-lengthening shadows over patient-clinician interactions, a key question is: How should clinicians cultivate relationships with technology so it functions in solidarity with patients?

Case

Something does not sit right about this emergency department (ED) patient. His asthma flare-ups have brought him to the ED multiple times for shortness of breath, despite escalating asthma medication interventions by his pulmonologist. The emergency medicine intern, who is a few months into her training, listens to his lungs. He's wheezing, but mildly. His symptoms could fit an asthma exacerbation, but he appears comfortable. Does asthma explain his visit to the ED? Could it be a pulmonary embolism (PE)? What about risk factors for heart disease?

The intern knows a senior resident physician will ask for her differential diagnosis. The patient's heart rate is over 100, so a common decision tool validated for use with patients at low risk will not rule out PE.1 She considers ordering a d-dimer to further stratify his risk for PE. He could have underlying pneumonia, so she orders a chest x-ray. There's also a clinical decision rule for assessing the 6-week likelihood that a patient like this one (eg, with symptoms suggesting an acute cardiac syndrome) will experience an adverse cardiac event.2
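
To make concrete how mechanical such a rule is, here is a minimal sketch, in Python, of the pulmonary embolism rule-out criteria (PERC) that reference 1 studies. The eight thresholds follow the published rule, but the data model, names, and example values are illustrative assumptions, not clinical software.

    # Minimal, illustrative sketch of the PERC rule (see reference 1).
    # Not clinical software: thresholds follow the published rule, but the
    # Patient record and all example values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Patient:
        age: int
        heart_rate: int                 # beats per minute
        oxygen_sat: float               # room-air SaO2, percent
        hemoptysis: bool
        estrogen_use: bool
        prior_dvt_or_pe: bool
        unilateral_leg_swelling: bool
        recent_surgery_or_trauma: bool  # within the prior 4 weeks

    def perc_negative(p: Patient) -> bool:
        """True only if all eight criteria are absent; a single positive
        criterion means the rule cannot be used to exclude PE."""
        return (
            p.age < 50
            and p.heart_rate < 100
            and p.oxygen_sat >= 95
            and not p.hemoptysis
            and not p.estrogen_use
            and not p.prior_dvt_or_pe
            and not p.unilateral_leg_swelling
            and not p.recent_surgery_or_trauma
        )

    # A patient like the one in this case (hypothetical values): a heart
    # rate over 100 alone makes the rule unable to rule out PE, which is
    # why the intern considers a d-dimer to further stratify risk.
    patient = Patient(age=48, heart_rate=104, oxygen_sat=98.0,
                      hemoptysis=False, estrogen_use=False,
                      prior_dvt_or_pe=False, unilateral_leg_swelling=False,
                      recent_surgery_or_trauma=False)
    assert not perc_negative(patient)

The sketch underscores the rule's rigidity: it can tell the intern that PE is not yet excluded, but it cannot ask why this patient keeps returning to the ED.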

The intern finishes examining the patient. Now about to leave, with her hand on the doorknob, she asks, “Sir, what do you think is causing your shortness of breath?” He responds, “My neighbors have been trying to kill me with poisonous vapors in the air conditioning vents.”

Commentary

This case involves a patient, uncertainty, and a “doorknob moment.” Typically, we think of doorknob moments as occurring when patients spring new, important, and sometimes embarrassing information on their physician just as he or she is about to leave. Here, however, we have a physician doorknob moment: just when she is about to leave, the intern pivots away from the previous diagnostic roads, opening up a new line of inquiry with different forms of data.

In this encounter, the intern correctly considers possible poor outcomes for the patient—PE and acute cardiac syndrome—and leans on risk-stratification aids to rule out the most life-threatening possibilities. This is good medicine: decision aids are typically constructed and validated on the backs of large, well-designed research studies and, as such, can help us appropriately stratify patients' risk. We heavily rely on the scores and rules these aids provide to guide disposition decisions, consider interventions, and communicate with patients.

But decision aids are tools, and, like any tools, they must be used appropriately for the task at hand. Even prospectively validated decision aids have limited utility if our approach is off the mark, if we ask the tool to answer the wrong question, or if we rely on these tools to define our questions. When considering the role of decision aids and artificial intelligence (AI) in making decisions in the ED, we must ask: What do we want this technology to do for us? Are we using it to replace our thinking or augment it, to process data faster, or to provide something to hold onto when we find ourselves floating in uncertainty? In this situation, the real source of the patient's distress—severe, uncontrolled mental illness—was uncovered in part through the intern's curiosity about the human experience. How might the intern have better employed the tools at her disposal—along with her empathy and curiosity about the human condition—to build solidarity with her patient?

Decision Aids Cannot Structure Narratives

In the scenario above, the intern's first inclination was to respect the evidence-based approach that pervades medical education. However, her experience also underlines why we cannot simply apply decision rules without understanding when they are most helpful. Decision aids aim to cut through noise to the elements of value. But deciding what counts as valuable data is itself a decision, one we make before we realize we are making decisions at all.

Consider the challenges faced by AI systems, such as IBM's Watson, which was designed to tackle health care diagnosis.3 The challenge Watson's designers have faced is not that Watson cannot come to a diagnosis when given all the proper inputs (diarrhea + vomiting + 4 other sick relatives after a family barbecue = food poisoning), but rather that it is difficult for Watson and machines like it to generate the appropriate narrative questions to “ask” patients (“Did other people at this barbecue also vomit?”). In part, this difficulty arises because, unlike human brains, AI is not very good at imagining counterfactuals4 or emulating empathy. In other words, AI is not very good at sensing what is not there or when something is not right. AI is not capable of doorknob moments—at least not yet.

Using Decision Aids in Solidarity With Patients

Patients come to the ED with stories to tell. They're not data in human form ready to be entered into an algorithm. But finding time to listen to their stories is challenging, given that we face pressures to move as many patients as possible through the ED. Against time constraints, we are expected to be both fast and accurate, to satisfy the reductionist tendency in medicine to “find” an answer. Under the weight of competing expectations, we can be tempted to use decision aids inappropriately as substitutes for time spent with patients or as blunt instruments to simplify problems that are not simple.

This approach is problematic because, although we like to believe we reason inductively based on data and evidence, we know the human brain is in fact prone to deductive shortcuts, quickly reaching a decision and then retrospectively finding the evidence to support it.5 In the ED, where decisions are often made on ambiguous, incomplete, and scattered information, what we choose to consider as evidence is subject to conscious and unconscious forces. And why and how we make these choices are rarely interrogated.

Moreover, if we choose to concentrate too heavily on objective data, we risk excluding the subjective data that might reveal to us what matters most to the patient. Studies show that patients often cloak their true worries. They want doctors to probe what is not said, what might be lurking between the lines.6-8 They want doctors to be interested not only in diagnostics, but also in questions of why, how long will this last, and what does this mean for me.9 For physicians in overwhelming clinical environments, a diagnostic tool used wrongly may become not an aid but a distraction, detracting from other forms of meaningful inquiry in the patient-physician dialogue.10

Wayfinding Together

Authors have used the metaphor of “wayfinding” to describe the complex diagnostic process that physicians and patients face together and the role that AI may play in it.11 In this journey, data are acquired, next steps are pursued, new information is generated and interpreted, and so on. This is an iterative process, in which different questions arise and various destinations become clearer along the way. We, as human clinicians, drive this diagnostic journey—assisted, when appropriate, by technological aids.

But what might we lose if we become overly reliant on such aids? What alarm bells might we silence that would otherwise have cued us to change course or seek out data that seems missing? When swimming in uncertainty, how can we avoid pivoting to recognizable data points out of ease and instead use our uncertainty to trigger doorknob moments?

Consider navigation tools such as Google Maps. Before these tools, navigation was a stressful process, provoking anxiety, wrong turns, and arguments among loved ones. With these tools, navigation is easier and more accurate. But we have given up the value of being lost, the discovery to be had in finding our own way. We pay attention differently to the physical terrain when it serves as our guidepost, when it is the source of our focus and not just passing scenery outside our window. We also develop a more intimate relationship with our own mental landscape. Why did I doubt that left turn? Why was I so sure about the right at the light? Few of us would give up our navigation technology. But what we lose is our relationship with the process of navigation. There is value in friction and inefficiency, in having to think about how we are thinking under conditions of uncertainty. This is especially true in clinical decision making, in which the destination is unknown and the process of discovery requires first identifying the signposts for our journey.

Cultivating Stillness Amid Chaos

It is easy to say that clinicians must create mental space and find doorknob moments of stillness in their practice. The reality is that it is difficult to do so in clinical medicine, especially in the ED, where time is short and interruptions and distractions are rampant. It takes mental energy for the brain to switch between tasks and regain focus, only to be interrupted again. The psychology literature illustrates the demanding cognitive toll of making simultaneous decisions under time pressures.12,13

The result is the innately inhuman way we move from patient to patient, with patients' entire and unique lived experiences distilled to a chief complaint and data points. Even when a patient dies, we might pay our respects with a few seconds of silence and then move on. The first author (E.S.) tells her nonmedical friends that finding moments of stillness is like iceberg hopping—even when you land a moment of stillness with a patient, you already begin to feel the ice slowly melting beneath you.

As technology casts ever-lengthening shadows over clinical interactions, the question becomes: How can we redefine our relationship with technology and decision aids so that they remain adjuncts to our relationships with patients rather than obstructions? How can we use our tools such that they help us climb our diagnostic ladders quickly, without forgetting to check that they are up against the right walls? The ideal use of clinical decision aids requires outsourcing algorithmic knowledge to apps or other tools and off-loading some of our cognitive burdens, thereby making more time and space for meaningful interactions with patients. In essence, forging solidarity and stillness with patients requires developing solidarity with our technology.

Creating Space for Uncertainty

Developing comfort with stillness and solidarity is one function of the arts in medical education. The arts, for example, challenge us to become comfortable with uncertainty, instability, and constraints; to pay attention to what is said and unsaid; and to recognize how our emotions influence how we think14 and what we notice. The arts treat what we do not know as a ripe area for inquiry, valuing problem finding as much as problem solving. They involve a constant interrogation of intention, methods, and form.

Despite what we might think, physicians are makers, too. We make diagnoses and prognosticate, develop treatment plans, and build trust.15 At first glance, the arts and technology might appear to be strange bedfellows, but the arts might serve an important role in fostering stillness and solidarity with our patients through use of our technologies in emergency medicine. They could do this by cultivating humility, a greater comfort with uncertainty, and an openness to various possibilities. The arts can encourage us to have doorknob moments, even when it feels like we are standing on an iceberg. They might help us recognize when we need help from technology and when the human brain and heart are enough.


Article Information

AMA Journal of Ethics

AMA J Ethics. 2022;24(12):E1129-1134.

AMA CME Accreditation Information

Credit Designation Statement: The American Medical Association designates this journal-based CME activity for a maximum of 1.0 AMA PRA Category 1 Credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

CME Disclosure Statement: Unless noted, all individuals in control of content reported no relevant financial relationships.

If applicable, all relevant financial relationships have been mitigated.

Editor's Note

The case to which this commentary is a response was developed by the editorial staff.

Conflict of Interest Disclosure: The author(s) had no conflicts of interest to disclose.

The people and events in this case are fictional. Resemblance to real events or to names of people, living or dead, is entirely coincidental. The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.

Author Information:

  • Emily Shearer, MD, MPP, MSc, is a second-year emergency medicine resident at Brown University's Warren Alpert Medical School in Providence, Rhode Island. She received her medical degree from Stanford School of Medicine; a master's degree in health policy, planning and financing from the London School of Economics; and a master's degree in public policy from the University of Cambridge. Her interests include health policy, health economics, choice architecture, and decision making.

  • Jay Baruch, MD, is a professor of emergency medicine at Brown University's Warren Alpert Medical School in Providence, Rhode Island, where he also directs the Medical Humanities and Ethics Scholarly Concentration. His academic work focuses on the intersection of narrative, creativity, and decision-making. His latest book is Tornado of Life: A Doctor's Journey Through Constraints and Creativity in the ER (MIT Press, 2022).

References:
1. Freund Y, Cachanado M, Aubry A, et al; PROPER Investigator Group. Effect of the pulmonary embolism rule-out criteria on subsequent thromboembolic events among low-risk emergency department patients: the PROPER randomized clinical trial. JAMA. 2018;319(6):559-566.
2. Six AJ, Backus BE, Kelder JC. Chest pain in the emergency room: value of the HEART score. Neth Heart J. 2008;16(6):191-196.
3. Lohr S. What ever happened to IBM's Watson? New York Times. July 17, 2021. Accessed August 19, 2022. https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html
4. Cukier K, Mayer-Schonberger V, de Vericourt F. Framers: Human Advantage in an Age of Technology and Turmoil. Dutton; 2021.
5. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124-1131.
6. van den Ende ES, Schouten B, Kremers MNT, et al. Understanding what matters most to patients in acute care in seven countries, using the flash mob study design. BMC Health Serv Res. 2021;21(1):474.
7. Houwen J, Lucassen PLBJ, Stappers HW, Assendelft PJJ, van Dulmen S, Olde Hartman TC. Medically unexplained symptoms: the person, the symptoms and the dialogue. Fam Pract. 2017;34(2):245-251.
8. Levinson W, Gorawara-Bhat R, Lamb J. A study of patient clues and physician responses in primary care and surgical settings. JAMA. 2000;284(8):1021-1027.
9. Epstein RM, Shields CG, Meldrum SC, et al. Physicians' responses to patients' medically unexplained symptoms. Psychosom Med. 2006;68(2):269-276.
10. Alami H, Lehoux P, Auclair Y, et al. Artificial intelligence and health technology assessment: anticipating a new level of complexity. J Med Internet Res. 2020;22(7):e17707.
11. Adler-Milstein J, Chen JH, Dhaliwal G. Next-generation artificial intelligence for diagnosis: from predicting diagnostic labels to “wayfinding.” JAMA. 2021;326(24):2467-2468.
12. Chisholm CD, Collison EK, Nelson DR, Cordell WH. Emergency department workplace interruptions: are emergency physicians “interrupt-driven” and “multitasking”? Acad Emerg Med. 2000;7(11):1239-1243.
13. Edwards MB, Gronlund SD. Task interruption and its effects on memory. Memory. 1998;6(6):665-687.
14. Baruch J, Springs S, Poterack A, Blythe SG. What Cy Twombly's art can teach us about patients' stories. AMA J Ethics. 2020;22(5):E430-E436.
15. Baruch JM. Doctors as makers. Acad Med. 2017;92(1):40-44.
Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to:

  • 1.00 Medical Knowledge MOC point in the American Board of Internal Medicine's (ABIM) Maintenance of Certification (MOC) program;
  • 1.00 Self-Assessment point in the American Board of Otolaryngology – Head and Neck Surgery's (ABOHNS) Continuing Certification program;
  • 1.00 MOC point in the American Board of Pediatrics' (ABP) Maintenance of Certification (MOC) program;
  • 1.00 Lifelong Learning point in the American Board of Pathology's (ABPath) Continuing Certification program; and
  • 1.00 credit toward the CME requirement of the American Board of Surgery's Continuous Certification program.

It is the CME activity provider's responsibility to submit participant completion information to ACCME for the purpose of granting MOC credit.
