
Addressing Bias in Artificial Intelligence in Health Care

Educational Objective: Identify how AI can create or perpetuate biases and ways to address them.
1 CME Credit

Recent scrutiny of artificial intelligence (AI)–based facial recognition software has renewed concerns about the unintended effects of AI on social bias and inequity. Academic and government officials have raised concerns over racial and gender bias in several AI-based technologies, including internet search engines and algorithms to predict risk of criminal behavior. Companies such as IBM and Microsoft have made public commitments to “de-bias” their technologies, whereas Amazon has mounted a public campaign criticizing such research. As AI applications gain traction in medicine, clinicians and health system leaders have raised similar concerns over automating and propagating existing biases.1


Article Information

Corresponding Author: Ravi B. Parikh, MD, MPP, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Blockley 1102, Philadelphia, PA 19104 (ravi.parikh@pennmedicine.upenn.edu).

Published Online: November 22, 2019. doi:10.1001/jama.2019.18058

Conflict of Interest Disclosures: Dr Parikh reported receipt of personal fees from GNS Healthcare. Dr Navathe reported receipt of grants from the Hawaii Medical Service Association, the Anthem Public Policy Institute, the Commonwealth Fund, Oscar Health, Cigna Corporation, and the Donaghue Foundation and personal fees from Navvis Healthcare, Agathos Inc, University Health System (Singapore), Elsevier Press, Navahealth, the Cleveland Clinic, the Medicare Payment Advisory Commission, and Embedded Healthcare; he reported being an uncompensated board member for Integrated Services Inc. No other disclosures were reported.

Funding/Support: This work was supported in part by the Penn Center for Precision Medicine (Dr Parikh) and by the Pennsylvania Commonwealth Universal Research Enhancement (CURE) Program and the Robert Wood Johnson Foundation (Dr Navathe).

Role of the Funders/Sponsors: The funders had no role in the preparation, review, or approval of the manuscript or decision to submit the manuscript for publication.

References
1. Gianfrancesco MA, Tamang S, Yazdany J, Schmajuk G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern Med. 2018;178(11):1544-1547. doi:10.1001/jamainternmed.2018.3763
2. Gijsberts CM, Groenewegen KA, Hoefer IE, et al. Race/ethnic differences in the associations of the Framingham risk factors with carotid IMT and cardiovascular events. PLoS One. 2015;10(7):e0132321. doi:10.1371/journal.pone.0132321
3. Canto JG, Goldberg RJ, Hand MM, et al. Symptom presentation of women with acute coronary syndromes: myth vs reality. Arch Intern Med. 2007;167(22):2405-2413. doi:10.1001/archinte.167.22.2405
4. Agniel D, Kohane IS, Weber GM. Biases in electronic health record data due to processes within the healthcare system: retrospective observational study. BMJ. 2018;361:k1479. doi:10.1136/bmj.k1479
5. McCarthy AM, Bristol M, Domchek SM, et al. Health care segregation, physician recommendation, and racial disparities in BRCA1/2 testing among women with breast cancer. J Clin Oncol. 2016;34(22):2610-2618. doi:10.1200/JCO.2015.66.0019
6. Parasuraman R, Manzey DH. Complacency and bias in human use of automation: an attentional integration. Hum Factors. 2010;52(3):381-410. doi:10.1177/0018720810376055
7. Chen IY, Szolovits P, Ghassemi M. Can AI help reduce disparities in general medical and mental health care? AMA J Ethics. 2019;21(2):E167-E179. doi:10.1001/amajethics.2019.167
8. Wolff RF, Moons KGM, Riley RD, et al; PROBAST Group. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 2019;170(1):51-58. doi:10.7326/M18-1376