
Quantifying and Interpreting the Prediction Accuracy of Models for the Time of a Cardiovascular Event—Moving Beyond C Statistic: A Review

To identify the key insights or developments described in this article
1 Credit CME
Abstract

Importance  For personalized or stratified medicine, it is critical to establish a reliable and efficient prediction model for a clinical outcome of interest. The goal is to develop a parsimonious model with few predictors for broad future application without compromising predictive performance. A general approach is to construct various empirical models using individual patients’ baseline characteristics and biomarkers and then evaluate their relative merits. When the outcome of interest is the timing of a cardiovascular event, a commonly used metric to assess the adequacy of the fitted models is based on C statistics. These measures quantify a model’s ability to separate those who develop events earlier from those who develop them later or not at all (discrimination), but they do not measure how closely model estimates match observed outcomes (prediction accuracy). Metrics that provide clinically interpretable measures of prediction accuracy are needed.
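To make the discrimination idea concrete, the following is a minimal, hypothetical sketch of Harrell's C statistic for censored survival data: among comparable patient pairs, it counts how often the patient with the earlier observed event also received the higher model risk score. The toy data and function name are illustrative assumptions, not from the article.

```python
# Hypothetical sketch: Harrell's C statistic for right-censored survival data.
# A pair (i, j) is comparable only when the earlier observed time is an event;
# it is concordant when that patient also has the higher model risk score.
def c_statistic(times, events, risk_scores):
    """times: observed follow-up times; events: 1 = event, 0 = censored;
    risk_scores: model-derived risks (higher score = earlier predicted event)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # patient i must have an observed event to anchor a pair
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # tied risk scores count as half
    return concordant / comparable

# Toy data: risk scores perfectly ordered against event times, so C = 1.0
times = [2.0, 4.0, 5.0, 8.0]
events = [1, 1, 0, 1]
scores = [0.9, 0.7, 0.5, 0.2]
print(c_statistic(times, events, scores))  # 1.0
```

Note that a C of 1.0 reflects perfect rank ordering only; as the article emphasizes, it says nothing about how far the predicted event times are from the observed ones.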

Observations  C statistics measure the concordance between the risk scores derived from the model and the observed event times. However, C statistics do not quantify the model’s prediction accuracy. The integrated Brier score, which calculates the mean squared distance between the empirical cumulative event-free curve and each individual patient’s model-based counterpart, estimates the prediction accuracy but is not clinically intuitive. A simple alternative measure is the average distance between the observed and predicted event times over the entire study population. This metric directly quantifies the model’s prediction accuracy and has often been used to evaluate the goodness of fit of assumed models in settings other than survival data. This time-scale measure is easier to interpret than the C statistic or the Brier score.
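The time-scale measure described above can be sketched as the mean absolute distance between observed and model-predicted event times. This is a simplified illustration that ignores censoring; the methods the article refers to must handle censored observations (e.g., via inverse-probability-of-censoring weighting or restricted mean survival time regression). All names and numbers below are illustrative assumptions.

```python
# Hypothetical sketch of the time-scale accuracy metric: the average absolute
# distance between observed and model-predicted event times. Censoring is
# ignored here for clarity; real survival-data implementations must account
# for it (e.g., with inverse-probability-of-censoring weights).
def mean_absolute_prediction_error(observed_times, predicted_times):
    n = len(observed_times)
    return sum(abs(o - p) for o, p in zip(observed_times, predicted_times)) / n

observed = [2.0, 4.0, 5.0, 8.0]   # observed event times (years)
predicted = [2.5, 3.0, 6.0, 7.5]  # model-predicted event times (years)
print(mean_absolute_prediction_error(observed, predicted))  # 0.75
```

The result reads directly on the clinical time scale: in this toy example the model's predicted event times are off by 0.75 years on average, an interpretation that neither the C statistic nor the Brier score offers.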

Conclusions and Relevance  This article enhances our understanding of the model selection/evaluation process with respect to prediction accuracy. A simple, intuitive measure for quantifying such accuracy beyond C statistics can improve the reliability and efficiency of the selected model for personalized and stratified medicine.




CME Disclosure Statement: Unless noted, all individuals in control of content reported no relevant financial relationships. If applicable, all relevant financial relationships have been mitigated.

Article Information

Accepted for Publication: December 1, 2022.

Published Online: February 1, 2023. doi:10.1001/jamacardio.2022.5279

Corresponding Author: Lee-Jen Wei, PhD, Department of Biostatistics, Harvard T.H. Chan School of Public Health, Harvard University, 655 Huntington Ave, Boston, MA 02115 (wei@hsph.harvard.edu).

Author Contributions: Drs Wang and Claggett had full access to the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs Wang and Claggett contributed equally as co–first authors.

Concept and design: Wang, Claggett, Tian, Pfeffer, Wei.

Acquisition, analysis, or interpretation of data: Wang, Claggett, Malachias, Wei.

Drafting of the manuscript: Wang, Claggett, Tian, Wei.

Critical revision of the manuscript for important intellectual content: Wang, Claggett, Malachias, Pfeffer, Wei.

Statistical analysis: Wang, Claggett, Tian, Malachias, Wei.

Administrative, technical, or material support: Wang, Wei.

Supervision: Wei.

Conflict of Interest Disclosures: Dr Claggett reported receiving consulting fees from Cardurion, Corvia, and Novartis outside the submitted work. Dr Malachias reported receiving lecture fees from Bayer, Boehringer Ingelheim, Novo Nordisk, Daiichi-Sankyo, Novartis, and Libbs outside the submitted work. Dr Pfeffer reported receiving grants from Novartis; personal fees from Alnylam, AstraZeneca, Boehringer Ingelheim, Eli Lilly Alliance, Corvidia, DalCor, GlaxoSmithKline, Lexicon, the National Heart, Lung, and Blood Institute’s Collaborating Network of Networks for Evaluating COVID-19 and Therapeutic Strategies (CONNECTS), Novartis, Novo Nordisk, Peerbridge, and Sanofi; and stock options from DalCor outside the submitted work. Dr Wei did not receive consulting fees for this research project. No other disclosures were reported.

Funding/Support: This research was partially supported by grant R01HL089778 from the US National Institutes of Health (Dr Tian).

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contributions: We thank Robert O. Bonow, MD, Editor, JAMA Cardiology, and Michael J. Pencina, PhD, Deputy Editor for Statistics, JAMA Cardiology, and reviewers for their insightful, extensive comments/suggestions on the manuscript. No one was financially compensated for their contribution.

AMA CME Accreditation Information

Credit Designation Statement: The American Medical Association designates this Journal-based CME activity for a maximum of 1.00 AMA PRA Category 1 Credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to:

  • 1.00 Medical Knowledge MOC points in the American Board of Internal Medicine's (ABIM) Maintenance of Certification (MOC) program;
  • 1.00 Self-Assessment points in the American Board of Otolaryngology – Head and Neck Surgery’s (ABOHNS) Continuing Certification program;
  • 1.00 MOC points in the American Board of Pediatrics’ (ABP) Maintenance of Certification (MOC) program;
  • 1.00 Lifelong Learning points in the American Board of Pathology’s (ABPath) Continuing Certification program; and
  • 1.00 CME points in the American Board of Surgery’s (ABS) Continuing Certification program

It is the CME activity provider's responsibility to submit participant completion information to ACCME for the purpose of granting MOC credit.
