Question
How does a deep learning system compare with professional human graders in detecting glaucomatous optic neuropathy?
Findings
In this cross-sectional study, the deep learning system showed a sensitivity and specificity of greater than 90% for detecting glaucomatous optic neuropathy in a local validation data set, in 3 clinical-based data sets, and in a real-world distribution data set. The deep learning system showed lower sensitivity when tested in multiethnic and website-based data sets.
Meaning
This assessment of fundus images suggests that deep learning systems can provide a tool with high sensitivity and specificity that might expedite screening for glaucomatous optic neuropathy.
Importance
A deep learning system (DLS) that could automatically detect glaucomatous optic neuropathy (GON) with high sensitivity and specificity could expedite screening for GON.
Objective
To establish a DLS for the detection of GON using retinal fundus images and a glaucoma diagnosis with convolutional neural networks (GD-CNN) model that can be generalized across populations.
Design, Setting, and Participants
In this cross-sectional study, a DLS for the automated classification of GON was developed using retinal fundus images obtained from the Chinese Glaucoma Study Alliance (CGSA), the Handan Eye Study, and online databases. A total of 241 032 images were selected as the training data set. The images were entered into the databases on June 9, 2009, obtained on July 11, 2018, and analyses were performed on December 15, 2018. The generalization of the DLS was tested in several validation data sets, which allowed assessment of the DLS in a clinical setting without exclusions, testing against variable image quality using fundus photographs obtained from websites, evaluation in a population-based study reflecting a natural distribution of patients with glaucoma within the cohort, and evaluation in an additional data set with a diverse ethnic distribution. An online learning system was established to transfer the trained and validated DLS to fundus images from new sources. To better understand the DLS decision-making process, a prediction visualization test was performed that identified the regions of the fundus images used by the DLS for diagnosis.
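The study's prediction visualization test is not specified in detail here. One common way such a test can be implemented is occlusion sensitivity: slide a masking patch over the image and record how much the model's score drops, so that large drops mark regions the model relies on. The sketch below illustrates the idea only; `model_score` is a hypothetical stand-in, not the GD-CNN.

```python
# Illustrative sketch of occlusion-based prediction visualization.
# The actual GD-CNN is not public here; model_score is a toy stand-in
# that keys on brightness in the upper-left quadrant, loosely mimicking
# a classifier that attends to the optic disc region.
import numpy as np

def model_score(image):
    # Stand-in scorer (assumption): mean intensity of the upper-left quadrant.
    return image[:8, :8].mean()

def occlusion_map(image, patch=4, baseline=0.0):
    """Slide a patch-sized occluder over the image; each cell of the
    returned heat map is the score drop caused by occluding that region."""
    base = model_score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base - model_score(occluded)
    return heat

img = np.zeros((16, 16))
img[:8, :8] = 1.0            # bright "optic disc" in the upper left
heat = occlusion_map(img)    # drops concentrate over the bright region
```

With this toy model, only cells overlapping the upper-left quadrant produce a nonzero score drop, so the heat map localizes exactly the region the scorer depends on; class-activation-mapping approaches achieve a similar visualization without repeated forward passes.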
Exposures
Use of a deep learning system.
Main Outcomes and Measures
Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity for the DLS with reference to professional graders.
Results
From a total of 274 413 fundus images initially obtained from the CGSA, 269 601 images passed initial image quality review and were graded for GON. A total of 241 032 images (definite GON, 29 865 [12.4%]; probable GON, 11 046 [4.6%]; unlikely GON, 200 121 [83.0%]) from 68 013 patients were selected by random sampling to train the GD-CNN model. Validation and evaluation of the GD-CNN model were performed using the remaining 28 569 images from the CGSA. The AUC of the GD-CNN model in the primary local validation data set was 0.996 (95% CI, 0.995-0.998), with a sensitivity of 96.2% and a specificity of 97.7%. The most common reason for both false-negative and false-positive grading by the GD-CNN (51 of 119 [46.3%] and 191 of 588 [32.3%], respectively) and manual grading (50 of 113 [44.2%] and 183 of 538 [34.0%], respectively) was pathologic or high myopia.
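The reported metrics follow standard definitions: sensitivity is the fraction of GON-positive images the model flags, specificity is the fraction of GON-negative images it clears, and AUC equals the probability that a randomly chosen positive receives a higher score than a randomly chosen negative. A minimal pure-Python sketch (not the study's code; the labels and scores below are toy data) makes these definitions concrete:

```python
# Sketch of the reported metrics on toy data (assumption: 1 = GON, 0 = no GON).

def sens_spec(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a random
    positive outscores a random negative, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]   # toy model outputs
preds = [1 if s >= 0.5 else 0 for s in scores]
se, sp = sens_spec(labels, preds)
a = auc(labels, scores)
```

Note that sensitivity and specificity depend on the chosen score threshold, whereas AUC summarizes performance across all thresholds, which is why the abstract reports both.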
Conclusions and Relevance
Application of the GD-CNN to fundus images from different settings and of varying image quality demonstrated high sensitivity, specificity, and generalizability for detecting GON. These findings suggest that an automated DLS could enhance current screening programs in a cost-effective and time-efficient manner.
Accepted for Publication: July 14, 2019.
Published Online: September 12, 2019. doi:10.1001/jamaophthalmol.2019.3501
Correction: This article was corrected on December 1, 2019, to fix an error in the byline.
Corresponding Authors: Ningli Wang, MD, PhD, Beijing Tongren Hospital, Capital Medical University; Beijing Institute of Ophthalmology, No.1 Dongjiaominxiang Street, Dongcheng District, Beijing 100730, China (email@example.com); Mai Xu, PhD, School of Electronic and Information Engineering, Beihang University, Beijing 100191, China (firstname.lastname@example.org).
Author Contributions: Drs H. Liu, M. Xu, and N. Wang had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs M. Xu and N. Wang contributed equally to this work.
Concept and design: H. Liu, Qiao, Zhang, H. Wang, Yang, Moghimi, Lai, Hu, Y. Xu, Kang, Ji, Tham, Ting, Wong, Z. Wang, M. Xu, N. Wang.
Acquisition, analysis, or interpretation of data: H. Liu, L. Liu, Wormstone, Qiao, P. Liu, Li, H. Wang, Mou, Pang, Yang, Zangwill, Hou, Bowd, Chen, Chang, Tham, Cheung, Wong, Weinreb, M. Xu.
Drafting of the manuscript: H. Liu, L. Liu, Qiao, Zhang, Li, Pang, Chen, Tham, M. Xu.
Critical revision of the manuscript for important intellectual content: H. Liu, Wormstone, Qiao, P. Liu, H. Wang, Mou, Yang, Zangwill, Moghimi, Hou, Bowd, Lai, Hu, Y. Xu, Kang, Ji, Chang, Tham, Cheung, Ting, Wong, Z. Wang, Weinreb, M. Xu, N. Wang.
Statistical analysis: H. Liu, L. Liu, Pang, Lai, Chen, Tham, Wong, M. Xu.
Obtained funding: H. Liu, Mou, Pang, Zangwill, Chen, Weinreb.
Administrative, technical, or material support: L. Liu, Qiao, Zhang, P. Liu, Li, H. Wang, Yang, Zangwill, Moghimi, Hou, Bowd, Chen, Hu, Y. Xu, Kang, Ji, Tham, Cheung, Z. Wang, Weinreb, M. Xu.
Supervision: P. Liu, Chen, Tham, Ting, Wong, Z. Wang, M. Xu, N. Wang.
Conflict of Interest Disclosures: Dr Zangwill reported receiving grants from the National Eye Institute during the conduct of the study and research and equipment support from Heidelberg Engineering, Optovue, Carl Zeiss Meditec, and Topcon. Dr Ting reported having a patent pending for a deep learning system for retinal diseases, not related to this work. Dr Wong reported receiving personal fees from Allergan, Bayer, Boehringer Ingelheim, Genentech, Merck, Novartis, Oxurion, and Roche outside the submitted work and being a shareholder in Plano and EyRIS. No other disclosures were reported.
Funding/Support: This research received funding from the National Natural Science Fund Projects of China (81271005), Beijing Municipal Administration of Hospitals Qingmiao Projects (QMS20180210), the Priming Scientific Research Foundation for the Junior Researcher in Beijing Tongren Hospital (Dr H. Liu; 2016-YJJ-ZZL-021), the Beijing Tongren Hospital Top Talent Training Program, and Medical Synergy Science and Technology Innovation Research (Z181100001918035).
Role of the Funder/Sponsor: The funding organizations had no role in design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.