The rise of big data and advancements in computational power, cloud and edge computing have further fuelled the rise of artificial intelligence (AI) across all domains, including healthcare. Within ophthalmology, several landmark papers have shown excellent diagnostic performance of deep learning in the detection of diseases from fundus photography.1,2

While AI in ophthalmology began with posterior segment disease, there is increasing attention on the use of AI in anterior segment disease. In this article, we provide an overview of AI applications in the anterior segment, including the future directions and clinical translation of these AI technologies in anterior segment diseases.


AI applications in anterior segment 

Cornea

(i) Imaging of the cornea

Accurate evaluation of the corneal endothelium is important for the diagnosis and monitoring of endothelial disease such as Fuchs’ endothelial dystrophy and bullous keratopathy. Current techniques to image the endothelium include in vivo confocal microscopy (IVCM) and specular microscopy. 

However, artifacts and illumination distortions often affect the quality of these images, making interpretation difficult. In addition, manual segmentation of endothelial cells is tedious and requires an experienced technician. Many AI models, including convolutional neural networks (CNN) and support vector machines, have successfully performed segmentation of corneal endothelial images.3,4

A recent study by Qu et al5 employed deep learning for fully automated segmentation and computation of morphometric parameters of endothelial cells from IVCM. The AI model showed high correlation (Pearson’s correlation coefficient of 0.9447) with manual graders in the evaluation of abnormal endothelium images.
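To make the reported agreement metric concrete, the Pearson correlation coefficient between automated and manual measurements can be computed directly. The cell-density values below are hypothetical, purely for illustration (they are not data from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical endothelial cell density readings (cells/mm^2)
manual    = [2400, 1800, 2900, 1500, 2100]
automated = [2350, 1850, 2950, 1480, 2150]
print(round(pearson_r(manual, automated), 4))
```

A coefficient close to 1, as reported by Qu et al, indicates that the automated measurements track the manual grader’s values almost linearly.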

AI has also been used for the automated segmentation of sub-basal corneal nerve plexus from IVCM with good agreement between manual and fully automated analyses of corneal nerve fibre length and branch density.6 This will allow for fully automated corneal nerve quantification such as in the assessment of diabetic peripheral neuropathy.7

(ii) Keratoconus

Keratoconus is a progressive corneal ectasia characterised by corneal steepening and thinning, inducing irregular astigmatism, myopia and significant visual impairment. Early diagnosis of keratoconus, specifically the detection of forme fruste keratoconus (subclinical disease), is crucial for early intervention to halt disease progression.

However, diagnosis of subclinical or mild forms remains challenging. Several studies have reported CNN models with accuracy of up to 99.3% in the detection of keratoconus from corneal topography8 and accuracy of 97.6 to 99.3% from anterior segment optical coherence tomography (ASOCT).9

In recent times, the understanding that deficits in corneal structural integrity are a crucial component of keratoconus has led to the increasing role of devices such as the Corvis ST and Ocular Response Analyser in the measurement of corneal biomechanical properties for the early diagnosis of keratoconus.

A recent work by Abdelmotaal et al10 demonstrates the ability of a CNN model to detect keratoconus from corneal deformation videos, with an area under the receiver operating characteristic curve (AUC) of 0.93 and accuracy of 0.88.

Beyond diagnostics, AI algorithms have been shown to assist in clinical decision-making by predicting progression rate from corneal tomography and age, allowing for the identification of fast progressors requiring earlier treatment such as collagen cross-linking.11,12

In addition, AI models have also been used for prediction of visual outcomes after keratoconus treatment such as intrastromal corneal ring segment implantation.13

(iii) Infective keratitis

Corneal opacification is the fifth leading cause of blindness worldwide, with infective keratitis being the major cause.14 Correct identification of the causative infective micro-organism is essential for timely and appropriate treatment. Corneal scraping with microscopy, staining and culture is the current gold standard. However, the culture positive rate ranges from 33 to 80%, making culture-directed therapy challenging at times.15 Deep learning models have been shown to differentiate between bacterial and fungal keratitis using clinical features, with an accuracy of 90.7%, far superior to the clinician prediction rate of 62.8%.16 Deep learning is also able to detect hyphae from IVCM for automated fungal keratitis diagnosis with an accuracy of 98.8%.17

(iv) Corneal transplantation

Across the world, Descemet membrane endothelial keratoplasty (DMEK) is gaining popularity as a safe and effective procedure for the management of corneal endothelial disease. While DMEK offers fast visual recovery, challenges such as donor tissue preparation, intraoperative donor tissue handling and postoperative graft detachment remain.18

Deep learning models have shown capability in the automated detection of DMEK graft dislocation from ASOCT, with sensitivity and specificity of 98% and 94% respectively.19 Importantly, as the majority of DMEK graft detachments resolve spontaneously, it is difficult to decide on the necessity for intervention in the initial period of graft detachment.

To aid in this difficult decision-making process, Hayashi et al20 developed a CNN model to predict the clinical need for rebubbling, a technique used to address early postoperative graft detachment, from ASOCT images. Graft detachment involving the central 4mm was used as the criterion for rebubbling, and the model showed good performance with an AUC of 0.96, sensitivity of 96.7% and specificity of 91.5%.20
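Sensitivity and specificity, the metrics used to report the rebubbling model’s performance, follow directly from a confusion matrix. A minimal sketch with hypothetical labels (not data from the study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels: 1 = rebubbling needed, 0 = detachment resolved spontaneously
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.75)
```

In this setting a high sensitivity matters most, since a missed detachment needing rebubbling risks graft failure.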

(v) Pterygium

Pterygium is the most common degenerative ocular surface disease and at advanced stages can cause irregular astigmatism, corneal scarring and significant visual impairment. In low-resource settings, tertiary ophthalmology care may not be easily accessible, leading to delayed diagnosis and surgical intervention for pterygium.21 

Surgical removal of advanced pterygium carries higher complication rates, including recurrence,22 corneal scarring, irregular astigmatism23 and overall poorer prognosis. Fang et al24 developed deep learning algorithms that enable the detection of pterygium from anterior segment photographs taken with slit lamp and hand-held cameras.

The deep learning algorithm performed well, with an AUC of 99.1-99.7% for the detection of any pterygium and an AUC of 99.0-99.5% for the detection of referrable pterygium.24 The automated detection of referrable pterygium, particularly from hand-held cameras, will serve as a valuable tool for screening in low-resource settings, allowing for earlier referral and timely intervention.24

Refractive surgery

(i) Ectasia risk prediction

The cumulative prevalence of keratorefractive surgery is rising with the immense popularity of laser in situ keratomileusis (Lasik) and newer techniques such as refractive lenticule extraction (ReLEx)/small incision lenticule extraction (Smile). Despite the excellent visual outcomes and safety profiles of these techniques, post refractive surgery ectasia remains a feared complication, with an incidence of 0.01% to 0.9%.25

Pre-operative evaluation is crucial for the detection of patients at risk of post refractive surgery ectasia but remains challenging in eyes that show little or no alteration in corneal surface or thickness, such as in forme fruste keratoconus. The machine learning based Pentacam Random Forest Index (PRFI) demonstrated an AUC of 99.2%, sensitivity of 94.2% and specificity of 98.8% in the classification of stable and post refractive ectasia eyes based on preoperative Pentacam tomography.

The PRFI demonstrated superior performance compared to the Belin-Ambrosio deviation index, which had an AUC of 96.0%, sensitivity of 87.3% and specificity of 97.5%. The inclusion of machine learning in pre-operative evaluation can further enhance the assessment process to identify candidates at higher risk of postoperative ectasia.
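An AUC such as the 99.2% reported for the Pentacam Random Forest Index has a simple probabilistic interpretation: it is the chance that a randomly chosen ectatic eye receives a higher risk score than a randomly chosen stable eye. A small sketch with hypothetical scores (not data from the study):

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive case outranks a negative one (ties count 0.5)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores for post refractive ectasia vs stable eyes
ectatic = [0.90, 0.80, 0.75, 0.60]
stable  = [0.40, 0.65, 0.30, 0.20]
print(auc_from_scores(ectatic, stable))  # 0.9375
```

An AUC of 0.5 corresponds to chance-level discrimination, while 1.0 means every diseased eye is scored above every healthy eye.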

Beyond corneal topography, new advancements in pre-operative assessment now include the use of high-resolution swept source OCT, wavefront aberrometry and the evaluation of corneal biomechanics. While corneal tomography can provide corneal geometry information such as curvature changes, pachymetry and elevation, biomechanical evaluation may allow for earlier diagnosis before these structural changes are seen.

Biomechanical failure is the primary abnormality in ectatic corneas, and thus biomechanical evaluation could allow for better risk stratification and ectasia risk prediction in refractive surgery candidates, as well as postoperative monitoring of biomechanical changes.

Herber et al26 developed a machine learning model for the prediction of keratoconus severity based on corneal biomechanical parameters measured by the Corvis ST dynamic Scheimpflug analyser. The AI algorithm was able to detect keratoconus with an overall accuracy of 93%.26 In the healthy and mild keratoconus groups specifically, the model had specificities of 94% and 90% respectively, which is important in refractive surgery candidate screening.26

Aside from using specific tests to evaluate post refractive ectasia risk, Yoo et al27 combined a multitude of pre-operative parameters to develop machine learning models that determine suitability for refractive surgery, with an accuracy of 93.4% and AUC of 97.2%.27 The model performed as well as clinical experts in subgroups with high myopia, high astigmatism and thin central corneal thickness.

(ii) AI based nomogram

Over the past decade, Smile has become increasingly popular as a keratorefractive surgery option in view of its good safety, predictability and efficacy. In terms of predictability, studies have shown that 85-98% and 80-100% of patients were within ±0.50 dioptres (D) of target refraction at three and six months postoperatively.28-30

Although satisfactory, there is still a small proportion of patients with under- or over-correction, and a reported 2.1% and 2.9% of patients required an enhancement procedure at one year and two years, respectively.31 Nomograms typically incorporate multiple significant prognostic factors and have been used in refractive surgery to optimise refractive outcomes.

With nomogram refinements, treatment accuracy in terms of visual acuity and postoperative mean refraction, as well as patient satisfaction, has been shown to improve in Lasik and myopic wavefront Lasik.32,33 However, these nomograms were mainly derived from traditional statistics with regression models, and adjustments of nomograms depend heavily on surgeons’ personal experiences.

Cui et al34 explored the use of AI for Smile nomogram prediction and showed that AI-based nomograms performed as well as surgeons, with 93% of eyes in the machine learning group obtaining a refractive outcome within ±0.50D compared to 83% in the surgeon group.34

In addition, the efficacy index in the machine learning group was significantly higher than in the surgeon group.34 Separately, Park et al35 explored multiple machine learning models including multiple linear regression, decision tree, AdaBoost, XGBoost and multi-layer perceptron in the development of Smile nomograms.35 

They found that AdaBoost achieved the highest performance, with a root mean squared error (the square root of the average squared difference between predicted and actual values) of 0.138 and 0.117 for sphere and cylinder respectively.35 Furthermore, after pre-operative manifest refraction, the surgeon was found to be the next most important feature for the AI model, which holds potential for the further development of AI-based nomograms that account for a personalised surgeon effect.35
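The root mean squared error used to compare the nomogram models can be illustrated with a short calculation on hypothetical refraction values (in dioptres, not data from the study):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error: sqrt of the mean squared prediction error."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical predicted vs achieved postoperative spherical equivalents (D)
predicted = [0.00, -0.25, 0.25, 0.00]
actual    = [-0.10, -0.25, 0.00, 0.15]
print(round(rmse(predicted, actual), 3))  # 0.154
```

Because errors are squared before averaging, RMSE penalises occasional large misses more heavily than a plain mean absolute difference would.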


Cataract and cataract surgery

Globally, cataract is the leading cause of blindness in adults above 50 years old, with over 15 million individuals suffering from this reversible cause.36 While the global cataract burden is expected to decrease37 owing to the strong initiatives surrounding VISION 2020, access to tertiary ophthalmology care and cataract surgery remains a challenge in low-income regions, with Southern Sub-Saharan Africa having the highest burden of cataract-related blindness.37

Wu et al38 developed a deep learning algorithm able to distinguish a phakic lens from an intraocular lens and detect referrable cataract based on slit lamp photographs, with AUCs of >99% and >91% respectively.38 Moving a step further, several groups39-41 have reported the use of fundus photographs to diagnose, grade and detect referrable cataract, which offers great advantage in low-resource settings where a single fundus camera can simultaneously detect referrable cataract while screening for posterior segment disease.

AI has also been used for intraocular lens (IOL) power calculation, with Sramka et al42 reporting that machine learning models could achieve better IOL calculation, with 82.3-82.7% of eyes within ±0.50D prediction error compared to 72.3-80.6% with the Barrett Universal II formula.42
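The “% within ±0.50D” benchmark used to compare IOL formulas is simply the fraction of eyes whose refractive prediction error falls inside that band; a minimal sketch with hypothetical errors (not data from the study):

```python
def within_half_diopter(prediction_errors):
    """Fraction of eyes with absolute refractive prediction error <= 0.50 D."""
    return sum(abs(e) <= 0.50 for e in prediction_errors) / len(prediction_errors)

# Hypothetical prediction errors (achieved minus predicted refraction, D)
errors = [0.10, -0.45, 0.60, -0.25, 0.00, 0.75, -0.30, 0.40, 0.20, -0.55]
print(within_half_diopter(errors))  # 0.7
```

Reporting the full distribution of errors alongside this single proportion gives a fuller picture of a formula’s accuracy.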

After the removal of cataract, posterior capsular opacification (PCO) can lead to significant blurring of vision and is easily treated with YAG laser capsulotomy. Jiang et al43 demonstrated the ability of a deep learning algorithm to predict the progression of PCO requiring YAG laser capsulotomy at the two-year mark, based on slit lamp images, with a high accuracy of 92.2%.43 

The applications of AI have also extended to corneal power evaluation after laser refractive surgery44 and automated phase segmentation of cataract surgery videos.45

The diagnosis and management of cataract in the paediatric population usually requires an adequately trained ophthalmologist.46 The lack of access to these specialists may result in underdiagnosis and delayed treatment of paediatric cataract, leading to deprivation amblyopia and blindness.46,47

To address this, Lin et al48 developed a deep learning model capable of diagnosing, grading and providing management guidance for paediatric cataracts based on slit-lamp photographs.48



Glaucoma and iris

Glaucoma is the third leading cause of irreversible blindness worldwide.36 Although the global prevalence of primary angle closure glaucoma (PACG) is less than half of primary open angle glaucoma (POAG),49 it is associated with a three-fold increase in risk of severe bilateral visual impairment compared to POAG.50 

Timely diagnosis of early stages of primary angle closure disease can allow for early intervention in the form of laser peripheral iridotomy and glaucoma medications or surgery to preserve vision. Gonioscopy is the current clinical gold standard for angle-closure assessment but requires training, is highly subjective and not always reproducible.51

ASOCT has been rising as a tool for angle structure assessment and showed good sensitivity (ranging from 88.4% to 98.0%) in the detection of angle closure when compared with gonioscopy.52 AI has been shown to segment anterior segment structures on ASOCT53 as accurately as an experienced ophthalmologist, with a Dice coefficient (a measure of the similarity between two sets) of 95.7%.
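The Dice coefficient used to report segmentation agreement is twice the overlap of the two masks divided by their combined area. A small sketch with hypothetical 4x4 binary masks (purely illustrative):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity: 2*|A intersect B| / (|A| + |B|) for binary segmentation masks."""
    a, b = np.asarray(mask_a, dtype=bool), np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Hypothetical AI vs expert segmentation of an anterior segment structure
ai     = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
expert = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(ai, expert), 3))  # 0.889
```

A Dice value of 95.7%, as reported for ASOCT segmentation, therefore indicates near-complete pixel-level overlap with the expert annotation.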

In addition, deep learning models have enabled the automated detection of angle closure from ASOCT with AUC of 0.90 to 0.96.54-56 Fully automated angle closure detection from ASOCT could allow for angle closure screening and appropriate referral for further assessment. 

Shi et al57 also utilised a deep learning algorithm for the automated classification of angle closure based on ultrasound biomicroscopy with an accuracy of 97.2%.57 Lastly, deep learning models have also been applied to iris photographs and demonstrated the ability to detect different types of iris tumours with an accuracy of 95.7% (Intelligent Eye Tumour Detection System).58


Posterior segment and systemic disease

Interestingly, AI applied to anterior segment imaging has recently shown potential to provide information beyond anterior segment diseases. Babenko et al59 from Google showed that deep learning models trained on anterior segment photographs can detect vision-threatening diabetic retinopathy, diabetic macular oedema and poor glycaemic control.59

Further work to explore whether deep learning algorithms can uncover additional systemic associations from external eye photographs (which may be easily captured by smartphones) holds great potential.


Barriers to clinical translation and future works 

Despite the large number of AI models developed with excellent performance, many AI technologies fail to find a role in clinical practice. Barriers to clinical translation include the lack of well-defined intended use environments, the demand for explainability, medicolegal concerns, privacy concerns, regulatory approval challenges and a lack of supporting health economics analyses.

AI reporting guidelines such as Consort-AI60 and Spirit-AI61 are useful guides for the development of AI technologies with clinical deployment in mind. In addition, early engagement of regulatory authorities, technology transfer and intellectual property stakeholders will also be key to subsequent clinical adoption and scaling of the AI technology.

In summary, there are myriad AI applications in anterior segment diseases, many of which show excellent diagnostic capabilities. Further work to address the barriers to clinical implementation will be crucial to ensure these AI technologies achieve clinical deployment and deliver the full potential benefits that AI can bring to healthcare.

  • Zhen Ling Teo is a medical doctor at the Singapore National Eye Centre, Singapore, Singapore Eye Research Institute, Singapore
  • Daniel Shu Wei Ting is director at the Artificial Intelligence Office, SingHealth, head of Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute and affiliated with the Duke-NUS Medical School, Singapore

Disclosures

Dr Daniel SW Ting is co-inventor of a deep learning system for retinal diseases.

References

  1. Gulshan V, Peng L, Coram M, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216
  2. Ting DSW, Cheung CYL, Lim G, et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes. JAMA. 2017;318(22):2211-2223. doi:10.1001/jama.2017.18152
  3. Nurzynska K. Deep Learning as a Tool for Automatic Segmentation of Corneal Endothelium Images. Symmetry. 2018;10(3):60. doi:10.3390/sym10030060
  4. Vigueras-Guillen JP, Andrinopoulou ER, Engel A, et al. Corneal Endothelial Cell Segmentation by Classifier-Driven Merging of Oversegmented Images. IEEE Trans Med Imaging. 2018;37(10):2278-2289. doi:10.1109/TMI.2018.2841910
  5. Qu J, Qin X, Peng R, et al. Assessing abnormal corneal endothelial cells from in vivo confocal microscopy images using a fully automated deep learning system. Eye Vis (Lond). 2023;10(1):20. doi:10.1186/s40662-023-00340-7
  6. Chen X, Graham J, Dabbah MA, Petropoulos IN, Tavakoli M, Malik RA. An Automatic Tool for Quantification of Nerve Fibers in Corneal Confocal Microscopy Images. IEEE Trans Biomed Eng. 2017;64(4):786-794. doi:10.1109/TBME.2016.2573642
  7. Li Q, Zhong Y, Zhang T, et al. Quantitative analysis of corneal nerve fibers in type 2 diabetics with and without diabetic peripheral neuropathy: Comparison of manual and automated assessments. Diabetes Res Clin Pract. 2019;151:33-38. doi:10.1016/j.diabres.2019.03.039
  8. Lavric A, Valentin P. KeratoDetect: Keratoconus Detection Algorithm Using Convolutional Neural Networks. Comput Intell Neurosci. 2019;2019:8162567. doi:10.1155/2019/8162567
  9. Kamiya K, Ayatsuka Y, Kato Y, et al. Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: a diagnostic accuracy study. BMJ Open. 2019;9(9):e031313. doi:10.1136/bmjopen-2019-031313
  10. Abdelmotaal H, Hazarbassanov RM, Salouti R, et al. Keratoconus Detection-based on Dynamic Corneal Deformation Videos Using Deep Learning. Ophthalmol Sci. 2024;4(2):100380. doi:10.1016/j.xops.2023.100380
  11. Kato N, Masumoto H, Tanabe M, et al. Predicting Keratoconus Progression and Need for Corneal Crosslinking Using Deep Learning. J Clin Med. 2021;10(4):844. doi:10.3390/jcm10040844
  12. Kamiya K, Ayatsuka Y, Kato Y, et al. Prediction of keratoconus progression using deep learning of anterior segment optical coherence tomography maps. Ann Transl Med. 2021;9(16):1287. doi:10.21037/atm-21-1772
  13. Valdés-Mas MA, Martín-Guerrero JD, Rupérez MJ, et al. A new approach based on Machine Learning for predicting corneal curvature (K1) and astigmatism in patients with keratoconus after intracorneal ring implantation. Comput Methods Programs Biomed. 2014;116(1):39-47. doi:10.1016/j.cmpb.2014.04.003
  14. Wang EY, Kong X, Wolle M, et al. Global Trends in Blindness and Vision Impairment Resulting from Corneal Opacity 1984-2020: A Meta-analysis. Ophthalmology. 2023;130(8):863-871. doi:10.1016/j.ophtha.2023.03.012
  15. Ung L, Bispo PJM, Shanbhag SS, Gilmore MS, Chodosh J. The persistent dilemma of microbial keratitis: Global burden, diagnosis, and antimicrobial resistance. Surv Ophthalmol. 2019;64(3):255-271. doi:10.1016/j.survophthal.2018.12.003
  16. Saini JS, Jain AK, Kumar S, Vikal S, Pankaj S, Singh S. Neural network approach to classify infective keratitis. Curr Eye Res. 2003;27(2):111-116. doi:10.1076/ceyr.27.2.111.15949
  17. Lv J, Zhang K, Chen Q, et al. Deep learning-based automated diagnosis of fungal keratitis with in vivo confocal microscopy images. Ann Transl Med. 2020;8(11):706. doi:10.21037/atm.2020.03.134
  18. Trindade BLC, Eliazar GC. Descemet membrane endothelial keratoplasty (DMEK): an update on safety, efficacy and patient selection. Clin Ophthalmol. 2019;13:1549-1557. doi:10.2147/OPTH.S178473
  19. Treder M, Lauermann JL, Alnawaiseh M, Eter N. Using Deep Learning in Automated Detection of Graft Detachment in Descemet Membrane Endothelial Keratoplasty: A Pilot Study. Cornea. 2019;38(2):157-161. doi:10.1097/ICO.0000000000001776
  20. Hayashi T, Tabuchi H, Masumoto H, et al. A Deep Learning Approach in Rebubbling After Descemet’s Membrane Endothelial Keratoplasty. Eye & Contact Lens: Science & Clinical Practice. 2020;46(2):121-126. doi:10.1097/ICL.0000000000000634
  21. Liu L, Wu J, Geng J, Yuan Z, Huang D. Geographical prevalence and risk factors for pterygium: a systematic review and meta-analysis. BMJ Open. 2013;3(11):e003787. doi:10.1136/bmjopen-2013-003787
  22. Mahar PS, Manzar N. Pterygium recurrence related to its size and corneal involvement. J Coll Physicians Surg Pak. 2013;23(2):120-123.
  23. Tomidokoro A, Oshika T, Amano S, Eguchi K, Eguchi S. Quantitative analysis of regular and irregular astigmatism induced by pterygium. Cornea. 1999;18(4):412-415. doi:10.1097/00003226-199907000-00004
  24. Fang X, Deshmukh M, Chee ML, et al. Deep learning algorithms for automatic detection of pterygium using anterior segment photographs from slit-lamp and hand-held cameras. Br J Ophthalmol. 2022;106(12):1642-1647. doi:10.1136/bjophthalmol-2021-318866
  25. Moshirfar M, Tukan AN, Bundogji N, et al. Ectasia After Corneal Refractive Surgery: A Systematic Review. Ophthalmol Ther. 2021;10(4):753-776. doi:10.1007/s40123-021-00383-w
  26. Herber R, Pillunat LE, Raiskup F. Development of a classification system based on corneal biomechanical properties using artificial intelligence predicting keratoconus severity. Eye Vis (Lond). 2021;8(1):21. doi:10.1186/s40662-021-00244-4
  27. Yoo TK, Ryu IH, Lee G, et al. Adopting machine learning to automatically identify candidate patients for corneal refractive surgery. NPJ Digit Med. 2019;2:59. doi:10.1038/s41746-019-0135-8
  28. Sekundo W, Gertnere J, Bertelmann T, Solomatin I. One-year refractive results, contrast sensitivity, high-order aberrations and complications after myopic small-incision lenticule extraction (ReLEx SMILE). Graefes Arch Clin Exp Ophthalmol. 2014;252(5):837-843. doi:10.1007/s00417-014-2608-4
  29. Kamiya K, Shimizu K, Igarashi A, Kobashi H. Visual and refractive outcomes of femtosecond lenticule extraction and small-incision lenticule extraction for myopia. Am J Ophthalmol. 2014;157(1):128-134.e2. doi:10.1016/j.ajo.2013.08.011
  30. Sekundo W, Kunert KS, Blum M. Small incision corneal refractive surgery using the small incision lenticule extraction (SMILE) procedure for the correction of myopia and myopic astigmatism: results of a 6 month prospective study. Br J Ophthalmol. 2011;95(3):335-339. doi:10.1136/bjo.2009.174284
  31. Liu YC, Rosman M, Mehta JS. Enhancement after Small-Incision Lenticule Extraction: Incidence, Risk Factors, and Outcomes. Ophthalmology. 2017;124(6):813-821. doi:10.1016/j.ophtha.2017.01.053
  32. Lapid-Gortzak R, van der Linden JW, van der Meulen IJE, Nieuwendaal CP. Advanced personalized nomogram for myopic laser surgery: first 100 eyes. J Cataract Refract Surg. 2008;34(11):1881-1885. doi:10.1016/j.jcrs.2008.06.041
  33. Liyanage SE, Allan BD. Multiple regression analysis in myopic wavefront laser in situ keratomileusis nomogram development. J Cataract Refract Surg. 2012;38(7):1232-1239. doi:10.1016/j.jcrs.2012.02.043
  34. Cui T, Wang Y, Ji S, et al. Applying Machine Learning Techniques in Nomogram Prediction and Analysis for SMILE Treatment. American Journal of Ophthalmology. 2020;210:71-77. doi:10.1016/j.ajo.2019.10.015
  35. Park S, Kim H, Kim L, et al. Artificial intelligence-based nomogram for small-incision lenticule extraction. Biomed Eng Online. 2021;20(1):38. doi:10.1186/s12938-021-00867-7
  36. GBD 2019 Blindness and Vision Impairment Collaborators, Vision Loss Expert Group of the Global Burden of Disease Study. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: the Right to Sight: an analysis for the Global Burden of Disease Study. Lancet Glob Health. 2021;9(2):e144-e160. doi:10.1016/S2214-109X(20)30489-7
  37. Fang R, Yu YF, Li EJ, et al. Global, regional, national burden and gender disparity of cataract: findings from the global burden of disease study 2019. BMC Public Health. 2022;22(1):2068. doi:10.1186/s12889-022-14491-0
  38. Wu X, Huang Y, Liu Z, et al. Universal artificial intelligence platform for collaborative management of cataracts. Br J Ophthalmol. 2019;103(11):1553-1560. doi:10.1136/bjophthalmol-2019-314729
  39. Tham YC, Goh JHL, Anees A, et al. Detecting visually significant cataract using retinal photograph-based deep learning. Nat Aging. 2022;2(3):264-271. doi:10.1038/s43587-022-00171-6
  40. Xie H, Li Z, Wu C, et al. Deep learning for detecting visually impaired cataracts using fundus images. Front Cell Dev Biol. 2023;11:1197239. doi:10.3389/fcell.2023.1197239
  41. Xu X, Zhang L, Li J, Guan Y, Zhang L. A Hybrid Global-Local Representation CNN Model for Automatic Cataract Grading. IEEE J Biomed Health Inform. 2020;24(2):556-567. doi:10.1109/JBHI.2019.2914690
  42. Sramka M, Slovak M, Tuckova J, Stodulka P. Improving clinical refractive results of cataract surgery by machine learning. PeerJ. 2019;7:e7202. doi:10.7717/peerj.7202
  43. Jiang J, Liu X, Liu L, et al. Predicting the progression of ophthalmic disease based on slit-lamp images using a deep temporal sequence network. PLoS One. 2018;13(7):e0201142. doi:10.1371/journal.pone.0201142
  44. Koprowski R, Lanza M, Irregolare C. Corneal power evaluation after myopic corneal refractive surgery using artificial neural networks. Biomed Eng Online. 2016;15(1):121. doi:10.1186/s12938-016-0243-5
  45. Yu F, Silva Croso G, Kim TS, et al. Assessment of Automated Identification of Phases in Videos of Cataract Surgery Using Machine Learning and Deep Learning Techniques. JAMA Netw Open. 2019;2(4):e191860. doi:10.1001/jamanetworkopen.2019.1860
  46. Lenhart PD, Courtright P, Wilson ME, et al. Global challenges in the management of congenital cataract: proceedings of the 4th International Congenital Cataract Symposium held on March 7, 2014, New York, New York. J AAPOS. 2015;19(2):e1-8. doi:10.1016/j.jaapos.2015.01.013
  47. Gilbert C, Foster A. Childhood blindness in the context of VISION 2020--the right to sight. Bull World Health Organ. 2001;79(3):227-232.
  48. Lin H, Li R, Liu Z, et al. Diagnostic Efficacy and Therapeutic Decision-making Capacity of an Artificial Intelligence Platform for Childhood Cataracts in Eye Clinics: A Multicentre Randomized Controlled Trial. EClinicalMedicine. 2019;9:52-59. doi:10.1016/j.eclinm.2019.03.001
  49. Tham YC, Li X, Wong TY, Quigley HA, Aung T, Cheng CY. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014;121(11):2081-2090. doi:10.1016/j.ophtha.2014.05.013
  50. Friedman DS, Foster PJ, Aung T, He M. Angle closure and angle-closure glaucoma: what we are doing now and what we will be doing in the future. Clin Exp Ophthalmol. 2012;40(4):381-387. doi:10.1111/j.1442-9071.2012.02774.x
  51. Porporato N, Baskaran M, Aung T. Role of anterior segment optical coherence tomography in angle-closure disease: a review. Clin Exp Ophthalmol. 2018;46(2):147-157. doi:10.1111/ceo.13120
  52. Nolan WP, See JL, Chew PTK, et al. Detection of primary angle closure using anterior segment optical coherence tomography in Asian eyes. Ophthalmology. 2007;114(1):33-39. doi:10.1016/j.ophtha.2006.05.073
  53. Pham TH, Devalla SK, Ang A, et al. Deep learning algorithms to isolate and quantify the structures of the anterior segment in optical coherence tomography images. Br J Ophthalmol. 2021;105(9):1231-1237. doi:10.1136/bjophthalmol-2019-315723
  54. Xu BY, Chiang M, Chaudhary S, Kulkarni S, Pardeshi AA, Varma R. Deep Learning Classifiers for Automated Detection of Gonioscopic Angle Closure Based on Anterior Segment OCT Images. Am J Ophthalmol. 2019;208:273-280. doi:10.1016/j.ajo.2019.08.004
  55. Fu H, Baskaran M, Xu Y, et al. A Deep Learning System for Automated Angle-Closure Detection in Anterior Segment Optical Coherence Tomography Images. Am J Ophthalmol. 2019;203:37-45. doi:10.1016/j.ajo.2019.02.028
  56. Fu H, Xu Y, Lin S, et al. Angle-Closure Detection in Anterior Segment OCT Based on Multilevel Deep Network. IEEE Trans Cybern. 2020;50(7):3358-3366. doi:10.1109/TCYB.2019.2897162
  57. Shi G, Jiang Z, Deng G, et al. Automatic Classification of Anterior Chamber Angle Using Ultrasound Biomicroscopy and Deep Learning. Transl Vis Sci Technol. 2019;8(4):25. doi:10.1167/tvst.8.4.25
  58. Dimililer K, Ever YK, Ratemi H. Intelligent eye Tumour Detection System. Procedia Computer Science. 2016;102:325-332. doi:10.1016/j.procs.2016.09.408
  59. Babenko B, Mitani A, Traynis I, et al. Detection of signs of disease in external photographs of the eyes via deep learning. Nat Biomed Eng. 2022;6(12):1370-1383. doi:10.1038/s41551-022-00867-5
  60. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK, SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26(9):1364-1374. doi:10.1038/s41591-020-1034-x
  61. Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ, SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Lancet Digit Health. 2020;2(10):e549-e560. doi:10.1016/S2589-7500(20)30219-3
  62. https://www.ibm.com/topics/random-forest. Accessed December 5, 2023.
  63. https://www.ibm.com/topics/convolutional-neural-networks. Accessed December 5, 2023.
  64. de Hond AA, Steyerberg EW, van Calster B. Interpreting area under the receiver operating characteristic curve. Lancet Digit Health. 2022;4(12):e853-e855.
  65. Mandrekar JN. Receiver operating characteristic curve in diagnostic test assessment. J Thorac Oncol. 2010;5(9):1315-1316.