Features

A basic introduction to artificial intelligence for eye care professionals

Australia-based researchers and academics Md Mahmudul Hasan, Erik Meijering, Arcot Sowmya, Michael Kalloniatis and Jack Phu offer an introduction to the world of artificial intelligence. The authors describe and define key terminology and processes, helping ECPs understand this new and rapidly changing field. The article forms the first of a short series looking at AI and its growing role within eye care

Figure 1: Relationship between artificial intelligence (AI), machine learning (ML) and deep learning (DL)

Imagine a normal day in your life – you are scrolling through Netflix, and just as you think about a movie you want to watch next, the movie magically appears on your screen. What a wonder! Now you are interacting with a virtual assistant, ‘Siri’, on your smartphone, which understands your spoken language, answers your questions, and even provides suggestions.

Ever wondered how this happens? Behind the scenes, artificial intelligence (AI) powers these capabilities, learning from the vast amounts of data it processes and becoming more adept at understanding your preferences and needs over time.

Now, let us delve into the practice of eye care in relation to recent advancements in AI. Newly developed AI screening systems offer convenient access at the point of care and may reduce operating costs by automatically interpreting images and referring to an eye specialist only when necessary.

Are you aware that the US Food and Drug Administration (FDA) has approved two fully autonomous AI systems for detecting diabetic retinopathy (DR) without human oversight?1 The first ever FDA-cleared autonomous AI is the IDx-DR system, which autonomously diagnoses diabetic retinopathy (including macular oedema) and offers 87.2% sensitivity for the detection of more than mild DR.2

Another FDA-approved software medical device is the ‘EyeArt’ system, which provides point-of-care screening and shows similar performance.3 The EyeArt system can also detect eyes with vision-threatening diabetic retinopathy (vtDR), and requires fewer patients to be dilated (12.6%) than the IDx-DR (23.6%) to generate disease detection results.

When the FDA approves fully autonomous software for medical devices in detecting diseases such as diabetic retinopathy (DR), it marks a significant milestone in modern eye care practice. Therefore, it is crucial for eye care professionals (ECPs) to familiarise themselves with the ‘ABC’ of AI.

In this article, we start with a brief introduction to AI, the classification of AI, and AI-based analysis and related terminology, focusing especially on the application of AI to retinal disease diagnosis, and end with a discussion of the future collaboration of human and machine intelligence.

 

What is artificial intelligence?

The term ‘Artificial Intelligence’ was coined by John McCarthy at the 1956 Dartmouth conference and encompasses the capacity of machines to mimic human intelligence. This rapidly evolving field, integral to computer and data science, has many facets.4 AI has been developed for tasks like learning from experience, recognising patterns in images, and solving problems.

AI-based systems use algorithms and data to improve their performance over time, allowing them to adapt to different situations based on the training examples. Within artificial intelligence, there are diverse approaches to data-driven pattern analysis, broadly classified into machine learning (ML) and deep learning (DL) (figure 1, see top).5

  • Machine learning is like teaching a computer to do things automatically, without any human intervention once it is trained with an appropriate dataset.5 It uses smart algorithms that learn from examples described by handcrafted features that you provide.
  • Deep learning is a special type of machine learning that does not require handcrafted features (ie features extracted from images by human experts or using third-party software); rather, it involves complex layers of ‘artificial neurons’, which can automatically extract features from the input data.5



AI applications in retinal image analysis

AI has revolutionised medical image analysis, offering innovative solutions for diagnosing and understanding abnormal physical conditions. In the field of ophthalmology, the potential of AI is significant, particularly in the context of diagnosing and monitoring ocular diseases, where image analysis plays a crucial role.6

Utilising AI approaches, conditions such as diabetic retinopathy (DR),7 glaucoma,8-11 and age-related macular degeneration (AMD)7 can be accurately identified from fundus images.12 Furthermore, AI has the potential to assist ECPs in creating personalised care plans for patients throughout the course of treatment and guiding clinical decisions.6

Based on a recent review,13 deep learning applied to retinal imaging has been the sixth-largest area of publication among the different biomedical application areas (figure 2).

Figure 2: Number of publications on deep learning in different biomedical application areas, categorised by imaging modality (extracted from Meijering13)

Let us delve into the essential components of AI, starting with the fundamental subsets based on methodology: machine learning and deep learning. Within the realm of AI, we also encounter diverse learning paradigms, including supervised, unsupervised and semi-supervised learning, each playing a distinct role in the learning process, which we will also discuss.

Furthermore, to navigate the AI landscape effectively, it is important to become familiar with key terms integral to the understanding and application of AI, which we discuss at the end.

 

Traditional machine learning vs deep learning

The primary distinction between conventional machine learning and deep learning methods lies in the process of feature extraction from input data and its presentation to the algorithm for ‘pattern recognition’ (see figure 3). The pattern recognition process is like teaching a computer to recognise specific arrangements or trends in the input data to make sense of it.

Figure 3: A simplified diagram of AI-based techniques for pattern recognition

In traditional machine learning, multiple human-defined parameters or features are extracted from the input data and provided to the machine learning algorithm for this purpose. For example, clinical features such as age, gender and ethnicity, and image-based features, such as retinal thickness from optical coherence tomography (OCT) or OCT-angiography (OCTA) data and the presence of lesions in fundus images, could be extracted using external software and supplied to the machine for decision making.

Conversely, in deep learning, the input data are directly fed to the algorithm, which employs its inherent mechanisms to autonomously extract features necessary for executing the assigned task (figure 3).14 In the above example, the OCT, OCTA or fundus images could be directly used by the machine for the decision-making process.

Deep learning streamlines the laborious process of feature design and extraction typically involved in machine learning, where tasks like extracting the retinal nerve fibre layer (RNFL) and other thicknesses from images are necessary. When analysing retinal scans for abnormalities using convolutional deep networks,15 the data pass through multiple layers (hence the term deep learning), with each filter generating an output score that serves as the input for the next layer.

The outcome may include a diagnostic result (abnormal or normal) based on the abnormalities present in the retinal scans (figure 4). Nevertheless, the intricate features internally extracted by deep learning algorithms can be challenging to comprehend. Particularly in the deeper layers, the high-level features extracted can be abstract and challenging for human operators to interpret, leading to the characterisation of deep learning as a ‘black-box’ approach.16

Figure 4: An example of disease diagnosis with an AI model using traditional ML (top branch) and DL (bottom branch) for the classification task
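To make the distinction concrete, below is a minimal Python sketch of the traditional ML branch, using randomly generated data in place of real clinical measurements; the feature names and values are illustrative assumptions only. The deep learning branch is sketched later, in the transfer learning and segmentation examples.

```python
# A toy sketch of the traditional ML branch: handcrafted features in,
# classifier out. All values are random stand-ins, not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row holds handcrafted features for one eye, eg
# [age (years), RNFL thickness (um), cup-to-disc ratio] - assumed features.
features = rng.normal(loc=[60.0, 95.0, 0.4],
                      scale=[10.0, 10.0, 0.1], size=(200, 3))
labels = rng.integers(0, 2, size=200)  # 0 = normal, 1 = abnormal (random)

clf = LogisticRegression().fit(features, labels)  # pattern recognition step
print("Predicted class for one eye:", clf.predict(features[:1])[0])
```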

Supervised, Semi-supervised and Unsupervised Learning

Machine and deep learning algorithms can be supervised, semi-supervised or unsupervised, depending on the nature of the input data (also called samples) and the way it is used in the training process to predict the target variable (or dependent variable).17

  • Supervised learning guides the machine by providing a dataset with labelled information (inputs and target), like images (inputs) with preassigned diagnoses (target) by experts.
  • Unsupervised learning deals with unlabelled data, and the algorithms employed in this context are more sophisticated, as they operate on unclassified data and the machine must learn labels or categories on its own from the given data.
  • Semi-supervised learning combines both types of learning, using a combination of labelled and unlabelled datasets for a balanced approach to train the machine learning algorithm.17

Some real-life examples of supervised, unsupervised and semi-supervised approaches applied to ophthalmology are given below.

 

Supervised learning

Consider a scenario where the goal is to teach a machine to identify diabetic retinopathy based on colour fundus photographs. As the instructor, you would present the machine with a dataset comprising colour fundus photographs of both normal and diabetic retinopathy eyes, each photograph labelled with its corresponding condition.

The machine then learns a model to distinguish between the two. To evaluate the model, you test it on a set of new photographs and ask it to identify those with diabetic retinopathy. This scenario exemplifies supervised learning, as the machine is trained on a labelled dataset.

In supervised learning, the machine is trained with a preassigned set of diagnoses, known as the ‘ground truth’. Throughout the learning process in medical-assisted AI systems, domain experts or clinicians play a crucial role by providing objective guidance through ground truth samples to the machine. Their expertise ensures that the machine comprehensively learns to distinguish between normal and diabetic retinopathy cases in colour fundus photographs.
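For a flavour of what this looks like in code, here is a small Python sketch using scikit-learn, with randomly generated feature vectors standing in for fundus photographs and expert labels; none of the numbers reflect real clinical data.

```python
# A toy supervised learning sketch: train on labelled data, then test on
# held-out 'unseen' photographs. Features and labels are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))    # 300 'photographs', 10 features (assumed)
y = rng.integers(0, 2, size=300)  # expert labels: 0 = normal, 1 = DR

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Accuracy on unseen photographs: {model.score(X_test, y_test):.2f}")
```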

 

Unsupervised learning

Now, imagine a situation where you have a collection of colour fundus photographs, but this time, you do not provide explicit labels indicating whether the eyes have diabetic retinopathy or are normal. Instead, you ask the machine to discover patterns or structures within the samples autonomously – without relying on predefined diagnoses or guidance.

During training, the machine might start grouping the images based on shared features, without being explicitly told which colour fundus photographs belong to which group.
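A minimal sketch of this idea in Python, assuming image feature vectors rather than raw photographs, might look like the following; the data are random stand-ins.

```python
# A toy unsupervised learning sketch: k-means groups unlabelled feature
# vectors into clusters without any expert-provided diagnoses.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 10))  # unlabelled feature vectors (assumed)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster sizes:", np.bincount(kmeans.labels_))
# A clinician would later inspect each cluster to see whether it
# corresponds to, say, normal eyes versus diabetic retinopathy.
```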

 

Semi-supervised learning

Imagine a scenario where you are teaching a machine to identify diabetic retinopathy using a hybrid approach. Initially, you provide the machine with a subset of colour fundus photographs, where some images are labelled with the conditions (diabetic retinopathy or normal), while others are left unlabelled.

This hybrid approach integrates both supervised and unsupervised learning: the machine autonomously explores the dataset, discovering patterns within the unlabelled images without predefined guidance, while simultaneously leveraging the labelled images to understand the specified conditions, combining the advantages of autonomous learning with the precision of labelled information.

During testing, the machine is then tasked with categorising new, unseen images based on the learned patterns, showcasing the benefits of this semi-supervised methodology. This semi-supervised approach proves particularly useful in situations where obtaining a large amount of labelled data is resource-intensive or impractical, allowing for efficient learning with a blend of guidance and autonomy.
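Below is a toy sketch of this approach using scikit-learn's self-training wrapper, where unlabelled samples are conventionally marked with -1; the data and the split between labelled and unlabelled samples are illustrative assumptions.

```python
# A toy semi-supervised sketch: most labels are hidden (-1), and the
# self-training wrapper assigns pseudo-labels from its own predictions.
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))  # feature vectors (assumed)
y = rng.integers(0, 2, size=300)
y[60:] = -1                     # only the first 60 samples stay labelled

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print("Pseudo-labels assigned during self-training:",
      np.sum(model.transduction_[60:] != -1))
```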

To summarise the above examples of learning processes: supervised learning uses labelled data with preassigned diagnoses; unsupervised learning operates on unlabelled data, allowing the machine to explore the data autonomously; and semi-supervised learning combines both approaches, blending guidance with autonomy for efficient learning. Each method suits different scenarios in medical image diagnosis, depending on the availability of labelled data.

 

Important Terms

Now we will introduce common AI terms relating to retinal image processing and analysis, and their usefulness in real-world application.

 

Image preprocessing

Image preprocessing is the process of preparing image data before supplying them to the AI algorithm. It may include data cleaning, artefact removal, resizing and contrast enhancement.18 Preprocessing improves the quality, consistency and interpretability of medical images before they are supplied to the learning algorithm that is trained into an AI model.
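As an illustration, a minimal preprocessing sketch in Python with OpenCV might resize a fundus photograph and enhance its contrast; the file name is a placeholder, and the parameter choices are common defaults rather than clinical recommendations.

```python
# A minimal preprocessing sketch: standardise size, then apply contrast-
# limited adaptive histogram equalisation (CLAHE), a common enhancement.
import cv2

image = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
image = cv2.resize(image, (512, 512))                   # standardise size

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)                           # boost local contrast
cv2.imwrite("fundus_preprocessed.png", enhanced)
```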

 

Feature extraction

The term ‘feature’ refers to the different ‘attributes’ or ‘independent variables’ describing a specific pattern in the image data. Features carry key information from the images, and can be handcrafted, extracted using third-party software, or learned automatically by deep learning.18

For example, RNFL thickness and ganglion cell thickness can be considered OCT-based thickness features, while the cup-to-disc ratio, obtained after delineation of the optic disc and cup, may be considered another feature measured from the OCT image. Feature extraction is vital in medical image analysis as it enables the creation of more efficient, robust and interpretable AI models, improving their diagnostic and predictive capabilities.
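To illustrate, here is a small sketch of turning segmentation results into a handcrafted feature, the vertical cup-to-disc ratio; the binary disc and cup masks are synthetic placeholders that would normally come from delineation of the image.

```python
# A sketch of deriving a handcrafted feature (vertical cup-to-disc ratio)
# from binary optic disc/cup masks. The masks here are synthetic arrays.
import numpy as np

disc_mask = np.zeros((256, 256), dtype=bool)
cup_mask = np.zeros((256, 256), dtype=bool)
disc_mask[60:200, 60:200] = True   # placeholder disc region
cup_mask[100:160, 100:160] = True  # placeholder cup region

def vertical_extent(mask: np.ndarray) -> int:
    """Height in pixels of the mask's bounding box."""
    rows = np.flatnonzero(mask.any(axis=1))
    return int(rows[-1] - rows[0] + 1)

vcdr = vertical_extent(cup_mask) / vertical_extent(disc_mask)
print(f"Vertical cup-to-disc ratio: {vcdr:.2f}")
```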

 

Training an AI algorithm

Training refers to the process of teaching the AI algorithm which specific patterns correspond to which specific conditions; the outcome of training is an AI model. Training on a diverse dataset is an important step towards building a model capable of making accurate clinical decisions in medical image-based diagnosis.
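For readers curious what the training process looks like in code, below is a minimal PyTorch training-loop sketch on random tensors; the network size, data shapes and number of epochs are arbitrary assumptions.

```python
# A minimal training-loop sketch: algorithm + data -> trained model.
# All tensors are random stand-ins for real features and diagnoses.
import torch
from torch import nn

X = torch.randn(100, 10)         # 100 samples, 10 features each (assumed)
y = torch.randint(0, 2, (100,))  # binary diagnoses (assumed)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):          # each pass refines the model slightly
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)  # how wrong are the current predictions?
    loss.backward()              # propagate the error back through layers
    optimiser.step()             # adjust the weights
print(f"Final training loss: {loss.item():.3f}")
```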

 

Testing an AI model

When an AI algorithm has been trained with a well-defined dataset, it is ready to perform its specific task (eg disease diagnosis) on new and unseen data. Testing on a held-out portion of the available dataset is always necessary to check whether a useful model has been learned, at least for that dataset; testing on external datasets is used to assess whether the model generalises. This evaluation phase is crucial to validate the AI model’s effectiveness in real-world scenarios, especially for disease diagnosis in the medical domain.
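As a small illustration, the sketch below evaluates predictions on held-out data and reports sensitivity and specificity, the kind of figures quoted for systems such as IDx-DR; both arrays are made-up examples, not study results.

```python
# A toy evaluation sketch: compare model output against ground truth on
# unseen data. The arrays are illustrative, not real clinical results.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])  # ground truth (assumed)
y_pred = np.array([0, 0, 1, 1, 0, 0, 1, 1, 1, 1])  # model output (assumed)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"Sensitivity: {tp / (tp + fn):.2f}")  # true positive rate
print(f"Specificity: {tn / (tn + fp):.2f}")  # true negative rate
```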

 

Transfer learning

Transfer learning involves leveraging knowledge gained from one dataset to improve performance on another, related dataset for a specific task. For example, many deep learning models are first trained on the ImageNet dataset, a large dataset consisting of millions of images spanning thousands of different classes (eg objects, animals, fruits and vegetables, vehicles and nature).

An AI algorithm trained on the ImageNet dataset can then be utilised in medical image-based diagnosis. This is done by modifying part of the algorithm and incorporating ‘pre-trained weights’, the set of knowledge gained from recognising patterns in the ImageNet dataset.19 The process is analogous to transferring the knowledge of how to ride a bicycle when learning to ride a motorcycle.

In medical AI, transfer learning allows models trained on one type of imaging data (eg a large publicly available dataset) to be adapted and fine-tuned for a different modality or a private dataset, which helps optimise efficiency and accuracy. With the strength of this knowledge transfer, transfer learning has proven a useful tool in medical image analysis (eg glaucoma diagnosis), especially for improving diagnostic accuracy.14
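A common way to apply this is sketched below with PyTorch and torchvision: load a network pre-trained on ImageNet, freeze its weights and replace the final layer for a two-class retinal task. The class count and the freezing strategy are illustrative assumptions, and the fine-tuning step on actual retinal images is omitted.

```python
# A transfer-learning sketch: reuse ImageNet 'knowledge' for a new task.
import torch
from torch import nn
from torchvision import models

# Load a ResNet with pre-trained ImageNet weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for a two-class retinal task (assumed classes).
model.fc = nn.Linear(model.fc.in_features, 2)

# The model now maps a 3-channel image to two class scores.
scores = model(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 2])
```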

 

Interpretability in AI

The opaque characteristics of deep learning techniques pose a hurdle for healthcare professionals seeking to rely on them fully and integrate them directly into the healthcare system. This is the ‘black-box’ problem, which signifies the inherent complexity of AI algorithms, particularly deep neural networks.

In such AI-based systems, the internal decision-making mechanisms can be challenging for humans to interpret or understand, where you know the inputs and outputs, but the intermediate steps and reasoning are not easily accessible. Explainable artificial intelligence (XAI) aims to shed light on this ‘black-box’ by providing methods to make the decision-making process of AI systems more transparent and understandable for users, who may not have specialised technical knowledge.20,21

Despite ongoing efforts, the majority of work in XAI is still in development, and only a limited number of algorithms are currently available to decipher existing black-box methods.22 XAI is important for clinicians to understand and validate AI-driven decisions, promoting transparency and encouraging collaboration between human experts and machine intelligence in medical image computing.
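One simple XAI technique that can be sketched briefly is gradient-based saliency: the gradient of the predicted score with respect to the input pixels indicates which regions most influenced the decision. The toy model and input below are placeholders, and real systems typically use more sophisticated methods such as class activation maps.

```python
# A minimal gradient-saliency sketch: which pixels drove the prediction?
import torch
from torch import nn

# A toy stand-in for a trained diagnostic network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in scan
score = model(image)[0].max()  # score of the predicted class
score.backward()               # backpropagate the score to the pixels

saliency = image.grad.abs().max(dim=1).values  # per-pixel importance map
print(saliency.shape)  # torch.Size([1, 224, 224])
```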

 

Classification tasks

In the context of retinal image analysis, classification using AI involves the identification and categorisation of data based on specific criteria (eg disease severity level classification or staging). The number of classes depends on the specific task that the machine is designed to solve, for example, image quality or grading, disease severity level or identifying normal and abnormal conditions.23

Machine learning models, especially supervised learning algorithms, play a crucial role here, being trained to recognise patterns corresponding to various eye conditions such as diabetic retinopathy, AMD or glaucoma. Both traditional machine learning and deep learning methods are widely used for classification purposes.24
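As a toy example of a multi-class task, the sketch below grades severity into several levels rather than making a binary normal/abnormal call; the five DR grades and random feature vectors are assumptions for illustration.

```python
# A toy multi-class classification sketch: severity grading with five
# assumed DR grades. Features and grades are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))    # image feature vectors (assumed)
y = rng.integers(0, 5, size=500)  # grades 0 (none) to 4 (proliferative)

grader = LogisticRegression(max_iter=1000).fit(X, y)
print("Predicted grade:", grader.predict(X[:1])[0])
```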

 

Segmentation tasks

In OCT imaging, various retinal layers are quantified. To achieve this quantification, the OCT scans must be segmented so that the thickness of retinal layers such as the RNFL, ganglion cell-inner plexiform layer or macular layers can be measured.25 The goal of image segmentation is to group similar pixels (units of an image) based on certain characteristics or features, and thus identify and separate different objects or areas within an image.

In fundus image analysis, segmentation models are widely used to localise the optic disc and cup and measure the cup-to-disc ratio, an important feature for glaucoma diagnosis. Segmentation models are also used to extract retinal blood vessel maps and lesions, which are important for diseases like diabetic retinopathy. Deep learning models, particularly convolutional neural networks (CNNs), excel in this area by automatically identifying and outlining these structures with remarkable accuracy (eg figure 5).
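In code, the essence of a segmentation network can be sketched as a few convolutions producing a per-pixel class map; the three classes below (background, optic disc, optic cup) are assumed for illustration, and real layer segmentation uses deeper U-Net-style encoder-decoder architectures such as the one behind figure 5.

```python
# A minimal segmentation sketch: convolutions that output one class
# score per pixel, from which a label map is taken with argmax.
import torch
from torch import nn

num_classes = 3  # background / optic disc / optic cup (assumed)
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, num_classes, 1),  # one score per class per pixel
)

image = torch.randn(1, 3, 256, 256)  # stand-in fundus image
mask = net(image).argmax(dim=1)      # predicted label for every pixel
print(mask.shape)  # torch.Size([1, 256, 256])
```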

Figure 5: An AI model (R2 U-Net) produced segmentation results for the widefield dataset, displayed overlaid on images. On the left is the entire image, with a grey dashed rectangle indicating the zoomed area featured in the two rightmost images. The bright green region represents GCL-IPL, while the dark green region corresponds to the retina (below IPL), choroid and sclera. Dotted lines align with the positions of the ground truth boundaries. Extracted from Kugelman et al25



Myths, Rumours and Challenges: Could AI Replace Eye Care Practitioners?

With the recent advent of large language models (LLMs) such as ChatGPT and Google Bard, there is a prevailing myth that AI could entirely replace healthcare practitioners.

Given the available computational resources, the development of novel algorithms on publicly available datasets, and researchers’ demonstrations that machine intelligence can surpass human-based diagnosis, the question arises: should we fear AI? The simple answer is ‘no’; however, it is imperative for humans to ensure the ethical use of AI.

Especially when it comes to automated medical diagnosis, annotated, labelled and clean data provided by clinicians are crucial for training and testing machines that produce accurate and trustworthy diagnostic results. Still, there are several major challenges to address concerning AI applications in ophthalmology.

The most important challenges involve improving reporting guidelines, ensuring security in multicentre datasets, and navigating ethical considerations in relation to AI.26 The future role of healthcare specialists working in this area may involve interpreting AI-generated results and contributing to the development and training of AI algorithms to ensure optimal patient benefits.

AI plays a crucial role in streamlining data-related tasks by automating repetitive data analysis and facilitating the merging and categorisation of datasets based on predefined rules. AI may be considered a helping hand for such tasks, boosting the productivity of clinicians and helping to manage the increasing volume of imaging data.

So, the bottom line is... rather than fearing AI, ECPs, especially primary and secondary care optometrists, should recognise AI as a valuable tool.17 

  • Md Mahmudul Hasan is a PhD student in the School of Computer Science and Engineering (CSE), University of New South Wales (UNSW), Sydney, Australia. He is supervised by Professor Erik Meijering, Professor Arcot Sowmya and Professor Michael Kalloniatis. His research project is funded by a UNSW Tuition Fee Scholarship (TFS). Erik Meijering is Professor of Biomedical Image Computing, School of Computer Science and Engineering, University of New South Wales (UNSW). Arcot Sowmya is Professor and Head of School of Computer Science and Engineering, University of New South Wales (UNSW). Michael Kalloniatis is Professor at the School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia, and adjunct Professor, School of Optometry and Vision Science, University of New South Wales (UNSW), Australia. Jack Phu is Lecturer, School of Optometry and Vision Science, University of New South Wales (UNSW) and Research Fellow, School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia.
    For further information, readers may be interested in reading an associated peer reviewed publication: Hasan, MM, Phu, J, Sowmya, A, Meijering, E, & Kalloniatis, M. (2023). Artificial intelligence in the diagnosis of glaucoma and neurodegenerative diseases. Clinical and Experimental Optometry, 1-17.

 

References

  1. Lim JI, Regillo CD, Sadda SR et al. Artificial intelligence detection of diabetic retinopathy: subgroup comparison of the EyeArt system with ophthalmologists’ dilated examinations. Ophthalmology Science 2023; 3: 100228.
  2. Abramoff MD, Lavin PT, Birch M et al. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. npj Digital Medicine 2018; 1.
  3. Bhaskaranand M, Ramachandra C, Bhat S et al. The value of automated diabetic retinopathy screening with the EyeArt system: a study of more than 100,000 consecutive encounters from people with diabetes. Diabetes Technology & Therapeutics 2019; 21: 635-643.
  4. Chakraborty U, Banerjee A, Saha JK et al. Artificial intelligence and the fourth industrial revolution: CRC Press, 2022.
  5. Jakhar D, Kaur I. Artificial intelligence, machine learning and deep learning: definitions and differences. Clinical and Experimental Dermatology 2020; 45: 131-132.
  6. Li Z, Wang L, Wu X et al. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Reports Medicine 2023.
  7. Ting DSW, Pasquale LR, Peng L et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol 2019; 103: 167-175.
  8. Al-Aswad LA, Ramachandran R, Schuman JS et al. Artificial Intelligence for Glaucoma: Creating and Implementing Artificial Intelligence for Disease Detection and Progression. Ophthalmol Glaucoma 2022; 5: e16-e25.
  9. Akter N, Phu J, Perry S et al. Analysis of OCT Images to Optimize Glaucoma Diagnosis. Imaging Systems and Applications; 2019: Optica Publishing Group.
  10. Liu S, Graham SL, Schulz A et al. A Deep Learning-Based Algorithm Identifies Glaucomatous Discs Using Monoscopic Fundus Photographs. Ophthalmology Glaucoma 2018; 1: 15-22.
  11. Gheisari S, Shariflou S, Phu J et al. A combined convolutional and recurrent neural network for enhanced glaucoma detection. Scientific Reports 2021; 11: 1-11.
  12. Son KY, Ko J, Kim E et al. Deep learning-based cataract detection and grading from slit-lamp and retro-illumination photographs: Model development and validation study. Ophthalmology Science 2022; 2: 100147.
  13. Meijering E. A bird’s-eye view of deep learning in bioimage analysis. Computational and Structural Biotechnology Journal 2020; 18: 2312.
  14. Hasan MM, Phu J, Sowmya A et al. Artificial Intelligence in the Diagnosis of Glaucoma and Neurodegenerative Diseases. Clinical and Experimental Optometry 2023: 1-15. https://doi.org/10.1080/08164622.2023.2235346
  15. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minimally Invasive Therapy & Allied Technologies 2019; 28: 73-81.
  16. Charng J, Alam K, Swartz G et al. Deep learning: applications in retinal and optic nerve diseases. Clinical and Experimental Optometry 2022: 1-10.
  17. Six O, Quantib B. The ultimate guide to AI in radiology. Artificial Intelligence in Healthcare Solutions 2019.
  18. Lu W, Tong Y, Yu Y et al. Applications of Artificial Intelligence in Ophthalmology: General Overview. Journal of Ophthalmology 2018; 2018.
  19. Ruamviboonsuk P, Kaothanthong N, Ruamviboonsuk V et al. Transfer Learning for Artificial Intelligence in Ophthalmology. In: Digital Eye Care and Teleophthalmology: A Practical Guide to Applications: Springer, 2023. p 181-198.
  20. Adadi A, Berrada M. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 2018; 6: 52138-52160.
  21. Hagras H. Toward human-understandable, explainable AI. Computer 2018; 51: 28-36.
  22. Samek W, Montavon G, Vedaldi A et al. Explainable AI: interpreting, explaining and visualizing deep learning: Springer Nature, 2019.
  23. Al Mouiee D, Meijering E, Kalloniatis M et al. Classifying retinal degeneration in histological sections using deep learning. Translational Vision Science & Technology 2021; 10: 9-9.
  24. Schmidt-Erfurth U, Sadeghipour A, Gerendas BS et al. Artificial intelligence in retina. Progress in Retinal and Eye Research 2018; 67: 1-29.
  25. Kugelman J, Allman J, Read SA et al. A comparison of deep learning U-Net architectures for posterior segment OCT retinal layer segmentation. Scientific Reports 2022; 12: 14888.
  26. Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. Advances in Ophthalmology Practice and Research 2022: 100078.