Artificial Intelligence (AI) recently attracted the attention of Prime Minister Theresa May during a roundtable on how the technology would interact with healthcare in the near future. The PM’s Indian counterpart, Narendra Modi, also attended the event to develop a proposed AI collaboration between the UK and India.
Showing its Pegasus clinical decision-making assistant at the event was UK-based Visulytix. The company’s CEO Jay Lakhani took time to demonstrate to the two leaders how the system worked and to explain the benefits of AI in healthcare.
But is the technology accessible to optometrists on the high street, or is it the reserve of multiples with access to ‘big data’ and hospital screening programmes? And does the notion of AI putting people out of a job have any resonance?
Lakhani certainly thinks the technology is something that will benefit practitioners throughout the profession. ‘We’ve had a number of optometrists use the product – it’s accessible to anybody. It’s a case of dragging images from a device into a web browser. The images are processed via the cloud and a diagnosis is returned instantly,’ he says.
The Visulytix Pegasus system looks for multiple pathologies, including glaucoma, diabetic retinopathy and age-related macular degeneration, using images from OCTs and fundus cameras. It uses deep learning, a type of AI that effectively looks through thousands of images and learns what pathologies and abnormalities look like.
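The principle behind such a system can be illustrated with a deliberately minimal sketch: rather than following hand-written rules, a model learns to separate normal from abnormal images purely from labelled examples. The toy code below is not Visulytix’s actual model — it trains a simple one-layer classifier on synthetic 8×8 ‘scans’ in which abnormal images contain a bright lesion-like patch:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(abnormal):
    """Synthetic 8x8 'scan': abnormal images get a bright central patch."""
    img = rng.normal(0.0, 1.0, (8, 8))
    if abnormal:
        img[3:5, 3:5] += 3.0  # lesion-like bright spot
    return img.ravel()

# Labelled training set: 200 normal, 200 abnormal images
X = np.array([make_image(a) for a in [0] * 200 + [1] * 200])
y = np.array([0] * 200 + [1] * 200)

# One-layer logistic model fitted by gradient descent on cross-entropy loss
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of 'abnormal'
    grad_w = X.T @ (p - y) / len(y)          # gradient of the loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The trained model scores unseen images: a high score means likely abnormal
test = np.array([make_image(0), make_image(1)])
scores = 1.0 / (1.0 + np.exp(-(test @ w + b)))
print(scores)  # the abnormal image should score much higher than the normal one
```

A production system differs mainly in scale: deep networks with millions of parameters, trained on many thousands of real, clinician-labelled scans rather than 400 synthetic ones.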
‘What we’re finding at the moment is that optometrists, especially in larger chains, have the required equipment but they don’t necessarily have the know-how to really properly diagnose patients from the images they collect,’ said Lakhani.
‘The idea is that they can show they are able to pick up pathologies early and at the same time, we’re reducing the false positives or false negatives of those conditions. Ultimately the idea is that they become a kind of primary GP for eyecare.
‘If you listen to Doug Perkins (Specsavers co-founder), he says the same thing. He doesn’t think optometrists should be limited to the role they currently play in the NHS. It should be enlarged to cover more to do with the eye, especially outside of the current refractive area.’
Resistance
As with the introduction of many new technologies in the workplace, AI has been met with a degree of apprehension within optometry over its potential to have a detrimental impact on scope of practice.
Lakhani was certain optometrists have nothing to fear: ‘From a legal perspective and from a patient requirement perspective, there’s going to be no room to replace anybody. Even if it was something AI companies wanted, it would never happen because the patient ultimately wants the contact time with a human, with a specialist.’
According to Lakhani, Pegasus upskills practice staff instead – whether it’s a technician using an OCT or an optometrist reviewing an image to spot signs of pathologies they may have missed.
‘If you have a relatively new optometrist, say with five years’ experience in practice, we’re giving them access to the equivalent experience of a 40-plus year ophthalmologist, helping improve their accuracy. We’re not talking about replacement, we’re talking about improved productivity from an existing member of staff.’
Opportunities
At the same time as getting more from staff, AI offers the chance to get more from practice equipment.
Interpretation of OCT images has long been a subject of discussion within optometry. Indeed, both the uptake of the technology and shortcomings in practitioners’ interpretation of its images were highlighted in the Foresight report in 2015, which estimated that 15-20% of practices had an OCT. The report said OCT was a powerful example of the industry creating its own momentum while at the same time positioning itself for wider opportunities in NHS community services in the future.
But the report also highlighted a ‘lack of optometric skillsets’ in the interpretation of OCT images. ‘Software algorithms within the OCTs themselves may indicate disease, while nerve fibre analysis will automatically track progression and compare measurements, but OCT currently requires the clinician to interpret results and decide on appropriate action. This is likely to remain the case for some years ahead,’ said the report.
Lakhani agreed: ‘Equipment like an OCT doesn’t come with a huge amount of training, so in theory, ECPs can spot the diseases that an OCT can spot, but haven’t necessarily been trained to interpret the results.’
With a deep learning system such as Pegasus helping realise the potential of OCT, there are also a number of opportunities for increased patient loyalty for independents, said Lakhani. ‘As a patient, if you have the choice of going to a practice that has lots of diagnostic equipment with AI that can actually spot conditions, then it’s likely the reaction to seeing detailed diagrams and analysis is going to be positive. An independent practice can definitely subscribe to use an AI service and then essentially pass on the costs to the patient.
‘If you explain the pathologies well, you will probably have a patient coming back forever, because they have faith in what you’re doing. That’s certainly one of the reasons why the multiples are interested as well – retention is one of the best metrics for return on capital.’
If a practice does not have an OCT, the Pegasus system works with fundus images, though it will not be able to spot conditions such as macular degeneration. The software is also compatible with Optomap images, which capture a wider field of view.
‘AI is even more useful there because you’ve got more data that needs to be analysed quickly, so there’s an increased risk of missing pathologies,’ said Lakhani. ‘In the peripheral parts of the Optomap scans you get artefacts that might look like diabetic retinopathy, but could actually be something else.’
For an independent practice, the investment in an OCT device is significant, but according to Lakhani, AI offers a way of de-risking that purchase and presents a good opportunity for increasing return on investment while offering improved clinical care.
Multiple view: Professor Harrison Weisinger, group professional services advancement director, Specsavers
To begin with, machine learning – a subset of AI – has the potential to outperform humans in processing or decision-making tasks that require the rapid analysis of large quantities of data. A familiar example of machine learning is the detection of fraudulent credit card transactions by programs that can simultaneously assess a multitude of parameters, including the amount, time and location of the purchase as well as the item category itself (against a known spending pattern).
If you mainly use your credit card for groceries or going to the local cinema, you should not be surprised to get a real-time call from the bank if you decide to buy curtains from a bricks and mortar speciality store in Hungary. This type of application is also what allows online retailers to make effective ‘suggested for you’ offerings.
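Stripped to its essentials, this kind of screening is a scoring function over a handful of transaction features compared against the customer’s known pattern. The sketch below is purely illustrative — the feature names, thresholds and weights are invented for this example, not any bank’s real logic, and real systems learn such rules from millions of labelled transactions:

```python
# Illustrative only: a hand-rolled anomaly score over transaction features.

KNOWN_PATTERN = {
    "typical_amount": 40.0,                      # average spend, e.g. groceries
    "usual_countries": {"GB"},
    "usual_categories": {"groceries", "cinema"},
    "usual_hours": range(8, 23),                 # when this customer normally shops
}

def fraud_score(amount, country, category, hour, pattern=KNOWN_PATTERN):
    """Return a 0-4 anomaly score; higher means more suspicious."""
    score = 0
    if amount > 5 * pattern["typical_amount"]:
        score += 1                               # unusually large purchase
    if country not in pattern["usual_countries"]:
        score += 1                               # unfamiliar location
    if category not in pattern["usual_categories"]:
        score += 1                               # unfamiliar item category
    if hour not in pattern["usual_hours"]:
        score += 1                               # unusual time of day
    return score

# Routine cinema trip vs. curtains from a speciality store in Hungary at 3am
print(fraud_score(12.0, "GB", "cinema", 19))     # 0 -> no alert
print(fraud_score(350.0, "HU", "homewares", 3))  # 4 -> real-time call from the bank
```

The ophthalmic analogue is the same idea with different inputs: instead of amount and location, the features are measurements and pixels from a retinal image, and the score flags possible pathology rather than fraud.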
Having said that, health practitioners – including optometrists – process an extraordinary amount of information in arriving at a clinical finding or recommendation. Often this information is non-verbal or arises from ‘gut feel’, such as when a patient hesitates to answer the question ‘is it better one… or two?’. In that respect, it will be a long time before computer programs (armed with an array of sensors) could adequately substitute for an experienced healthcare practitioner.
Of course, this is not to say machine learning has no role in our industry – indeed, there are various opportunities for machine learning to improve the provision of eyecare and optometry to the advantage of patients and customers.
The most obvious example of this is in the interpretation of ophthalmic images. There are literally dozens of labs and companies in the midst of developing applications to detect any number of eye conditions from retinal photographs and OCTs. Specsavers has remained very close to many of these, as we see great value in providing clinical decision support for our practitioners. Whether it be suggesting a case of glaucoma or ruling out clinically significant pathology on an OCT, patients and their optometrists will be the winners. Another major winner will be the health care system, through potential reductions in false positive referrals. Indeed, there are several examples of health services exploring the use of automated image analysis in the context of a diabetic retinal screening program.
In all likelihood, such applications will start as quality assurance tools. I expect these applications will be available to individual practitioners, either by subscription or possibly as a product incorporated into each device. Over time, as they improve capability (through incorporating input from other channels such as the information within the clinical record), they will transition through clinical decision support, to triage and possibly ultimately become the primary diagnostician. This assumption is predicated upon these applications gaining access to and ‘learning’ from huge networks of data – and given the barriers presented by regulation and IT infrastructure, this will be a long time from now.
Machine learning is unlikely to get much pushback from practitioners because at the end of the day, everyone wants a better outcome for their patients.
Trust and comfort will develop as the products improve but in reality, it will not feel any different to when a field analyser suggests a defect in the visual field, or an OCT suggests thinning of the retinal nerve fibre layer.