Being brought up with movies like Terminator, I always imagined artificial intelligence, or AI as it’s increasingly called, would be this terrifying army of thinking metal beings who saw humans as obsolete. It’s refreshing then to know that not only are we using AI in our lives each day (for example, Air New Zealand’s OSCAR bot), but that incredible advances in the way we can train computers to learn could lead to better healthcare outcomes for some of our most vulnerable patients – and a big chunk of it is happening in the Cloud.
AI lives among us
There have been some early wins for AI in the optics industry. A study of Google’s deep learning neural network for the diagnosis of diabetic retinopathy found that machine learning matched or outperformed human experts in diagnosing and grading the severity of the condition (Gulshan et al., 2016). Last year, the US Food and Drug Administration (FDA) approved an automated, AI-based system for diabetic retinopathy grading. Meanwhile, in Maryland, Drs John Ladas and Uday Devgan have devised their own formula for IOL calculation, the Ladas Super Formula 1.0, and are now incorporating AI technology to further improve its accuracy, believing it will revolutionise LASIK surgery. Not to be outdone, in the UK, London’s famous Moorfields Eye Hospital and University College London’s Institute of Ophthalmology have developed an AI system able to recommend the correct referral decision for more than 50 eye diseases with an accuracy of 94%.
Meanwhile, in New Zealand, Dunedin-based Medic Mind has developed an AI platform that can be trained to analyse medical images and learn to diagnose a range of conditions.
“You supply it with data sets of images, for example dogs and cats, and then train it to recognise what they are. Then you evaluate it by providing it with, say, another image of a dog and ask it what it sees,” explains Glenn Linde, Medic Mind’s chief technology officer.
Because it is AI, once you’ve given it the basic data it can then ‘learn’ for itself what the images are, he says. “You could supply it with an image of glaucoma and some images that are not glaucoma and it will train itself to learn whether a new image has glaucoma or not. We are currently using public data sets, but you can supply your own as well.”
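The workflow Linde describes – supply labelled examples, train, then evaluate on a new image – can be sketched with a toy classifier. The nearest-centroid model below is purely illustrative: Medic Mind trains neural networks, and the four-pixel “images” and labels here are invented.

```python
# Toy sketch of the supervised workflow described above: label examples,
# "train" a model, then ask it what it sees in a new image.
# A real platform trains a neural network; a nearest-centroid classifier
# over flattened pixel values stands in here for illustration.

def train(labelled_images):
    """labelled_images: list of (pixels, label); returns per-label centroids."""
    sums, counts = {}, {}
    for pixels, label in labelled_images:
        if label not in sums:
            sums[label] = [0.0] * len(pixels)
            counts[label] = 0
        sums[label] = [s + p for s, p in zip(sums[label], pixels)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict(model, pixels):
    """Return the label whose centroid is closest to the new image."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lab: dist(model[lab], pixels))

# Tiny fake 4-pixel "images": bright ones are "dog", dark ones are "cat".
training_set = [
    ([0.9, 0.8, 0.9, 0.7], "dog"),
    ([0.8, 0.9, 0.8, 0.9], "dog"),
    ([0.1, 0.2, 0.1, 0.2], "cat"),
    ([0.2, 0.1, 0.2, 0.1], "cat"),
]
model = train(training_set)
print(predict(model, [0.85, 0.9, 0.8, 0.8]))  # another bright image -> dog
```

Swapping “dog” and “cat” for “glaucoma” and “not glaucoma” is, conceptually, all that changes in the scenario Linde describes; the hard part a real platform handles is the model itself.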
The particularly exciting thing about Medic Mind is that it’s not a single-use app for one type of condition, but an open-source, cloud-based platform, currently free while it’s being beta-tested. Any clinician could use it, with the help of a developer, to create software to aid them in the diagnosis and treatment of patients.
“It’s basically drag and drop. You just go into the Cloud and then drop your own images in there and it will just go off and train itself,” says Linde. “Once it’s trained you can give the API to other people, make the neural network public or make it part of an app.”
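Linde’s point about handing a trained model to other people as an API can be sketched as a minimal web service. Everything below is hypothetical – the `/predict` route, the JSON shape and the stand-in model are invented for illustration, not Medic Mind’s actual interface.

```python
# Minimal sketch of exposing a trained model as a web API, so other
# people and apps can query it over HTTP without touching the model.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def model_predict(pixels):
    """Stand-in for a trained classifier: bright images read as 'dog'."""
    return "dog" if sum(pixels) / len(pixels) > 0.5 else "cat"

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"pixels": [0.9, 0.8]}
        length = int(self.headers["Content-Length"])
        pixels = json.loads(self.rfile.read(length))["pixels"]
        body = json.dumps({"label": model_predict(pixels)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve for real:
# HTTPServer(("127.0.0.1", 8000), PredictHandler).serve_forever()
```

A phone app, a clinic system or another cloud service could then POST an image to this endpoint and get a label back, which is the “make it part of an app” scenario Linde mentions.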
Dr Sheng Chiong Hong, co-founder and director of oDocs, a New Zealand-based social enterprise designed to produce affordable eye care technology and a backer of Medic Mind, says Medic Mind’s potential is huge. “It’s basically an AI-producing factory. If a clinician had an idea, they could use the AI on our website to design an algorithm and export the API into various applications.”
Hong says although Medic Mind is learning from human users in a supervised and semi-supervised manner, it can grow to be a lot faster than humans. “We can train it in an hour to do what takes a human years to learn.”
That said, Hong is quick to note the scope of Medic Mind is limited to assisting medical professionals, not taking over their roles. “People worry about decision-making, around care for a patient, passing from human to machine. But Medic Mind doesn’t make decisions, it makes recommendations and opens up services to a wider range of people.”
In many ways though, the real advance here is not the technology itself but the willingness to share it on an open source platform. Open source AI libraries in the Cloud have been heralded as the future of the sector, making it cheaper and easier to use and removing the relative monopoly over AI advances so far held by big companies like Amazon, Apple and Google.
“The open source model has become the dominant model, as major players like Google, Amazon and Microsoft seek to take advantage of the ‘network effect’,” says Ben Reid, executive director of the Artificial Intelligence Forum of New Zealand, which released a report on AI in Aotearoa, Artificial Intelligence: Shaping the Future of New Zealand, in May 2018.
“By virtue of having this broad participation and usage across the industry you get this network effect of quality, of variety, of people able to use and develop it.”
This goes some way towards eliminating human bias in the data sets used for training AI, improving the objectivity of applications and the ultimate outcome. It also gives the software freedom to evolve in the direction of most demand and maintains accountability and transparency around development, says Reid.
Like Hong, Reid is quick to reassure worried healthcare professionals they won’t lose their jobs to an ‘opti-bot’. “AI allows people to be more productive. It doesn’t replace people, it supports professional work. They are tools; they allow us to off-load repetitive manual tasks and focus on the more high-level stuff.”
The potential of AI has a huge knock-on effect in the area of telemedicine, opening up diagnostic services to groups of patients who may have previously struggled to access medical care and offering support to practitioners in remote areas.
“Apps and smartphones are just an extension of a computer that is more portable and more accessible,” says Hong, noting that AI platforms like Medic Mind are easily packaged into apps. “What is great about having eye care apps is that we can do a lot more things we couldn’t do previously. If you have a phone and a camera, you can do a consultation, extending the range of consultations into the comfort of a patient’s home.”
This could have a positive impact in countries like New Zealand and Australia, where there are patients living in remote areas who may find it hard to travel. It also affects those living in some of the more remote South Pacific Islands, which New Zealand currently supports with eye care services. For example, current diabetic screening in Fiji can involve an individual travelling for hours to see an optometrist in a major centre. Then they may have to wait weeks for the results. The cost and time to travel plus the delay in diagnosis could all result in a poor health outcome for the individual, depending on their circumstances. “But if we could incorporate AI technology like this [Medic Mind], they could be screened on location [via a smart phone] and get an immediate result,” says Hong. “The tech would then advise the patient what to do next, such as see an optometrist.”
This type of diagnostic tool is already being used in radiology and Hong anticipates it will be in common usage in optics within five to 10 years.
Wearable tech
According to CCS Insight, the market for wearable tech will reach US$25bn by 2019, driven largely by the market for health-assistive technology. When it comes to optics, contact lenses are leading the wearable tech sector.
We’ve already got contact lenses (CLs) that do many things beyond correcting vision, from adapting to changing light, such as Johnson & Johnson’s new Acuvue Oasys with Transitions lens released last year, to measuring intraocular pressure, like Sensimed’s Triggerfish.
Coming through the pipeline are CLs that dispense medication, with at least one - OcuMedic’s drug-dispensing CL product - currently in clinical trials. But the real end game seems to be connecting contact lenses to the Internet of Things - a network of devices, including vehicles and home appliances, that contain electronics and software allowing them to connect, interact and exchange data - where users can access cloud-based services for, say, their medical or recreational needs.
On returning from Snowvision, last year’s popular biennial winter optometry conference in Queenstown, Tauranga-based optometrist Alex Petty was quick to share the thoughts of keynote speaker Jerome Legerton, an American optometrist and inventor, about the possibilities of future CLs. Just like a smart watch, smart CLs will connect to the internet and provide a visual display to the wearer. You could use them as a GPS, for example, or to search for an item you wish to buy. It all sounds very Mission Impossible, but Petty says Legerton’s presentation was very plausible. “The major issue is battery life. You may need to wear the lenses in conjunction with another wearable device or they may come fitted with solar batteries.”
But to date, smart contact lens progress has been slow. Alcon and Verily (Google’s life sciences tech development company) teamed up in 2014 to develop a smart CL that would help monitor diabetic patients’ glucose levels, but by 2016 progress had stalled and the plug was pulled in November last year. The tiny microchip-embedded lens, paired with a smartphone, hit a number of technological and biological stumbling blocks, from LEDs containing arsenic to research showing tears aren’t a reliable measure of glucose levels. Its glucose-monitoring CL might have been shelved, but Verily’s researchers are continuing to develop other eye-related technology, including a CL to correct presbyopia and a new type of intraocular lens for cataract surgery.
Across the world, however, researchers from the Ulsan National Institute of Science and Technology in South Korea claim to have developed a glucose-measuring smart lens using transparent, flexible components that has been successfully tested in rabbits. Meanwhile, in 2016, Apple teamed up with medical supplies company EPGL to develop its own augmented reality lens and, last year, Samsung registered a patent in South Korea for a contact lens that projects images onto the eye.
“The dream of augmented reality contact lenses is real. It’s more about timing,” says Reid, noting it sometimes takes a while for a technology’s capabilities to catch up with the ideas. “Some people were doing Palm Pilots back in the ‘90s, for example, but it didn’t go anywhere. It was a sign of what was to come – Apple released the first iPhone in 2007 and these days everyone relies on a smartphone.”
Once some of these tech tools become a reality, the optics industry will see a marked growth in the range of services it provides and the breadth of patients it serves, according to Christopher Quinn, president of the American Optometric Association. “Doctors of optometry will need to be a part of the team if smart lenses are to succeed,” he said in a September 2017 interview. “We need experts in fabricating contact lenses in order to make this a reality. Plus, we’ll need doctors to fit the lenses and assess ocular health and lens performance.”
Clever drug systems
Many conditions today require a whole suite of responses, from eyewear to surgery to occupational therapy. Often, built into these responses is the need for patients to take certain drugs, which raises a whole pile of tech-related opportunities and issues, from how to administer and monitor the effect of the drugs to patient compliance.
Adhering to regular use of prescribed medications is one of the key issues for both recovery and stability in many eye conditions, and a constant headache for healthcare professionals and patients alike. The reasons patients don’t take their medications are complex, rooted in everything from socio-economic issues to geographical factors. A piece in the New York Times from December 2017 noted the only approach to increasing medication compliance in the US that has consistently worked is reducing the cost of those medications. In New Zealand, however, with our relatively robust public health system that includes heavily subsidised prescription medications, it seems unlikely cost is a major factor in compliance.
One 2012 US study¹ found that 62% of participants claimed to have forgotten to take their meds, with age, perception of need for the medication and gender (men being the biggest problem here) coming in as common predictors. Many tech companies are looking into smart solutions for this age-old problem, with some technological breakthroughs for the controlled, slow release of medications already on the market and a host more expected to be released soon.
US firm AdhereTech, for example, has launched a smart pill bottle which monitors patient use of the medication. Missed doses trigger reminders via text message, a phone call, an alarm on the bottle or even by alerting a local outreach team who can visit. The smart bottle can even report your non-compliance back to your practitioner. The bottle doesn’t need to be programmed; it works straight out of the box and doesn’t rely on the user having any other tech. “The average age of patients using the AdhereTech pill bottles is 70 years old. For some of those patients, an AdhereTech pill bottle is the first piece of technology they have used,” said AdhereTech CEO Josh Stein in a recent interview. “Inside each pill bottle is a worldwide, cellular chip, the same technology inside a cell phone. Anywhere the pill bottle goes around the globe, sensors measuring use and contents beam the data back to us 24/7.” Plus, unlike some app-based counterparts, you don’t need a mobile phone, said Stein.
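The escalating reminders Stein describes – text, then call, then alarm, then outreach – could be modelled roughly as follows. The thresholds and channel names here are invented for illustration, not the company’s actual logic.

```python
# Rough sketch of escalating dose reminders: the longer a scheduled
# dose goes unrecorded, the stronger the intervention.
# Times are hours since the dose was due; all thresholds are invented.

ESCALATION = [
    (1, "text message"),
    (4, "phone call"),
    (12, "alarm on bottle"),
    (24, "alert local outreach team"),
]

def interventions(hours_overdue, dose_recorded):
    """Return every reminder channel that should have fired by now."""
    if dose_recorded:
        return []  # bottle opened on time, nothing to do
    return [channel for threshold, channel in ESCALATION
            if hours_overdue >= threshold]

print(interventions(5, dose_recorded=False))
# -> ['text message', 'phone call']
```

The point of the design is that most lapses are resolved cheaply (a text) and only persistent non-compliance escalates to a human visit.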
In New Zealand, medical device company Adherium has recently gone global with its Hailie smart inhaler and associated app, which improves asthma medication adherence. The product, based on research from the University of Auckland and originally part-funded by Cure Kids, uses Bluetooth technology to allow users and their parents or carers to track compliance. Studies show a 59% increase in preventative medicine usage among adults using Hailie and an 80% reduction in hospital admissions for children.
While respiratory conditions are currently its main focus, the company has plans to extend its smart solutions to a range of medications to improve adherence for many more patients. “We have huge plans. Asthma and COPD are our starting points, but adherence is a much bigger issue than just those two diseases,” said founder and executive director Garth Sutherland after Adherium won the Most Innovative Hardware Product category at the 2017 Hi-Tech Awards. “What we’ve proven is our digital technology can have a big impact on chronic diseases. That’s what our focus is and that’s a $500bn [health] issue globally. It’s a big problem to crack. It needs to be dealt with and we’re only just getting going.”
Meanwhile, a host of companies is working on different controlled- and slow-release drug advancements, from surgically implanted ports for patients with neovascular age-related macular degeneration to sustained-release drug delivery implants such as Ozurdex, a dexamethasone intravitreal implant, and Iluvien, a fluocinolone acetonide implant for diabetic macular oedema. Other companies are working on plugs, extraocular rings, nanoparticles, stem cells and encapsulated cell technology in a bid to solve the problem of patient compliance. At some point, all of this will, no doubt, be linked to the Cloud so our health, our compliance and the technology that’s helping to keep us alive or stop us becoming blind can be monitored remotely.
1. Gadkari, AS and McHorney, CA. Unintentional non-adherence to chronic prescription medications: how unintentional is it really? BMC Health Services Research, 2012.