Artificial intelligence (AI) use in eyecare is moving fast, but are regulation, ethics and real-world integration keeping pace? As clinicians and regulators grapple with how to harness its potential without compromising patient safety or equity, Drew Jones examines where AI use in eyecare stands and what comes next
An interim position statement from the UK’s College of Optometrists says artificial intelligence (AI) offers significant potential to enhance eyecare, from improving diagnostic accuracy to streamlining workflows and supporting better patient outcomes. But there is a catch: innovation is outpacing regulation. “[AI’s] safe and effective integration into clinical practice requires an informed, evidence-based approach to protect patient safety, comply with regulations and ensure equitable and effective care. Currently, AI innovation is outpacing the regulatory regime and there is an ongoing state of regulatory flux,” the college says.
To address this, the college says workforce training is essential and that clinicians should develop AI literacy as part of their professional development. It also emphasises the importance of recognising bias when assessing AI-generated insights. “This should include recognising any inherent biases in the datasets used to train the AI and user biases and understanding how these affect the use of AI-derived information in clinical reasoning and decision-making.” The UK college has established an AI Expert Advisory Group, comprising eye health professionals, researchers and experts from the AI field, to identify the key issues relating to AI in optometry and primary eyecare.
Separately, the UK Association of Optometrists (AOP) launched an AI and Technology resource hub to help optometrists and dispensing opticians assess emerging tools and use them safely in practice.
AOP board and AI task group member Julie-Anne Little says AI is becoming a part of everyday life and its role in eyecare is growing. “AI already supports business efficiency in eyecare and it’s increasingly used to solve problems and assist clinical decision making by identifying patterns in data. Clinicians remain responsible for decisions and AI is never a replacement for professional judgement, so practitioners must understand a tool’s limitations.”
AOP policy and governance manager Paul Alexander says the hub is intended to give practitioners clear, accessible guidance to engage with emerging technologies safely and ethically, while AI task group member Fiona Buckmaster says AI tools used in healthcare must meet strict medical device regulations before reaching patients.
On this side of the world, regulatory bodies are responding. The Optometrists and Dispensing Opticians Board (ODOB) has published guidelines for AI use in optometry and dispensing, broadly aligning with the UK’s approach, says Suzanne Halpin, ODOB chief executive and registrar. “The Office of the Privacy Commissioner here in New Zealand also has a section of its website on AI, which may be helpful,” she adds. Even so, guidance remains fragmented and evolving, leaving clinicians to navigate uncertainty in the interim.

Suzanne Halpin
Gerhard Schlenther, the Royal Australian and New Zealand College of Ophthalmologists’ (RANZCO’s) head of Policy and Advocacy, says that while the college is aware that some Fellows might contribute to advancements in AI technology, “RANZCO does not have a position statement on AI. We take guidance from Ahpra [Australian Health Practitioner Regulation Agency] and MCNZ [Medical Council of New Zealand].”
Ahpra and the national boards released general healthcare guidance titled ‘Meeting your professional obligations when using artificial intelligence in healthcare’ in 2024, which was backed by the Optometry Board of Australia (OBA). The OBA stressed that optometrists must apply the same professional obligations to AI-supported care as they do to any other aspect of practice. The guidance says practitioners must understand an AI tool’s intended use and limitations, ensure appropriate transparency and consent where patient data are involved and “apply human judgement to any output of AI”.
MCNZ published similarly broad guidance, ‘Guidance on using artificial intelligence in patient care’, in March 2026.
Bias in, bias out
Equity concerns sit at the centre of the debate. The UK college warns that AI must reduce, not reinforce, health inequalities, which it says requires “diverse datasets, regular bias audits and a commitment to inclusive care delivery”. This concern is echoed in Aotearoa, where New Zealand Association of Optometrists (NZAO) immediate past president Hadyn Treanor says the risk of entrenching inequities is one of the biggest issues.

Hadyn Treanor
While not strictly AI, OCT analysis software highlights the problem, says Treanor. “These datasets have the capacity to use ethnic variations, but some of the databases for the ethnic groups are incredibly small. In an ideal world, the software would add the data from these scans to its database to learn from, but this would require a secure way that the information could be anonymised.” Such robust anonymisation systems are not yet fully realised, he says. There is also a risk that algorithms trained on biased clinical samples could misinterpret disease as normal: if AI were learning only from scans performed in practices or hospitals where there was a suspicion of disease, it could determine that the disease state was ‘normal’.
The rise of AI scribes
While diagnostic AI dominates headlines, one of the fastest-growing applications here is far more mundane: note-taking. AI scribes are beginning to reduce administrative burden, converting patient conversations into clinical notes and correspondence. Australia-based tool iScribe has already gained a foothold in ophthalmology.
Former Royal Australian College of General Practitioners president Dr Nicole Higgins says these tools can improve both workflow and patient experience by allowing clinicians to focus on the consultation rather than the screen. She says the tool, made by Akuru, recently gained 15–20% of the Australian ophthalmology market. Such tools, Heidi being another commonly mentioned example, can help automate parts of clinical note-taking. Dr Higgins cautions that patients must provide consent for AI scribes to be used during a consultation and says the onus is on medical practitioners to ensure the tool they use complies with local laws for safe data collection and storage.
At this stage, the number of New Zealand optometry practices using note-taking software remains small, says Treanor. “The need for it to be an add-on to existing practice management software [PMS] seems to be the biggest barrier to more widespread adoption. I expect that integration into existing PMSs would greatly increase its use.”
Local innovation, global questions
AI development is also happening in New Zealand. Wellington-based ophthalmology registrar Dr James Lewis has trained a neural network on more than 27,000 ocular images to detect and measure ptosis. He points to New Zealand’s strong privacy framework as a protective baseline, though acknowledges that clearer guidance is still needed as AI evolves.
“New Zealand has robust [privacy] protection under the existing legislative framework. Through statutory interpretation, it provides that no ‘distinctive feature or a personal connection of some kind’ should be disclosed without written consent. While guidelines are evolving, my hope is that such strong guidance protects patients in the interim,” Dr Lewis says.

Dr James Lewis
Beyond clinical use, AI carries environmental and resource implications. Google reported in 2025 that the median text prompt to its Gemini assistant used 0.24Wh (watt-hours) of electricity and 0.26ml of water, while OpenAI chief executive Sam Altman separately said the average ChatGPT query used about 0.34Wh (equivalent to running a 10W LED bulb for two minutes) and 0.32ml of water. Microsoft-linked researchers also estimated a median of 0.34Wh per query in a 2025 paper, but noted that longer, reasoning-heavy queries could consume 4.32Wh of electricity – equivalent to 26 minutes’ use of a 10W LED bulb.
Industry is well aware of this and beginning to respond, with measures already in place to offset the environmental impact of AI, says Dr Lewis. He cites Spark’s partnership with Aventuur, a surf park creator, to use waste heat from a data centre to heat a surf lagoon on Auckland’s North Shore. “It’s an amazing example of the circular economy. With time, I expect that market forces and legislative pressures will drive companies to undertake proactive carbon mitigation strategies if they wish to remain palatable. You will see that the leading forces are already engaged in that.”
Humans in the loop
For now, AI remains firmly in a supporting role. Dr Lewis draws a parallel with aviation: autopilot has existed for decades, yet pilots remain essential. The same applies in eyecare, particularly when vision-threatening disease is at stake. AI systems are still prone to hallucinations and bias. Until these are resolved, clinician oversight is non-negotiable.
An often-overlooked consideration is that the increasing use of AI will change the caseload of practitioners and the emotional weight of such interactions, he adds. “It is easy to imagine a future where there are no 'easy' consultations, just a relentless stream of AI-optimised 'difficult cases'. Will workflow training and salary keep pace with this?”
In the end, nothing replaces the time and energy spent with a patient. Dr Lewis sums it up: “People remember not what you did, but how you made them feel. My vision is towards that future: where AI enables more conversation, less bias and more touch – meaning better outcomes for all.”
Ed’s note: Experts gathered in Auckland in mid-April 2026 for the first-ever Law, Technology and Government Conference, examining how governments can support innovation while managing the risks of new technologies. The conference, ‘AI policy, regulation and procurement in focus’, was designed to generate ideas supporting smart innovation, procurement and regulation, according to Associate Professor Marta Andhov, Auckland Law School and Business School. She says many governments are moving faster than the rules: “Governments are racing to purchase AI systems, often without the frameworks to buy them responsibly or properly regulate them.”