US engineers have found that the key to robots and self-driving cars ‘seeing’ the world more naturally and safely may be optical coherence tomography (OCT).
Writing in Nature Communications, the research team, led by Dr Joseph Izatt, professor of biomedical engineering and ophthalmology at Duke University, noted that many robotics companies rely on Light Detection and Ranging (LiDAR) sensors, which map the environment by timing the reflections of short laser pulses. However, the detector can be overwhelmed by other LiDAR systems or even by sunlight, depth resolution is limited, and scanning large areas can take an impracticably long time.

The Duke team therefore drew on their expertise in OCT to modify frequency-modulated continuous wave (FMCW) LiDAR, which works on the same principles as OCT. Their method processed visual data 25 times faster than previous FMCW LiDAR systems while still achieving submillimetre depth accuracy and much greater range than traditional LiDAR.

“These are exactly the capabilities needed for robots to see and interact with humans safely or even to replace avatars with live 3D video in augmented reality,” said Prof Izatt.
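For readers curious about the underlying principle, FMCW LiDAR sweeps the laser frequency continuously and mixes the returning light with the outgoing sweep; the resulting beat frequency is proportional to the target's distance. The sketch below illustrates that relationship in Python. It is not the Duke team's code, and the chirp bandwidth and duration values are made-up placeholders, not parameters from the study.

```python
# Illustrative sketch of the FMCW ranging principle (not the Duke
# team's implementation). A linear frequency chirp of bandwidth B
# over duration T, delayed by the round trip 2d/c, beats against
# the outgoing sweep at f_beat = (B/T) * (2d/c).

C = 3.0e8            # speed of light, m/s
BANDWIDTH = 4.0e9    # chirp bandwidth B in Hz (assumed value)
CHIRP_TIME = 1.0e-4  # chirp duration T in s (assumed value)

def beat_from_distance(d_m: float) -> float:
    """Beat frequency produced by a target at distance d_m metres."""
    return (BANDWIDTH / CHIRP_TIME) * (2.0 * d_m / C)

def distance_from_beat(f_beat_hz: float) -> float:
    """Invert the relation: d = c * f_beat * T / (2 * B)."""
    return C * f_beat_hz * CHIRP_TIME / (2.0 * BANDWIDTH)

if __name__ == "__main__":
    d = 12.5  # metres
    fb = beat_from_distance(d)
    print(f"beat frequency for a target at {d} m: {fb:.1f} Hz")
    print(f"distance recovered from that beat: {distance_from_beat(fb):.3f} m")
```

Because distance maps to a frequency rather than a nanosecond-scale time of flight, the receiver can measure it with an ordinary spectral analysis of the mixed signal, which is also why FMCW detection is largely immune to sunlight and to other LiDAR units operating nearby.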