Dr. Michael Milford from Queensland University of Technology is researching vision-based navigation, which uses camera imagery and simple mathematical algorithms to uniquely identify locations and could replace capital-intensive satellite GPS systems. This decentralized approach could widen the technology's scope and improve its accuracy, while also making navigation much cheaper and simpler.
This new approach to visual navigation has been dubbed "SeqSLAM" (Sequence Simultaneous Localisation and Mapping), and it combines a local best-match stage with sequence recognition to lock in locations. Dr. Milford explains how it works:
SeqSLAM uses the assumption that you are already in a specific location and tests that assumption over and over again. For example, if I am in a kitchen in an office block, the algorithm makes the assumption I'm in the office block, looks around and identifies signs that match a kitchen. Then, if I stepped out into the corridor, it would test whether the corridor matches the corridor in the existing data of the office block layout. If you keep moving around and repeat the sequence for long enough, you are able to uniquely identify where in the world you are using those images and simple mathematical algorithms.
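The core idea described above, scoring whole sequences of image comparisons rather than trusting any single best-matching frame, can be sketched in a few lines of Python. This is only an illustrative toy, not Dr. Milford's implementation: the frame representation (flattened, downsampled grayscale vectors), the sum-of-absolute-differences metric, and the fixed constant-speed trajectory through the difference matrix are all simplifying assumptions.

```python
import numpy as np

def seqslam_match(query_seq, db_seq, seq_len=5):
    """Toy SeqSLAM-style localisation (illustrative sketch only).

    query_seq: recent camera frames as flattened grayscale vectors
    db_seq:    previously mapped route frames, same representation
    Returns the database index whose surrounding sequence of frames
    best matches the query sequence, plus the matching cost.
    """
    q, d = len(query_seq), len(db_seq)
    # Pairwise image difference matrix (sum of absolute differences).
    diff = np.array([[np.abs(qf - df).sum() for df in db_seq]
                     for qf in query_seq])
    # Rather than taking the single best frame match, accumulate the
    # difference along a straight trajectory through the matrix
    # (assumes roughly constant speed between the two traversals).
    best_idx, best_cost = -1, np.inf
    for end in range(seq_len - 1, d):
        cost = sum(diff[q - seq_len + k, end - seq_len + 1 + k]
                   for k in range(seq_len))
        if cost < best_cost:
            best_idx, best_cost = end, cost
    return best_idx, best_cost
```

Requiring a whole sequence of frames to agree is what lets the approach survive single ambiguous views, much like the kitchen-then-corridor example in the quote.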
Dr. Milford will present his paper, "SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights," at the International Conference on Robotics and Automation in the United States later this year.