The visual compass: performance and limitations of an appearance-based method

Research output: Contribution to journal › Article › peer-review



In this article we present an algorithm to estimate the orientation of a robot relative to an orientation specified at the beginning of the process. This is done by computing the rotation of the robot between successive panoramic images, grabbed on the robot while it moves, using a subsymbolic method to match the images. The context of the work is Simultaneous Localization And Mapping (SLAM) in unstructured and unmodified environments; as such, very few assumptions are made about the environment and the robot's displacement. The algorithm's performance depends on a number of parameters, whose values were determined so as to provide good overall performance of the system. This performance is evaluated in different situations (trajectories and environments) with the same parameter values, and the results show that the method performs adequately for its intended use. In particular, the error is shown to drift slowly, much more slowly than that of unprocessed inertial sensors, so only infrequent realignment is required, for example when relocalizing in a topological map. Limitations of the proposed method are also shown and discussed.
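The abstract does not detail the authors' subsymbolic matching method, but the core idea of an appearance-based visual compass can be illustrated with a minimal sketch: in a panoramic image whose columns span 360° of azimuth, a pure rotation of the robot appears as a horizontal circular shift, so the heading change can be estimated by finding the column shift that best aligns two successive images. The function name and the sum-of-squared-differences matching below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_rotation(prev_img, curr_img):
    """Illustrative appearance-based rotation estimate (not the paper's method).

    Each image is an (H, W) array whose W columns cover 360 degrees of
    azimuth. The heading change is taken as the circular column shift of
    curr_img that minimises the sum of squared differences to prev_img.
    """
    _, w = prev_img.shape
    errors = [np.sum((np.roll(curr_img, s, axis=1) - prev_img) ** 2)
              for s in range(w)]
    best = int(np.argmin(errors))
    # Map the shift to a signed rotation in degrees, in (-180, 180].
    if best > w // 2:
        best -= w
    return best * 360.0 / w

# Synthetic check: rotating the panorama by 5 of 36 columns (= 50 degrees)
rng = np.random.default_rng(0)
pano = rng.random((8, 36))
rotated = np.roll(pano, -5, axis=1)
print(estimate_rotation(pano, rotated))  # 50.0
```

In practice such a brute-force search over shifts is only robust for small inter-frame rotations and slowly changing appearance, which is consistent with the article's use of successive images grabbed while the robot moves.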
Original language: English
Pages (from-to): 913-941
Number of pages: 29
Journal: Journal of Field Robotics
Issue number: 10
Publication status: Published - 19 Oct 2006


