Appearance-based heading estimation: The visual compass

Research output: Book/Report › Other report



In this report we present an algorithm to estimate the heading of a robot relative to a reference heading specified at the beginning of the process. This is done by computing the rotation of the robot between successive panoramic images, grabbed on the robot while it moves, using a sub-symbolic method to match the images. The context of the work is Simultaneous Localisation And Mapping (SLAM) in unstructured and unmodified environments. As such, very few assumptions are made about the environment, and the ones made are much more reasonable and less constraining than those usually made in such work. The algorithm's performance depends on the values of a number of parameters; these values were determined to provide good overall performance of the system. The performance is evaluated in different situations (trajectories and environments) with the same parameters, and the results show that the method performs adequately for its intended use. In particular, the error is shown to drift slowly, in fact much more slowly than un-processed inertial sensors, thus only requiring infrequent re-alignment, for example when re-localising in a topological map.
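The core idea described above can be illustrated with a minimal sketch: in a full 360° panorama, a rotation of the robot corresponds to a horizontal circular shift of the image, so the heading change between two frames can be recovered by finding the shift that best aligns them. The snippet below uses a simple sum-of-squared-differences match over all shifts; this is an assumed illustration of the general technique, not the report's exact matching method, and the function names and sum-of-squared-differences criterion are choices made here for clarity.

```python
import numpy as np

def estimate_rotation(prev_pano, curr_pano):
    """Estimate the rotation, in image columns, between two panoramas.

    Tries every horizontal circular shift of the previous panorama and
    returns the one that best matches the current panorama (minimum
    sum of squared differences).
    """
    n_cols = prev_pano.shape[1]
    ssd = np.empty(n_cols)
    for shift in range(n_cols):
        # Wrap-around shift: columns leaving one side re-enter the other.
        shifted = np.roll(prev_pano, shift, axis=1)
        ssd[shift] = np.sum((shifted - curr_pano) ** 2)
    best = int(np.argmin(ssd))
    # Report the shift in the signed range (-n_cols/2, n_cols/2].
    if best > n_cols // 2:
        best -= n_cols
    return best

def integrate_heading(panoramas, deg_per_col):
    """Accumulate per-frame rotations into a heading relative to the start."""
    heading = 0.0
    for prev, curr in zip(panoramas, panoramas[1:]):
        heading += estimate_rotation(prev, curr) * deg_per_col
    return heading
```

Because each frame-to-frame estimate is summed, small per-frame errors accumulate as drift over time, which is why the report's occasional re-alignment step (for example against a topological map) is needed.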
Original language: English
Publisher: Prifysgol Aberystwyth | Aberystwyth University
Number of pages: 33
Publication status: Published - 16 Mar 2006


  • navigation
  • visual compass
  • robot

