This paper describes a novel image-based method for robot orientation estimation based on a single omnidirectional camera. The orientation is estimated by finding the best pixel-wise match between two images as a function of the rotation of the second image, using either the first image as a fixed reference or a moving reference image. Three datasets were collected in different scenarios along a “Gummy Bear” path in outdoor environments. This carefully designed path has the appearance of a gummy bear in profile, and provides many curves and sets of image pairs that are challenging for visual robot localisation. We compare our method to a feature-based method using SIFT and to another appearance-based visual compass. Experimental results demonstrate that the appearance-based methods perform well and more consistently than the feature-based method, especially when the compared images were captured at positions far apart.
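The core idea of pixel-wise rotational matching can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the omnidirectional images have been unwrapped into panoramic form (so a rotation of the robot corresponds to a horizontal column shift), and the sum-of-squared-differences (SSD) similarity measure and the function name are illustrative assumptions.

```python
import numpy as np

def estimate_rotation(ref, cur):
    """Estimate the relative rotation (in degrees) between two unwrapped
    panoramic images by brute-force column shifting.

    For each candidate shift, the current image is rolled along its
    column axis and compared pixel-wise to the reference image; the
    shift with the lowest sum of squared differences (SSD) wins.
    NOTE: illustrative sketch only; the paper's actual matching
    criterion may differ.
    """
    width = ref.shape[1]
    best_shift, best_ssd = 0, np.inf
    for shift in range(width):
        diff = ref.astype(float) - np.roll(cur, shift, axis=1)
        ssd = np.sum(diff ** 2)
        if ssd < best_ssd:
            best_shift, best_ssd = shift, ssd
    # Convert the winning column shift into an angle: one full image
    # width corresponds to a 360-degree rotation of the robot.
    return best_shift * 360.0 / width
```

With a fixed reference, `ref` stays at the first image of the sequence; with a moving reference, `ref` is replaced by a more recent image so that appearance changes accumulate more slowly between compared pairs.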
|Name||Lecture Notes in Artificial Intelligence|
|Conference||14th Annual Conference, TAROS 2013|
|Country/Territory||United Kingdom of Great Britain and Northern Ireland|
|Period||28 August 2013 → 30 August 2013|