Appearance-based heading estimation: The visual compass

Research output: Book/Report › Other report

47 Downloads (Pure)

Abstract

In this report we present an algorithm to estimate the heading of a robot relative to a reference heading specified at the start of the process. This is done by computing the rotation of the robot between successive panoramic images, grabbed on the robot while it moves, using a sub-symbolic method to match the images. The context of the work is Simultaneous Localisation And Mapping (SLAM) in unstructured and unmodified environments. As such, very few assumptions are made about the environment, and those that are made are much more reasonable and less constraining than the ones usually made in such work. The algorithm's performance depends on the values of a number of parameters; these values are determined so as to give good overall performance of the system. The performance is evaluated in different situations (trajectories and environments) with the same parameters, and the results show that the method performs adequately for its intended use. In particular, the error is shown to drift slowly, in fact much more slowly than un-processed inertial sensors, so that only infrequent re-alignment is needed, for example when re-localising in a topological map.
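The abstract does not detail the sub-symbolic image-matching method itself. Purely as an illustration of the general idea, the sketch below (Python with NumPy; all names are hypothetical, not from the report) shows one common way a visual compass can be realised with panoramic images: find the circular column shift that best aligns two consecutive panoramas and convert that shift into a heading change.

    import numpy as np

    def heading_change(prev_pano, curr_pano):
        """Estimate the rotation (degrees) between two panoramic images.

        Illustrative sketch only; the report's actual matching method may differ.
        Both inputs are 2-D grayscale arrays whose columns span 360 degrees of
        azimuth. The best circular column shift (the panorama wraps around) is
        found by minimising the mean squared difference between the current
        image and the shifted previous image.
        """
        assert prev_pano.shape == curr_pano.shape
        prev = prev_pano.astype(float)
        curr = curr_pano.astype(float)
        n_cols = prev.shape[1]
        errors = np.empty(n_cols)
        for shift in range(n_cols):
            shifted = np.roll(prev, shift, axis=1)
            errors[shift] = np.mean((curr - shifted) ** 2)
        best = int(np.argmin(errors))
        # Map the column shift to a signed rotation in [-180, 180) degrees.
        degrees = best * 360.0 / n_cols
        return degrees - 360.0 if degrees >= 180.0 else degrees

    # Accumulating successive rotations gives a heading relative to the start:
    # heading = (heading + heading_change(prev, curr)) % 360.0

Because each estimate is relative to the previous image, errors accumulate over time, which is consistent with the slow drift the report describes and the occasional re-alignment it recommends.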
Original language: English
Publisher: Prifysgol Aberystwyth | Aberystwyth University
Number of pages: 33
Status: Published - 16 Mar 2006

