Multi-modal visual attention for robotics active vision systems - A reference architecture

Martin Hülse, Sebastian McBride, Mark Lee

Research output: Contribution to conference › Paper

1 Citation (Scopus)

Abstract

This work introduces an architecture for a robotic active vision system equipped with a manipulator that is able to integrate visual and non-visual (tactile) sensorimotor experiences. Inspired by the human vision system, we have implemented a strict separation of object location (where-data) and object features (what-data) in the visual data stream. This separation of what- and where-data has computational advantages but requires sequential fixation of visual cues in order to create and update a coherent view of the world. Hence, visual attention mechanisms must be put in place to decide which is the most task-relevant cue to fixate next. Regarding object manipulation, many task-relevant object properties (e.g. tactile feedback) are not necessarily related to visual features. Therefore, it is important that non-visual object features can influence visual attention. We present and demonstrate visual attention mechanisms for an active vision system that are modulated by visual and non-visual object features.
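The core idea of the abstract can be illustrated with a small sketch: each candidate fixation carries where-data (a location) and what-data (visual and tactile features), and the next fixation is chosen by a score that weights visual saliency against non-visual (tactile) task relevance. This is a minimal illustration under assumed names and values, not the authors' implementation; the cue dictionaries, weights, and scoring function are all hypothetical.

```python
# Hypothetical sketch: attention selection where visual saliency is
# modulated by non-visual (tactile) object properties.

# Each candidate fixation: where-data (location) plus what-data
# (visual saliency and tactile task relevance, both in [0, 1]).
cues = [
    {"loc": (120, 80),  "visual_saliency": 0.6, "tactile_relevance": 0.1},
    {"loc": (40, 200),  "visual_saliency": 0.4, "tactile_relevance": 0.9},
    {"loc": (300, 150), "visual_saliency": 0.7, "tactile_relevance": 0.0},
]

def attention_score(cue, w_visual=0.5, w_tactile=0.5):
    """Combine visual saliency with non-visual (tactile) relevance."""
    return w_visual * cue["visual_saliency"] + w_tactile * cue["tactile_relevance"]

def next_fixation(cues, **weights):
    """Pick the most task-relevant cue to fixate next."""
    return max(cues, key=lambda c: attention_score(c, **weights))

# A manipulation task can weight tactile relevance more heavily,
# steering fixation toward an object that is visually unremarkable.
target = next_fixation(cues, w_visual=0.3, w_tactile=0.7)
print(target["loc"])  # → (40, 200)
```

Shifting the weights toward `w_visual` instead would select the most visually salient cue, which is the sense in which non-visual object features modulate visual attention here.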
Original language: English
Pages: 21-29
Number of pages: 9
Status: Published - Apr 2011
Event: Proceedings of the AISB Symposium on Architectures for Active Vision - York, United Kingdom of Great Britain and Northern Ireland
Duration: 04 Apr 2011 - 07 Apr 2011

Conference

Conference: Proceedings of the AISB Symposium on Architectures for Active Vision
Country/Territory: United Kingdom of Great Britain and Northern Ireland
City: York
Period: 04 Apr 2011 - 07 Apr 2011

