This work introduces an architecture for a robotic active vision system, equipped with a manipulator, that integrates visual and non-visual (tactile) sensorimotor experiences. Inspired by the human visual system, we have implemented a strict separation of object location (where-data) and object features (what-data) in the visual data stream. This separation of what- and where-data has computational advantages but requires sequential fixation of visual cues in order to create and update a coherent view of the world. Hence, visual attention mechanisms must be put in place to decide which cue is the most task-relevant to fixate next. In object manipulation, many task-relevant object properties (e.g. tactile feedback) are not necessarily related to visual features. It is therefore important that non-visual object features can influence visual attention. We present and demonstrate visual attention mechanisms for an active vision system that are modulated by visual and non-visual object features.
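As a minimal sketch of the kind of attention scheme the abstract describes, the snippet below combines a visual saliency score with a non-visual (tactile) relevance score into a single fixation priority. All names, weights, and the weighted-sum combination rule are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    """A candidate fixation target with visual and non-visual relevance."""
    name: str
    visual_saliency: float    # bottom-up salience from the what-pathway (hypothetical scale 0..1)
    tactile_relevance: float  # task relevance signaled by touch, i.e. non-visual (hypothetical scale 0..1)
    location: tuple           # where-pathway estimate (x, y), kept separate from features

def attention_priority(cue: Cue, w_visual: float = 0.5, w_tactile: float = 0.5) -> float:
    """Fuse visual and non-visual evidence into one fixation priority."""
    return w_visual * cue.visual_saliency + w_tactile * cue.tactile_relevance

def next_fixation(cues: list[Cue]) -> Cue:
    """Select the most task-relevant cue to fixate next."""
    return max(cues, key=attention_priority)

if __name__ == "__main__":
    cues = [
        Cue("mug_handle", visual_saliency=0.4, tactile_relevance=0.9, location=(0.3, 0.1)),
        Cue("bright_label", visual_saliency=0.8, tactile_relevance=0.1, location=(0.5, 0.2)),
    ]
    target = next_fixation(cues)
    # The tactile term lifts the graspable handle above the visually brighter label.
    print(f"fixate: {target.name} at {target.location}")
```

The point of the weighted sum is only to show the modulation path: a non-visual signal such as tactile feedback can raise a cue's priority above purely visual salience, redirecting the next fixation.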
|Number of pages|9|
|Publication status|Published - Apr 2011|
|Event|AISB Symposium on Architectures for Active Vision - York, United Kingdom|
|Duration|04 Apr 2011 → 07 Apr 2011|