Abstract
This work introduces an architecture for a robotic active vision system equipped with a manipulator that integrates visual and non-visual (tactile) sensorimotor experiences. Inspired by the human visual system, we have implemented a strict separation of object location (where-data) and object features (what-data) in the visual data stream. This separation of what- and where-data has computational advantages but requires sequential fixation of visual cues in order to create and update a coherent view of the world. Hence, visual attention mechanisms must be put in place to decide which cue is the most task-relevant to fixate next. In object manipulation, many task-relevant object properties (e.g. tactile feedback) are not necessarily related to visual features. It is therefore important that non-visual object features can influence visual attention. We present and demonstrate visual attention mechanisms for an active vision system that are modulated by both visual and non-visual object features.
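The core idea of the abstract can be illustrated with a minimal sketch: bottom-up visual saliency of candidate cues is modulated by a top-down weight derived from non-visual (tactile) experience, and the highest-scoring cue is fixated next. This is not the paper's implementation; the cue names, score ranges, and the multiplicative combination rule are all illustrative assumptions.

```python
# Hedged sketch of multi-modal attention selection: non-visual (tactile)
# object features modulate visual saliency. All names and weightings
# here are assumptions for illustration, not the paper's method.

def next_fixation(cues):
    """Pick the most task-relevant cue to fixate next.

    `cues` maps a cue name to a dict with:
      - 'saliency': bottom-up visual saliency in [0, 1]
      - 'tactile_relevance': top-down weight from non-visual
        (tactile) experience in [0, 1]

    The combined score boosts saliency by tactile relevance, so a
    visually salient cue with no task relevance can be outranked by
    a less salient but tactually relevant one.
    """
    def score(item):
        props = item[1]
        return props['saliency'] * (1.0 + props['tactile_relevance'])
    return max(cues.items(), key=score)[0]

cues = {
    # Visually striking but irrelevant to the current grasping task:
    'red_block': {'saliency': 0.9, 'tactile_relevance': 0.0},
    # Less salient, but tactile experience marks it task-relevant:
    'soft_ball': {'saliency': 0.6, 'tactile_relevance': 0.8},
}
print(next_fixation(cues))  # prints "soft_ball" (0.6*1.8 > 0.9*1.0)
```

With purely visual scoring the red block would win; the tactile modulation reverses the ranking, which is the behaviour the architecture's attention mechanism is designed to support.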
| Original language | English |
|---|---|
| Pages | 21-29 |
| Number of pages | 9 |
| Status | Published - Apr 2011 |
| Event | Proceedings of the AISB Symposium on Architectures for Active Vision - York, United Kingdom of Great Britain and Northern Ireland. Duration: 04 Apr 2011 → 07 Apr 2011 |
Conference
| Conference | Proceedings of the AISB Symposium on Architectures for Active Vision |
|---|---|
| Country/Territory | United Kingdom of Great Britain and Northern Ireland |
| City | York |
| Period | 04 Apr 2011 → 07 Apr 2011 |
Fingerprint
View information about the research topics of 'Multi-modal visual attention for robotics active vision systems - A reference architecture'. Together they form a unique fingerprint.