Abstract
This paper presents a developmental learning approach for eye-hand coordination in autonomous robotic systems. Essential elements of this approach are inspired by current findings in neural research and developmental psychology. The integration of robot reaching and saccadic eye movements is based on a substrate that combines two different proprioceptive sensor qualities; we call this substrate a visual memory map. The approach provides developmental learning of reaching, of saccadic movements, and of the coordination of both tasks. In this paper, we focus on the learning of coordination and present experiments that demonstrate our learning process for eye-hand coordination. The learning algorithm creates cross-modal links between sensorimotor maps and proprioceptive maps. As we will show, this learning algorithm supports fast and incremental learning of behavioral competence.
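To illustrate the kind of cross-modal linking the abstract describes, the following is a minimal sketch, not the paper's actual method: the class name `VisualMemoryMap`, the grid sizes, the Hebbian-style co-activation counting, and the toy relation between gaze and arm postures are all assumptions made for illustration only.

```python
# Minimal sketch of cross-modal link learning between two proprioceptive
# qualities (gaze posture and arm posture). All details are illustrative
# assumptions, not taken from the paper.
import numpy as np


class VisualMemoryMap:
    """Grid that links eye proprioception (gaze) to arm proprioception (reach).

    Each cell accumulates a cross-modal link weight between a discretized
    gaze posture and a discretized arm posture that were active at the same
    time, e.g. while the hand is foveated.
    """

    def __init__(self, eye_cells=10, arm_cells=10):
        # links[i, j] counts co-activations of eye cell i and arm cell j.
        self.links = np.zeros((eye_cells, arm_cells))
        self.eye_cells = eye_cells
        self.arm_cells = arm_cells

    def _discretize(self, value, n_cells):
        # Map a normalized proprioceptive reading in [0, 1] onto a cell index.
        return min(int(value * n_cells), n_cells - 1)

    def learn(self, eye_posture, arm_posture, rate=1.0):
        """Incrementally strengthen the link between co-active cells."""
        i = self._discretize(eye_posture, self.eye_cells)
        j = self._discretize(arm_posture, self.arm_cells)
        self.links[i, j] += rate

    def arm_for_gaze(self, eye_posture):
        """Look up the arm cell most strongly linked to the current gaze."""
        i = self._discretize(eye_posture, self.eye_cells)
        j = int(np.argmax(self.links[i]))
        # Return the centre of the winning arm cell as a normalized posture.
        return (j + 0.5) / self.arm_cells


# Usage: during exploration, gaze and arm postures co-occur whenever the
# robot foveates its own hand; here a toy linear relation stands in for that.
vmm = VisualMemoryMap()
rng = np.random.default_rng(0)
for _ in range(500):
    arm = rng.random()
    eye = arm  # toy assumption: gaze posture mirrors arm posture when foveating
    vmm.learn(eye, arm)

print(vmm.arm_for_gaze(0.37))  # ~0.35, the centre of the linked arm cell
```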
| Original language | English |
| --- | --- |
| Pages | 13 |
| Number of pages | 27 |
| Publication status | Published - 2009 |
| Event | Artificial Intelligence and Applications, Innsbruck, Austria; duration: 17 Feb 2009 → 18 Feb 2009 |
Conference
| Conference | Artificial Intelligence and Applications |
| --- | --- |
| Country/Territory | Austria |
| City | Innsbruck |
| Period | 17 Feb 2009 → 18 Feb 2009 |