Abstract
A system is described that takes grasp synergies extracted from human grasping experiments and maps them onto a robot vision and hand-arm platform to facilitate the transfer of grasping skills \cite{tao2010}. This system forms part of a framework that is extended here with a self-organizing-map-based affordance learning system. The affordance system learns, online and autonomously, the correlations between perceived object features and the relevant motor outputs expressed in the form of synergies, and comes to guide the grasping of an object by predicting the appropriate synergy outputs for that object. Preliminary experiments test its effectiveness in this role and show that it learns quickly and remains robust in spite of noise.
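For illustration, the sketch below shows one way such a self-organizing-map (SOM) association between object features and synergy coefficients could be implemented: each node stores a prototype in the joint feature-plus-synergy space, training presents concatenated (feature, synergy) pairs online, and recall matches on the feature part only and reads out the synergy part. Class and parameter names, dimensions, and learning-rate schedules are assumptions for this sketch, not the implementation described in the paper.

```python
# Minimal sketch of a SOM that associates object feature vectors with
# grasp-synergy coefficients. Illustrative only: dimensions, learning
# rates, and the joint-vector coding are assumptions, not the authors'
# implementation.
import numpy as np

class GraspSOM:
    def __init__(self, grid=(10, 10), n_features=6, n_synergies=3,
                 lr=0.5, sigma=3.0, seed=0):
        rng = np.random.default_rng(seed)
        self.n_features = n_features
        self.rows, self.cols = grid
        dim = n_features + n_synergies
        # Each node holds a prototype in the joint feature+synergy space.
        self.weights = rng.random((self.rows, self.cols, dim))
        self.lr, self.sigma = lr, sigma
        # Grid coordinates, used by the neighbourhood function.
        self.coords = np.stack(np.meshgrid(np.arange(self.rows),
                                           np.arange(self.cols),
                                           indexing="ij"), axis=-1)

    def _bmu(self, vec, dims):
        # Best-matching unit, compared only over the given dimensions.
        d = np.linalg.norm(self.weights[..., dims] - vec, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train_step(self, features, synergies, t, t_max):
        # Online update from one (object features, observed synergies) pair.
        x = np.concatenate([features, synergies])
        bmu = self._bmu(x, slice(None))
        decay = np.exp(-t / t_max)
        lr, sigma = self.lr * decay, max(self.sigma * decay, 0.5)
        dist2 = np.sum((self.coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        self.weights += lr * h * (x - self.weights)

    def predict_synergies(self, features):
        # Recall: match on the feature part only, read out the synergy part.
        bmu = self._bmu(features, slice(0, self.n_features))
        return self.weights[bmu][self.n_features:]
```

A usage pattern consistent with the abstract would call `train_step` once per observed grasp as data arrive, and `predict_synergies` on a newly perceived object to obtain the synergy coefficients that drive the hand-arm platform.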
| Original language | English |
| --- | --- |
| Publication status | Published - 19 May 2010 |