Synergy-based affordance learning for robotic grasping

Tao Geng, James Wilson, Michael Sheldon, Mark Lee, Martin Hülse

Research output: Contribution to journal › Article › peer-review



In this paper, we present an affordance learning system for robotic grasping. The system comprises three components: an affordance memory, synergy-based exploration, and a grasping control strategy that uses local sensor feedback. The affordance memory is modeled with a modified growing neural gas network, which allows affordances to be learned quickly from a small dataset of human grasping demonstrations and object features. After offline training, the affordance memory generates online motor commands for reaching and grasping control of the robot. When grasping new objects, the system can explore grasp postures efficiently in the low-dimensional synergy space, because the synergies automatically exclude abnormal postures that are more likely to lead to failed grasps. Experimental results demonstrate that the affordance memory can generalize to grasp new objects and predict the effect of a grasp (i.e., the resulting tactile patterns).
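The low-dimensional synergy space mentioned above is commonly obtained by principal component analysis of recorded hand postures; the paper's exact procedure and data are not reproduced here. The sketch below is a hypothetical illustration of that idea, with all function names, array shapes, and the toy dataset being assumptions:

```python
import numpy as np

def learn_synergies(postures: np.ndarray, n_synergies: int):
    """Extract a low-dimensional synergy basis from joint-angle data.

    postures: (n_samples, n_joints) matrix of recorded grasp postures.
    Returns (mean, basis), where basis has shape (n_synergies, n_joints).
    """
    mean = postures.mean(axis=0)
    centered = postures - mean
    # SVD of the centered data; rows of vt are the principal directions,
    # i.e. the postural synergies, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_synergies]

def synthesize_posture(mean, basis, coeffs):
    """Map low-dimensional synergy coefficients back to full joint angles."""
    return mean + np.asarray(coeffs) @ basis

# Toy data (an assumption, not the paper's dataset): 50 postures of a
# 16-joint hand driven by 2 latent synergies plus small noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 2))
mixing = rng.normal(size=(2, 16))
data = latent @ mixing + 0.01 * rng.normal(size=(50, 16))

mean, basis = learn_synergies(data, n_synergies=2)
# Exploring grasps now means varying 2 coefficients instead of 16 joints;
# postures outside the synergy subspace are never generated.
posture = synthesize_posture(mean, basis, [0.5, -0.3])
assert posture.shape == (16,)
```

Searching over two synergy coefficients rather than sixteen joint angles is what makes the exploration in the paper tractable; implausible joint configurations simply lie outside the span of the learned basis.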
Original language: English
Pages (from-to): 1626-1640
Number of pages: 15
Journal: Robotics and Autonomous Systems
Issue number: 12
Early online date: 09 Jul 2013
Publication status: Published - Dec 2013


Keywords:
  • synergy
  • affordances
  • grasping


