In this paper, we present an affordance learning system for robotic grasping. The system comprises three components: an affordance memory, synergy-based exploration, and a grasping control strategy driven by local sensor feedback. The affordance memory is modeled with a modified growing neural gas network, which allows affordances to be learned quickly from a small dataset of human grasps and object features. After offline training, the affordance memory generates online motor commands for the robot's reaching and grasping control. When grasping new objects, the system explores grasp postures efficiently in the low-dimensional synergy space, because the synergies automatically avoid abnormal postures that are likely to lead to failed grasps. Experimental results demonstrate that the affordance memory generalizes to new objects and predicts the effect of a grasp (i.e., the resulting tactile patterns).
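To make the affordance-memory idea concrete, the following is a minimal sketch of a standard growing neural gas (GNG) in the style of Fritzke (1995), clustering feature vectors into a small topology-preserving network. The paper uses a *modified* GNG trained on human-grasp and object features; the parameter values, update rule, and the `GrowingNeuralGas` class below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal standard GNG sketch (not the paper's modified variant)."""

    def __init__(self, dim, eps_b=0.2, eps_n=0.006, max_age=50,
                 lam=100, alpha=0.5, d=0.995, seed=0):
        rng = np.random.default_rng(seed)
        self.w = [rng.normal(size=dim), rng.normal(size=dim)]  # node weights
        self.err = [0.0, 0.0]                                  # accumulated error
        self.edges = {}                                        # (i, j) -> age
        self.eps_b, self.eps_n = eps_b, eps_n
        self.max_age, self.lam, self.alpha, self.d = max_age, lam, alpha, d
        self.t = 0

    def _neighbors(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

    def fit_one(self, x):
        self.t += 1
        dists = [np.linalg.norm(x - w) for w in self.w]
        s1, s2 = np.argsort(dists)[:2]
        self.err[s1] += dists[s1] ** 2
        # Move the winner and its topological neighbors toward the sample.
        self.w[s1] += self.eps_b * (x - self.w[s1])
        for n in self._neighbors(s1):
            self.w[n] += self.eps_n * (x - self.w[n])
        # Age the winner's edges, prune stale ones, refresh the winner pair.
        for e in list(self.edges):
            if s1 in e:
                self.edges[e] += 1
                if self.edges[e] > self.max_age:
                    del self.edges[e]
        self.edges[tuple(sorted((s1, s2)))] = 0
        # Periodically grow a node between the two highest-error units,
        # which is what lets the memory expand with new grasp experience.
        if self.t % self.lam == 0:
            q = int(np.argmax(self.err))
            nbrs = self._neighbors(q)
            if nbrs:
                f = max(nbrs, key=lambda n: self.err[n])
                r = len(self.w)
                self.w.append(0.5 * (self.w[q] + self.w[f]))
                self.err[q] *= self.alpha
                self.err[f] *= self.alpha
                self.err.append(self.err[q])
                self.edges.pop(tuple(sorted((q, f))), None)
                self.edges[tuple(sorted((q, r)))] = 0
                self.edges[tuple(sorted((f, r)))] = 0
        self.err = [e * self.d for e in self.err]

# Usage: cluster 2-D samples drawn around two hypothetical grasp prototypes.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (200, 2)),
                  rng.normal(1.0, 0.1, (200, 2))])
gng = GrowingNeuralGas(dim=2)
for x in rng.permutation(data):
    gng.fit_one(x)
print(len(gng.w))  # the network has grown beyond its initial two nodes
```

In an affordance memory, each node would store not just object features but associated grasp parameters and expected tactile outcomes, so that the nearest node for a novel object retrieves a candidate grasp.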