Microsoft Kinect provides an off-the-shelf sensor that can reliably capture information about body movements in real time. We implemented an online gesture recognition system on top of the Kinect's hand-tracking capabilities. The system performs real-time classification of the user's hand gestures by comparing the current movement to a set of 9 predefined template gestures. A gesture is detected when the moving hand exceeds a threshold speed for a minimum duration.

The ultimate goal of this work is to study action-outcome learning in humans: how does a person figure out which actions they can perform that have an effect on the environment? How do they shape a gesture to produce this outcome? To this end, we improved the recognition algorithm by allowing the dictionary of template gestures to adapt to the way the user performs them. This allows a shared representation of each gesture to emerge between the human and the computer as the user interacts with the system. The approach opens new perspectives for designing and studying interactions between humans and machines, as well as for studies of how motor-impaired patients interact with such a system.
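The abstract states the detection rule (hand speed above a threshold for a minimum duration) but not the matching algorithm used to compare a movement against the 9 templates. As a minimal sketch only, the snippet below pairs that detection rule with dynamic time warping, a common choice for comparing trajectories of unequal length; all function names, parameter values, and the use of DTW itself are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def detect_gesture_onset(positions, dt, speed_threshold, min_duration):
    """Return (start, end) frame-index pairs of segments where the hand's
    speed stays above speed_threshold for at least min_duration seconds.
    positions: (T, D) array of hand coordinates sampled every dt seconds."""
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    min_frames = int(min_duration / dt)
    segments, start = [], None
    for i, s in enumerate(speeds):
        if s > speed_threshold and start is None:
            start = i                      # movement begins
        elif s <= speed_threshold and start is not None:
            if i - start >= min_frames:    # fast enough for long enough
                segments.append((start, i))
            start = None
    if start is not None and len(speeds) - start >= min_frames:
        segments.append((start, len(speeds)))
    return segments

def dtw_distance(a, b):
    """Dynamic time warping distance between two (T, D) trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trajectory, templates):
    """Return the name of the template closest to the trajectory under DTW."""
    return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))
```

An adaptive dictionary, as described in the abstract, could then be approximated by nudging the winning template toward each newly classified trajectory, so that the stored gestures drift toward how the user actually performs them.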
|Number of pages||7|
|Publication status||Published - 26 Sept 2011|
|Event||Capo Caccia Cognitive Neuromorphic Engineering Workshop - Aberystwyth, United Kingdom of Great Britain and Northern Ireland|
Duration: 01 May 2011 → 07 May 2011