Task Modulated Active Vision For Advanced Human-Robot Interaction

Martin Siegfried Hülse, Sebastian Daryl McBride, Mark Howard Lee

Research output: Contribution to journal › Article › peer-review



Eye fixation and gaze fixation patterns in general play an important part when humans interact with each other, and the gaze fixation patterns of humans are strongly determined by the task they perform. Our assumption is that meaningful human-robot interaction with robots having active vision components (such as humanoids) is greatly supported if the robot system is able to create task-modulated fixation patterns. We present an architecture for a robot active vision system equipped with one manipulator, in which we demonstrate the generation of task-modulated gaze control, meaning that fixation patterns are in accordance with a specific task the robot has to perform. Experiments demonstrate different strategies of multi-modal task modulation for robotic active vision, where visual and non-visual features (tactile feedback) determine gaze fixation patterns. The results are discussed in comparison with purely saliency-based strategies for visual attention and gaze control. The major advantages of our approach to multi-modal task modulation are that the active vision system can generate, first, active avoidance of objects and, second, active engagement with objects. Such behaviors cannot be generated by current approaches to visual attention based on saliency models alone, but they are important for mimicking human-like gaze fixation patterns.
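The mechanism described above — bottom-up saliency combined with task-dependent weighting (positive for engagement, negative for avoidance) and inhibition of return over recent fixations — can be sketched as a winner-take-all selection over modulated activation maps. This is only an illustrative reconstruction, not the paper's implementation; the function names, the additive combination, and the `tactile_gain` parameter (standing in for non-visual, tactile modulation) are all assumptions.

```python
import numpy as np

def next_fixation(saliency, task_map, ior, tactile_gain=1.0):
    """Pick the next gaze target from task-modulated saliency.

    saliency, task_map, ior: 2-D arrays of equal shape (hypothetical retinotopic maps).
    task_map holds positive weights for objects to engage with and negative
    weights for objects to actively avoid; tactile_gain (an assumed parameter)
    scales this top-down term, e.g. after tactile feedback from the manipulator.
    Returns the (row, col) of the winning location, or None if no location
    has positive activation (everything inhibited or avoided).
    """
    activation = saliency + tactile_gain * task_map - ior
    idx = np.unravel_index(np.argmax(activation), activation.shape)
    if activation[idx] <= 0:
        return None
    return tuple(int(i) for i in idx)

def update_ior(ior, fixation, decay=0.9, strength=1.0):
    """Decay existing inhibition of return and stamp new inhibition
    at the location just fixated, so gaze moves on to other targets."""
    ior = decay * ior
    if fixation is not None:
        ior[fixation] += strength
    return ior
```

With a purely saliency-based model (`task_map` all zeros) the most salient point always wins; a negative task weight lets the system actively avoid an otherwise salient object, and a positive weight lets it engage a low-saliency one — the two behaviors the abstract singles out.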
Original language: English
Number of pages: 23
Journal: International Journal of Humanoid Robotics
Issue number: 3
Publication status: Published - Sept 2012


  • active vision
  • multi-modal visual attention
  • gaze fixation patterns
  • inhibition of return
  • human-robot interaction


