Abstract
Substantial evidence supports the role of the lateral intraparietal area (LIP) of the brain as the central processing point where bottom-up visual information is modulated by top-down task information from higher cortical structures. LIP also contains a global egocentric, rather than a local retinotopic, mapping and is therefore considered critical for maintaining a coherent view of the surrounding environment within an ever-changing visual scene.
We have developed an active vision system architecture with an LIP-like structure as its central element. The architecture extends our previously presented design by incorporating feature data, allowing visual search to be modulated according to specific object properties. We discuss the architecture in terms of its ability to generate visual search for active robotic vision systems.
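The general mechanism described here, bottom-up feature maps combined under top-down, task-driven gains to select the next fixation, can be sketched minimally as follows. This is an illustrative example, not the authors' implementation; the feature names (`red`, `green`) and gain values are hypothetical.

```python
import numpy as np

def saliency(feature_maps, gains):
    """Combine bottom-up feature maps under top-down gains.

    feature_maps: dict mapping feature name -> 2D array (one map per feature)
    gains: dict mapping feature name -> task-driven weight (default 1.0)
    """
    combined = np.zeros_like(next(iter(feature_maps.values())), dtype=float)
    for name, fmap in feature_maps.items():
        combined += gains.get(name, 1.0) * fmap
    return combined

def next_fixation(feature_maps, gains):
    """Return the (row, col) location of peak saliency."""
    s = saliency(feature_maps, gains)
    return np.unravel_index(np.argmax(s), s.shape)

# Toy scene: one strong "red" response and one strong "green" response.
red = np.zeros((4, 4)); red[1, 2] = 1.0
green = np.zeros((4, 4)); green[3, 0] = 1.0
maps = {"red": red, "green": green}

# Task: search for a red object -> boost the red channel's gain.
print(next_fixation(maps, {"red": 2.0, "green": 0.5}))    # (1, 2)
# Task: search for a green object -> the same scene yields a new target.
print(next_fixation(maps, {"red": 0.5, "green": 2.0}))    # (3, 0)
```

Changing only the gain vector redirects search within an unchanged scene, which is the essence of modulating visual search by object properties.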
| Original language | English |
| --- | --- |
| Pages | 167-168 |
| Number of pages | 2 |
| Publication status | Published - Nov 2010 |