Adaptive Vision for Human-Robot Collaboration


Presenter: Dimitri Ognibene

Affiliation: CSEE, University of Essex

Abstract:

Unstructured social environments, e.g. building sites, produce an overwhelming amount of information, yet behaviourally relevant variables may not be directly accessible. A key solution found by nature to cope with such problems is Active Perception (AP), as shown by many examples, such as the foveal anatomy of the eye and the control of its movements.

Uncovering the design principles of systems that adaptively find and select relevant information will have an impact on both Robotics and Cognitive Neuroscience.

The main insights from the development of two different Active Vision (AV) robotic models will be presented:

1) An information-theoretic AV model for dynamic environments, where effective behaviour requires the prompt recognition of hidden states (e.g. intentions), interactions (e.g. attraction), and spatial relationships between the elements in the environment. This general framework is described in the context of social interaction, with AV systems that support the anticipation of other agents' goals [Ognibene & Demiris 2013] and the recognition of complex activities [Lee et al. 2015] (see the first sketch after this list);

2) A neural model of the development of AV strategies in ecological tasks, such as exploring and reaching rewarding objects in a class of similar environments, the agent's world. This model shows that an embodied agent can autonomously learn which contingencies in its world are behaviourally relevant and how to use them to direct its perception [Ognibene & Baldassarre 2014] (see the second sketch below).
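
To make the gaze-selection step of the first model concrete, below is a minimal sketch in Python, assuming a discrete belief over hidden states (e.g. the other agent's possible goals) and a greedy rule that fixates the location with the highest expected information gain. The observation models, function names, and the one-step greedy strategy are illustrative assumptions, not the implementation of [Ognibene & Demiris 2013].

    import numpy as np

    def entropy(p):
        """Shannon entropy of a discrete distribution (in nats)."""
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def expected_information_gain(belief, likelihood):
        # belief:     (S,)  current P(state) over S hidden states
        # likelihood: (O,S) P(observation | state) for this fixation
        h_prior = entropy(belief)
        p_obs = likelihood @ belief                        # P(o) under the belief
        eig = 0.0
        for o in range(likelihood.shape[0]):
            if p_obs[o] == 0:
                continue
            posterior = likelihood[o] * belief / p_obs[o]  # Bayes update
            eig += p_obs[o] * (h_prior - entropy(posterior))
        return eig

    def select_fixation(belief, likelihoods):
        """Greedy active-vision step: fixate where expected info gain is highest."""
        gains = [expected_information_gain(belief, L) for L in likelihoods]
        return int(np.argmax(gains)), gains

    # Toy example: three possible goals, two candidate fixations.
    belief = np.array([1/3, 1/3, 1/3])
    fix_a = np.array([[0.9, 0.1, 0.1],    # observations discriminate goal 0
                      [0.1, 0.9, 0.9]])
    fix_b = np.array([[0.5, 0.5, 0.5],    # observations carry no information
                      [0.5, 0.5, 0.5]])
    best, gains = select_fixation(belief, [fix_a, fix_b])
    print(best, gains)  # fixation 0 is chosen: it reduces goal uncertainty

In the toy example, the first fixation wins because its observations discriminate between the candidate goals; this is the same pressure that drives anticipatory gaze toward informative locations in the model.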
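
For the second model, the sketch below shows, in heavily simplified form, how reward alone can shape where an agent looks: a tabular softmax gaze policy trained with a REINFORCE-style update discovers the environmental contingency that the rewarding object usually appears at one location. The actual model in [Ognibene & Baldassarre 2014] uses neural maps and actor-critic learning, so the bandit setting and all names here are simplifying assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 5                   # candidate fixation locations
    prefs = np.zeros(N)     # action preferences (logits) of the gaze policy
    alpha = 0.1             # learning rate

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    for episode in range(2000):
        # Contingency to be discovered: the rewarding object appears at
        # location 2 on 80% of trials, elsewhere uniformly at random.
        target = 2 if rng.random() < 0.8 else int(rng.integers(N))
        pi = softmax(prefs)
        fixation = rng.choice(N, p=pi)
        reward = 1.0 if fixation == target else 0.0
        # REINFORCE-style update: reinforce fixations that found the object.
        grad = -pi
        grad[fixation] += 1.0
        prefs += alpha * reward * grad

    print(np.round(softmax(prefs), 2))  # probability mass concentrates on location 2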