Date: 21 October 2024 at 2 PM
Place: Thesis room, Hannah Arendt campus of Avignon Université.
Videoconference link: https://bbb.univ-avignon.fr/rooms/vtj-xje-xex-gyw/join
The jury will be composed of:
Dr Aurélie Clodic, LAAS-CNRS, Reviewer
Pr Julien Pinquier, Université de Toulouse, IRIT, Reviewer
Pr Laurence Devillers, Sorbonne Université, LISN-CNRS, Examiner
Pr Olivier Alata, Université Jean Monnet, Laboratoire Hubert Curien, Examiner
Pr Fabrice Lefèvre, Avignon Université, LIA, Thesis supervisor
Dr Bassam Jabaian, Avignon Université, LIA, Co-supervisor
Title: Proactive multimodal human-robot interaction in a hospital
In this thesis, we focus on creating a proactive multimodal system for the social robot Pepper, designed for a hospital waiting room. To achieve this, we developed a cognitive human-robot interaction architecture based on a continuous loop of perception, representation, and decision-making. The flow of perceptions is divided into two steps: first, retrieving data from the robot’s sensors, and then enriching it through refining modules. We integrated a speaker diarization refining module based on a Bayesian model that fuses audio and visual perceptions through spatial coincidence. To enable proactive action, we designed a model that estimates users’ availability for interaction in the waiting room.
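As an illustration of fusion by spatial coincidence, the minimal Python sketch below (function and parameter names are hypothetical, not those of the thesis) computes a posterior over which detected person is speaking, assuming a uniform prior over persons and a Gaussian likelihood on the angular gap between the audio direction of arrival and each detected face.

```python
import numpy as np

def speaker_posterior(audio_doa_deg, face_angles_deg, sigma_deg=10.0):
    """Posterior over which visible person is the active speaker, given the
    audio direction of arrival (DOA) and the angular positions of detected faces.

    Assumes a uniform prior over persons and a Gaussian likelihood on the
    angular gap between the DOA and each face (spatial coincidence).
    """
    gaps = np.asarray(face_angles_deg, dtype=float) - float(audio_doa_deg)
    # Gaussian likelihood of the observed DOA for each candidate speaker
    likelihoods = np.exp(-0.5 * (gaps / sigma_deg) ** 2)
    # Normalize to obtain the posterior (uniform prior cancels out)
    return likelihoods / likelihoods.sum()

# Example: sound arrives from ~12 degrees; faces are seen at 10 and -35 degrees
print(speaker_posterior(12.0, [10.0, -35.0]))  # -> probability mass on the first person
```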
The refined perceptions are then organized and aligned to create a constantly updated representation of the environment. This representation is passed to the decision-making layer, where an action planning module analyzes the environmental data, develops action strategies, and informs the action modules asynchronously. Running asynchronously lets the action planner keep looking for proactive opportunities offered by the scene even while one of the action submodules is busy, such as the speech module responsible for conversing with a user during an interaction. The entire system is implemented in ROS, so it can be adapted to various robotic platforms.
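The sketch below shows one way such an asynchronous planning loop could be written as a rospy node; the topic names, message types, and the "available_user" trigger are placeholders for illustration and do not reflect the system's actual interfaces.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

class ActionPlanner:
    """Keeps scanning the environment representation for proactive
    opportunities, even while an action submodule (e.g. speech) is busy."""

    def __init__(self):
        self.latest_state = None
        self.speech_busy = False
        # Topic names below are placeholders for illustration
        rospy.Subscriber('/world_representation', String, self.on_state)
        rospy.Subscriber('/speech/status', String, self.on_speech_status)
        self.speech_goal = rospy.Publisher('/speech/goal', String, queue_size=1)

    def on_state(self, msg):
        self.latest_state = msg.data

    def on_speech_status(self, msg):
        self.speech_busy = (msg.data == 'busy')

    def spin(self):
        rate = rospy.Rate(2)  # planning loop runs at 2 Hz, independently of actions
        while not rospy.is_shutdown():
            if (self.latest_state and 'available_user' in self.latest_state
                    and not self.speech_busy):
                # Proactive engagement: triggered only when a user looks available
                self.speech_goal.publish(String(data='greet'))
                self.speech_busy = True  # assume busy until the status topic says otherwise
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('action_planner')
    ActionPlanner().spin()
```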
This thesis presents the mechanisms necessary for creating a proactive multimodal human-robot interaction system. This system includes all perception and action modules, as well as an overall cognitive architecture for managing perceptions. The entire system was tested in a controlled laboratory environment, as well as in real-life conditions at the Broca Hospital.