Multimodal expressivity and alignment modeling for human-machine interaction

Type
Doctoral project
Start date
1 Sep 2019
End date
31 Aug 2022
Location
Paris
This project is situated in a particularly rich context for the development of communication interfaces between humans and machines.

For example, the emergence and widespread adoption of personal assistants (smartphones, home assistants, chatbots) has made interaction with machines a daily reality for a growing number of people. This practice is likely to expand into many more uses: reception agents (today, a few Pepper robots deployed more for demonstration than for real service), remote consultation, and agents embedded in autonomous vehicles.