Generally, such agents set their own goals in a predefined goal space through intrinsic motivations, and they call upon Reinforcement Learning (RL) mechanisms to improve their capability to reach these goals. In this context, a central issue is how such agents can learn their own representations of the various goals of interest, without the corresponding spaces being predefined by hand. Thus far, this question has been addressed from the perspective of isolated agents learning only from sensorimotor interaction with their environment. However, as humans, many of the representations we learn are acquired through various forms of cultural interaction with other people.
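To make this setting concrete, the following minimal Python sketch illustrates the canonical loop of such an intrinsically motivated, goal-conditioned agent. All names here (sample_goal, intrinsic_reward, the toy policy and env_step dynamics) are hypothetical placeholders rather than components of any specific model from the literature; the point is only that the goal space is fixed and hand-defined, which is precisely the limitation discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_goal(goal_low, goal_high):
    """Sample a goal uniformly from a fixed, hand-defined goal space."""
    return rng.uniform(goal_low, goal_high)

def intrinsic_reward(state, goal, tol=0.05):
    """Sparse, self-generated reward: 1 when the state matches the agent's own goal."""
    return float(np.linalg.norm(state - goal) < tol)

def policy(state, goal):
    """Toy goal-conditioned controller: move toward the goal (stands in for a learned policy)."""
    return goal - state

def env_step(state, action):
    """Toy point-mass dynamics (stands in for the real environment)."""
    return state + 0.1 * action

goal_low, goal_high = np.zeros(2), np.ones(2)
state = np.zeros(2)
for episode in range(5):
    goal = sample_goal(goal_low, goal_high)   # the agent picks its own goal
    for t in range(50):
        action = policy(state, goal)
        state = env_step(state, action)
        r = intrinsic_reward(state, goal)     # no external reward is involved
        # an RL update on (state, action, r, goal) would go here
        if r:
            break
```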

Thus, the question addressed in this PhD is: "How can interaction with external agents influence the representation learning capabilities of an autonomous agent?"

The central challenge will be to enrich the existing models of goal representation learning from the AI literature so that they can incorporate information from external agents. To approach this question, the PhD work will start with a literature review of developmental psychology, looking for constraints on the potential models.

A secondary challenge will be to reconsider the deep RL processes commonly used in interactive learning, so that they can better account for the capabilities observed in children. In particular, these processes are model-free and focus on goal achievement rather than on goal understanding. The fact that, in children, understanding goals precedes the capability to reach them speaks in favor of model-based algorithms, which remain underdeveloped in this part of the literature.
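To make the contrast concrete, here is a minimal sketch of what a learned forward model buys an agent: it can assess whether a candidate goal seems reachable through imagined rollouts before ever attempting it, whereas a purely model-free agent can only discover this by actually trying. Both forward_model and goal_seems_reachable are illustrative placeholders under toy assumptions, not an established algorithm.

```python
import numpy as np

def forward_model(state, action):
    """Stands in for a learned dynamics model f(s, a) -> s'; here, toy point-mass dynamics."""
    return state + 0.1 * action

def goal_seems_reachable(state, goal, horizon=30, n_rollouts=200, tol=0.15):
    """Crude reachability check: simulate random action sequences through the
    model and see whether any imagined trajectory passes near the goal."""
    rng = np.random.default_rng(0)
    for _ in range(n_rollouts):
        s = state.copy()
        for _ in range(horizon):
            a = rng.uniform(-1.0, 1.0, size=state.shape)
            s = forward_model(s, a)               # imagined step, no real interaction
            if np.linalg.norm(s - goal) < tol:
                return True
    return False

state, goal = np.zeros(2), np.array([0.3, 0.3])
print(goal_seems_reachable(state, goal))
```

The design point is that the model supports a form of "understanding" (predicting the consequences of actions toward a goal) that is separable from, and can precede, the competence to reach that goal.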

If successful, the thesis will provide an original building block in the broader effort to design the next generation of more natural human-agent interactive learning methods.