The project is dedicated to computational ethics and focuses on modeling ethical reasoning with logic-based AI techniques. The difficulty comes from the triple constraint of any ethical reasoning, which must evaluate the consequences of actions, deal with deontic modalities (obligation, permission, prohibition), and overcome conflicts of norms.

In the past, the ACASA (Agents Cognitifs et Apprentissage Symbolique Automatique) team at LIP6 has used a non-monotonic formalism based on stable models, namely Answer Set Programming, to model ethical reasoning. However, this approach lacks both causal models and deontic modalities. Some works have made use of deontic logics, and even of defeasible deontic logics, but they hardly handle non-monotonicity and do not include causal models. Others have tried to include causal models or Action Languages, but they do not really deal with ethical conflicts, nor do they make use of deontic modalities. This thesis will build on our preliminary work with Answer Set Programming and Action Languages to address the above-mentioned triple constraint, i.e. evaluating consequences, dealing with deontic modalities, and overcoming conflicts of norms.
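The non-monotonic behaviour that makes stable-model semantics attractive for norms with exceptions can be illustrated with a minimal sketch. This is a toy brute-force stable-model checker in Python, not the project's actual formalism (real work would use a dedicated ASP solver such as clingo); the norm encoding and atom names below are hypothetical:

```python
from itertools import chain, combinations

# Toy stable-model (answer-set) checker for ground normal logic programs.
# A rule "h :- p1, ..., not n1, ..." is encoded as a triple (h, pos, neg).

def reduct(rules, candidate):
    # Gelfond-Lifschitz reduct: delete every rule whose negative body
    # intersects the candidate set, and strip "not" from the rest.
    return [(h, pos) for h, pos, neg in rules if not (neg & candidate)]

def minimal_model(definite_rules):
    # Least fixpoint of the immediate-consequence operator T_P.
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules):
    # Brute force over all atom subsets: a candidate is a stable model
    # iff it equals the minimal model of its own reduct.
    atoms = sorted(set(chain.from_iterable(
        {h} | pos | neg for h, pos, neg in rules)))
    return [set(c) for k in range(len(atoms) + 1)
            for c in combinations(atoms, k)
            if minimal_model(reduct(rules, set(c))) == set(c)]

# Hypothetical norm: lying is forbidden by default, unless an exception
# (here, a life being at stake) holds.
norm = [("forbidden_lie", frozenset(), frozenset({"exception"})),
        ("exception", frozenset({"life_at_stake"}), frozenset())]

print(stable_models(norm))   # the prohibition holds by default
print(stable_models(norm + [("life_at_stake", frozenset(), frozenset())]))
```

Adding the single fact `life_at_stake` withdraws the previously derived conclusion `forbidden_lie`: the set of conclusions shrinks as information grows, which is exactly the defeasible, exception-tolerant behaviour that monotonic deontic logics struggle to capture directly.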

The developed models will have to be implemented and validated on real cases provided by our partners: health technologies and social robotics on the one hand, mobility and autonomous cars on the other.


PhD student: Camilo Sarmiento 

PhD supervisor: Jean-Gabriel Ganascia 

Research laboratory: LIP6 - Laboratoire d'Informatique de Paris 6