Merging action models with non-monotonic and deontic logics to simulate ethical reasoning

Type: Doctoral project
Start date: 1 Oct 2020
End date: 30 Sep 2023

The project is dedicated to computational ethics and focuses on modeling ethical reasoning with logic-based AI techniques. The difficulty comes from the triple constraint on any ethical reasoning, which:

  • needs to consider the reasonably anticipated consequences of actions, which requires introducing causal and/or action models;
  • has to deal with rules of duty, that is, with obligation, permission, omission and prohibition, i.e. with deontic modalities. A natural way to handle them is to use modal logics, and more precisely deontic logics; however, most current deontic logics with a clear mathematical semantics are very constrained;
  • has to overcome ethical dilemmas, i.e. conflicts of norms, which is very difficult with classical logics, since they cannot cope with inconsistencies. We shall therefore use non-monotonic formalisms such as default logics, which were designed to work through logical contradictions (a minimal formal illustration of the last two points follows this list).

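As an illustration of the last two points, the sketch below contrasts Standard Deontic Logic, whose axiom D makes a genuine dilemma outright inconsistent, with a Reiter-style default rule, which is simply blocked when a conflicting norm is derivable. This is a minimal illustration only; the proposition a stands for an arbitrary action and is not taken from the project's models.

```latex
% Axiom D of Standard Deontic Logic: what is obligatory is permitted.
\[
  \textbf{(D)}\quad O\varphi \rightarrow P\varphi,
  \qquad P\varphi \equiv \lnot O\lnot\varphi,
  \qquad\text{hence}\quad O a \wedge O\lnot a \vdash \bot .
\]
% In Reiter's default logic, the same norm can instead be expressed as a
% defeasible rule, applied only while its justification stays consistent:
\[
  \frac{\top \;:\; \lnot O\lnot a}{O a}
  \qquad\text{``conclude } O a \text{ unless } O\lnot a \text{ is derivable''}.
\]
```

This contrast is what motivates combining deontic modalities with a non-monotonic formalism rather than using either in isolation.
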
In the past, the ACASA (Agents Cognitifs et Apprentissage Symbolique Automatique) team at LIP6 used a non-monotonic formalism based on stable models, namely Answer Set Programming, to model ethical reasoning; however, this approach lacks causal models and deontic modalities. Some works have made use of deontic logics, and even of defeasible deontic logics, but they hardly handle non-monotonicity and do not include causal models. Others have tried to include causal models or Action Languages, but they neither really deal with ethical conflicts nor make use of deontic modalities. This thesis will build on our preliminary work with Answer Set Programming and with Action Languages to address the above-mentioned triple constraint, i.e. evaluating consequences, dealing with deontic modalities, and overcoming conflicts of norms.
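
To make this concrete, the following is a minimal sketch of how a norm conflict could be encoded in Answer Set Programming via the clingo Python API. It assumes the clingo package is available; the predicates (forbidden/1, obligatory/1, do/1) and the lying scenario are purely illustrative and are not the project's actual models.

```python
# Minimal sketch: a prima facie prohibition made defeasible with default
# negation, so a conflicting obligation can override it without making the
# whole program inconsistent. Assumes the 'clingo' Python package; all
# predicate and constant names are illustrative only.
import clingo

PROGRAM = """
% Two conflicting norms about the same action.
forbidden(lie).
obligatory(lie) :- saves_life(lie).
saves_life(lie).

% The prohibition only takes effect if the action is not obligatory:
% 'not' is default negation, so the rule is defeasible.
refrain(A) :- forbidden(A), not obligatory(A).
do(A)      :- obligatory(A), not refrain(A).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print("Stable model:", model))
# The single stable model contains do(lie): the obligation to save a life
# overrides the general prohibition against lying.
```

In the intended extension, causal or action models would supply the consequence relation (hard-coded here as saves_life(lie)) on which such defeasible deontic rules operate.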

The developed models will have to be implemented and validated on real cases provided by our partners, concerning health technologies and social robotics on the one hand, and mobility and autonomous cars on the other.

 

PhD student: Camilo Sarmiento 

PhD supervisor: Jean-Gabriel Ganascia 

Research laboratory: LIP6 - Laboratoire d'Informatique de Paris 6