Interpretability in Artificial Intelligence (AI) tackles what can be considered the Achilles heel of modern AI, felt most acutely in Deep Learning (DL): the lack of readability, traceability, and explainability.

Essential for proper adoption and for knowledgeable, bias-free use, these characteristics constitute a real milestone on the path of the ongoing AI/DL revolution. Without proper interpretability of our tools and algorithms, DL will remain a black box for most biomedical users. We need to remember that, today, these users bear full responsibility for their decisions. Beyond its major interest in the current scientific landscape (and its exceptional potential for scientific impact), the subject also opens the way to a modern framework of collaborative research, capable of sustainably supporting multidisciplinary biomedical research: a deeper understanding of microglial cells and their major role in the neurodegenerative diseases Frontotemporal Dementia (FTD) and Amyotrophic Lateral Sclerosis (ALS), studied via organoids, a more realistic biological model. Indeed, this model constitutes an excellent intermediary both for understanding these diseases (patients' cells are used directly to culture 3D organoid models) and for designing and preparing clinical trials in a much more realistic way.


PhD student: Mehdi OUNISSI

PhD supervisor: Daniel RACOCEANU

Research laboratory: Paris Brain Institute (Institut du Cerveau - ICM) / Inria team (ARAMIS Lab)