Interpretability of neural networks

Type
Doctoral project
Start date
1 Oct 2020
End date
30 Sep 2023

This research project focuses on the interpretability of neural networks, a notion that admits several definitions.

Minimum requirements for interpretability can be defined through the triptych "simplicity, stability and predictivity". Since the predictive power of neural networks is well established, our research project focuses on building simple and stable neural networks while preserving their strong predictive performance. The project is therefore organized into two axes: the first concerns the construction of simple neural networks, the second the construction of stable neural networks. Within each axis, we have identified two well-defined tasks:

  • Task 1 - Simplicity and decision trees: Design simple and deep neural networks by exploiting the connection with decision trees
  • Task 2 - Simplicity and variable importance: Analyze and propose new heuristics for the use of importance indices in neural networks
  • Task 3 - Initialization stability: Analyze and compare several types of initialization, notably based on the connection between neural networks and forests, in order to produce neural networks that are robust to errors in the data
  • Task 4 - Optimization stability: Study in detail the double descent phenomenon in order to assess its usefulness for neural networks.
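The connection between neural networks and decision trees invoked in Tasks 1 and 3 rests on a classical observation: a decision tree can be rewritten exactly as a small two-hidden-layer network, with a first layer of split units and a second layer of leaf-indicator units, which in turn provides a tree-based initialization. Below is a minimal NumPy sketch of this encoding on a hypothetical hand-built tree; the tree, thresholds, and hard-threshold activation are illustrative assumptions (in practice a smooth activation such as tanh replaces the hard threshold so the network remains trainable).

```python
import numpy as np

# Hard-threshold activation standing in for a steep smooth activation
# (an assumption made here to keep the forward pass exact).
def step(z):
    return (z > 0).astype(float)

# Hypothetical tree on 2-d inputs:
#   if x[0] <= 0.5: predict 0.0
#   elif x[1] <= 0.3: predict 1.0
#   else:            predict 2.0

# Layer 1: one unit per internal split; a unit fires when the split
# sends x to the right child.
W1 = np.array([[1.0, 0.0],   # tests x[0] - 0.5 > 0
               [0.0, 1.0]])  # tests x[1] - 0.3 > 0
b1 = np.array([-0.5, -0.3])

# Layer 2: one unit per leaf; weights encode the path to that leaf
# (+2 = went right, -2 = went left, 0 = split not on the path), and the
# bias is chosen so the unit fires only when every test on the path matches.
W2 = np.array([[-2.0,  0.0],   # leaf: x[0] <= 0.5
               [ 2.0, -2.0],   # leaf: x[0] > 0.5, x[1] <= 0.3
               [ 2.0,  2.0]])  # leaf: x[0] > 0.5, x[1] > 0.3
b2 = np.array([-1.0, -3.0, -3.0])

leaf_values = np.array([0.0, 1.0, 2.0])

def tree_as_network(x):
    h = 2.0 * step(W1 @ x + b1) - 1.0  # split decisions in {-1, +1}
    leaves = step(W2 @ h + b2)         # one-hot leaf indicator
    return float(leaf_values @ leaves) # value of the reached leaf
```

Used as an initialization, these weights reproduce the tree's predictions exactly before training, and gradient descent can then refine both the split positions and the leaf values.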

PhD student: Ludovic Arnould

PhD supervisor: Gérard Biau

Research laboratory: LPSM - Laboratoire de Probabilités, Statistique et Modélisation