Decision-making in education is increasingly supported by data-driven algorithms. However, the availability of the data, its variety (demographic, academic, behavioral, motivational, cognitive, etc.), and its statistical properties challenge such decisions, which could affect the success or the orientation of students and, on a larger scale, educational policies. It is therefore crucial that not only the data sets but also the AI algorithms include fairness as part of their design.
Recent works have demonstrated that correcting some biases led to similar prediction performance of the learning algorithms used, and a method was proposed to measure both the prediction performance and the fairness on different subgroups of the test set for more robustness. These works emphasize the importance of explainable and trustworthy AI when a decision could affect individuals, as supported by the General Data Protection Regulation (GDPR) directives. Nonetheless, a fair decision should be based on several criteria, all of which need to be assessed.
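The idea of jointly measuring prediction performance and fairness per subgroup can be sketched as follows. This is an illustrative example only, not the method from the works cited above; the toy data, the subgroup labels, and the choice of positive-prediction-rate gap as the fairness indicator are all assumptions.

```python
# Illustrative sketch: evaluating prediction performance and fairness
# per subgroup of a test set. Data and metric choices are hypothetical.

def subgroup_report(y_true, y_pred, groups):
    """Return per-subgroup accuracy and positive-prediction rate.

    The gap in positive-prediction rates across subgroups is one common
    fairness indicator (demographic parity difference).
    """
    report = {}
    for g in sorted(set(groups)):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        positives = sum(y_pred[i] for i in idx)
        report[g] = {
            "accuracy": correct / len(idx),
            "positive_rate": positives / len(idx),
        }
    return report

# Toy example: binary predictions for two demographic subgroups A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

report = subgroup_report(y_true, y_pred, groups)
gap = abs(report["A"]["positive_rate"] - report["B"]["positive_rate"])
```

Reporting both quantities per subgroup makes it visible when two groups have similar accuracy yet very different prediction rates, which a single aggregate score would hide.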
In this research work, we will thus develop a method for multi-criteria analysis of the global fairness of the algorithms used in Educational Data Mining (EDM). In addition to focusing on bias evaluation and correction, we will explore different multi-criteria approaches, which may include ensemble methods and meta-learning. The expected results would contribute to the Fairness, Accountability, and Transparency (FAT*) community and to educational applications.
PhD student: Mélina Verger
PhD supervisor: Vanda Luengo
Research laboratory: LIP6 – Laboratoire d’Informatique de Paris 6