Context and Motivation

Machine learning models are part of our daily lives, and their weaknesses can have a significant (direct or indirect) impact on our society. This raises a number of questions about machine learning algorithms and how much users should trust them. Although there is no clear consensus yet on a definition of trust in machine learning, recurring themes such as data protection, social bias and fairness, and robustness to data or input corruption often resurface. These concerns are sometimes jointly referred to as trustworthy machine learning and have recently attracted a lot of attention. Furthermore, the legal framework in Europe is evolving, forcing practitioners in both the private and public sectors to adapt quickly to these concerns. In short, the deployment of machine learning in real-world systems, and the risks it can induce, make trust an important issue for both academia and industry. The aim of this “thematic morning on trustworthy machine learning” is to bring together researchers who work in this (broad) field, or who are simply interested in the topic, in order to facilitate fruitful discussions and possible collaborations.

Organization

The thematic morning will be held from 9 a.m. to 1 p.m. on June 6, 2024, at the Sorbonne Center for Artificial Intelligence (SCAI). The event is structured around three presentations, each addressing one aspect of trustworthy machine learning. More specifically, we are very pleased to welcome the following speakers, who will discuss three important topics in trustworthy machine learning.

Program

9:00-9:05: Welcome statement

9:05-10:00: Solenne Gaucher (ENSAE) will talk about fairness-aware machine learning algorithms

10:05-11:00: Teddy Furon (INRIA) will talk about robustness to input corruption and stability to noise in machine learning

11:05-12:00: Marc Tommasi (Université de Lille) will talk about privacy-preserving and decentralized machine learning

12:00-13:00: Lunch at SCAI

Registration

Please register your participation by filling out the following form.