From explainable AI to understandable AI: critical, ethical, and cognitive perspectives

The growing importance of 'explainable AI' frameworks has recently sparked a debate about the meaning of the concepts of explanation and understanding in artificial intelligence. Although XAI was initially intended to increase user confidence, ensure regulatory compliance and promote fair use of AI by making models more transparent, many critics have argued that intelligibility cannot be reduced to algorithmic traceability.

This workshop proposes shifting the focus from explainable AI to understandable AI, which is understood as a broader, more contextualised epistemic, ethical and social process. Bringing together researchers from philosophy, cognitive science, AI ethics and management science, the workshop will examine the conditions under which AI systems can truly be understood by their designers, users and the institutions that regulate them. 

Critically, the workshop aims to provide interdisciplinary insights into key questions related to AI intelligibility. What forms of understanding are at play in human–AI interaction? How do organisational, institutional and regulatory contexts shape the production of so-called 'understandable' systems? Is transparency sufficient for understanding AI? Through a combination of normative and empirical approaches, the workshop will highlight the limitations of technical transparency and emphasise the importance of treating understanding as a critical, relational and contextualised engagement with artificial intelligence technologies.
Program

14h30 – 15h : Christophe Denis, Sorbonne Université | Institut de recherche pour le développement ; Unité Mixte internationale de Modélisation Mathématique et Informatique de Systèmes Complexes
From explainability to meaning-making: towards a hybrid and semiotic intelligibility of AI

15h – 15h30 : Thomas Souverain, CEA-Saclay, ENS Ulm & Sorbonne Paris XIII
Explaining AI in Three Dimensions: A Philosopher’s Perspective

15h30 – 16h : Léa Antonicelli, CEVIPOF - Sciences Po
Leveraging psychology to promote responsible AI: the case of information about training sets

16h – 16h15 : Coffee break

16h15 – 16h45 : Ahmad Aidar, Institut Louis Bachelier
Responsible AI is not only good governance, it is good business!

16h45 – 17h15 : Théophile Pénigaud de Mourgues, CEVIPOF (Sciences Po), Yale's Institution for Social and Policy Studies 
How Generative AI Threatens Public Reason and Accountability

To register

Zoom