As technological innovation accelerates the development of AI systems, their growing capacity to generate outputs, influence behavior, and make decisions across an expanding range of contexts raises key questions about trustworthiness in their design, deployment, and use. While trustworthiness in AI can mean different things depending on the perspective taken, recent advances have propelled regulatory frameworks, such as the EU’s AI Act, into the spotlight. Beyond shaping industry practices, these legislative developments also stimulate and necessitate new avenues of scientific research.
The goal of this symposium is to foster exchange among key stakeholders from the scientific and policy communities on what trustworthy AI means in theory and in practice, particularly in the context of the open Internet. Through this dialogue, we aim to address privacy, fairness, accountability, transparency, and explainability as the foundations of trustworthy AI.
Held in anticipation of, and under the label of, the Paris AI Action Summit, this symposium will showcase key initiatives from Criteo and other leading organizations. Through two general sessions and an accessible scientific session, we will explore how research initiatives enable us to anticipate and address regulatory needs, ultimately contributing to a more responsible, innovative, and trustworthy ecosystem.