Colloquium organized by Alexandre Gefen (CNRS-Sorbonne Nouvelle), Glenn Roe (Sorbonne-Université), Ayla Rigouts Terryn (Université de Montréal) and Michael Sinatra (Université de Montréal).

In association with the Observatoire des textes, des idées et des corpus (ObTIC), the Centre de recherche interuniversitaire sur les humanités numériques (CRIHN), and Huma-Num.

Program

Thursday, July 3, 2025 @ Paris Institute for Advanced Study

9:30–9:45 AM: Welcome remarks by the organizers
9:45–10:45 AM: Keynote Lecture by Geoffrey Rockwell (University of Alberta), “Care and Repair for Responsibility Practices in Artificial Intelligence”

10:45–11:45 AM: Session #1 (Chair: Alexandre Gefen)

11:45 AM–12:15 PM: Coffee break

12:15–1:15 PM: Session #2 (Chair: Ayla Rigouts Terryn)

1:15–2:15 PM: Lunch break

2:15–3:15 PM: Session #3 (Chair: Michael Sinatra)

3:15–3:45 PM: Coffee break

3:45–4:45 PM: Session #4 (Chair: Glenn Roe)

Friday, July 4, 2025 @ Campus Pierre et Marie Curie, Amphi 55B

9:00–10:00 AM: Session #5 (Chair: Alexandre Gefen)

10:00–10:30 AM: Coffee break

10:30 AM–12:00 PM: Session #6 (Chair: Michael Sinatra)

12:00–1:00 PM: Lunch break

1:00–2:30 PM: Session #7 (Chair: Ayla Rigouts Terryn)

2:30–3:00 PM: Coffee break

3:00–4:00 PM: Session #8 (Chair: Glenn Roe)

4:00–4:15 PM: Closing remarks by the organizers

Please fill out this form to register for the conference and to gain access to the buildings.

-------------------------------------------------------------------------------------------------------------

Large Language Models (LLMs)—whether widely used, highly aligned systems like ChatGPT or open-source alternatives—have, in just a few years, demonstrated remarkable capacities for translating, analyzing, rewriting, and synthesizing documents, and even for generating computer code. They have established themselves as revolutionary tools for augmenting our linguistic and cognitive capabilities. Digital Humanities (DH), which have long employed machine-learning-based tools (such as text clustering, topic modeling, embeddings, and vector-based analyses), are now faced with a new question: how might LLMs be used within the field? What traditional tasks can they undertake, and what new forms of analysis might they enable? Beyond their utility for tasks such as named entity recognition or sentiment analysis, these models seem poised to facilitate unprecedented or significantly accelerated textual analyses—character identification, classification of texts based on narrative modalities, thematic analysis, and more. They also offer substantial efficiencies for tasks like scripting or creating visualizations through AI-assisted code generation. Looking further ahead, tools such as Retrieval-Augmented Generation (RAG) or fine-tuning suggest the possibility of engaging directly with texts through question-and-answer modalities, opening up a whole new domain of innovative analysis—not to mention the potential to generate images informed by textual analysis. At the same time, these practices raise concerns, particularly around the biases inherent in LLMs and the challenges they pose to interpretability, explainability, and falsifiability, given their tendency to produce ‘hallucinations’.
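
As a concrete illustration of the kind of LLM-assisted textual analysis evoked above, the short Python sketch below asks a chat model to identify the characters in a brief passage. It is only a sketch under stated assumptions: it presumes access to the OpenAI Python client and an API key, and the model name, prompt wording, and sample passage are illustrative choices, not tools endorsed or prescribed by the conference.

# Minimal sketch: character identification in a passage via an LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

passage = (
    "Emma watched the carriage disappear down the lane while Charles, "
    "oblivious, went on discussing the harvest with the notary."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model could be substituted
    messages=[
        {
            "role": "system",
            "content": (
                "You are a literary analyst. List the characters named in the "
                "passage and briefly describe what each is doing."
            ),
        },
        {"role": "user", "content": passage},
    ],
)

# The model's answer is returned as plain text in the first choice.
print(response.choices[0].message.content)

The same prompting pattern scales from a single passage to a corpus-level pipeline, which is precisely where the questions of bias, interpretability, and falsifiability raised above become pressing.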

This conference aims to explore these horizons by welcoming both general reflections and innovative experiments.