Colloquium organized by Alexandre Gefen (CNRS-Sorbonne Nouvelle), Glenn Roe (Sorbonne-Université), Ayla Rigouts Terryn (Université de Montréal) and Michael Sinatra (Université de Montréal).

In association with the Observatoire des textes, des idées et des corpus (ObTIC), the Centre de recherche interuniversitaire sur les humanités numériques (CRIHN), and Huma-Num.

Plenary speakers: Clarisse Bardiot (Université Rennes 2) and Geoffrey Rockwell (University of Alberta)

Large Language Models (LLMs), whether widely used, highly aligned systems like ChatGPT or open-source alternatives, have in just a few years demonstrated remarkable capacities for translating, analyzing, rewriting, and synthesizing documents, and even for generating computer code. They have established themselves as revolutionary tools for augmenting our linguistic and cognitive capabilities. Digital Humanities (DH), a field that has long employed machine learning-based tools (such as text clustering, topic modeling, embeddings, and vector analyses), now faces a new question: how might LLMs be used within the field? What traditional tasks can they take on, and what new forms of analysis might they enable? Beyond their utility for tasks such as named entity recognition or sentiment analysis, these models seem poised to enable unprecedented or significantly accelerated textual analyses: character identification, classification of texts by narrative modality, thematic analysis, and more. They also offer substantial efficiencies for tasks like scripting or creating visualizations through AI-assisted code generation. Looking further ahead, techniques such as Retrieval-Augmented Generation and fine-tuning suggest the possibility of engaging directly with texts through question-and-answer modalities, opening up a whole new domain of innovative analysis, not to mention the potential to generate images informed by textual analysis.

At the same time, these practices raise concerns, particularly around the biases inherent in LLMs and the challenges they pose to interpretability, explainability, and falsifiability, given their tendency to produce ‘hallucinations’.
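To make one of these possibilities concrete, here is a minimal sketch of LLM-assisted character identification in Python. The openai client, the model name, and the prompt wording are illustrative assumptions rather than a prescribed method; an open-source model served through a compatible API could be substituted without changing the structure.

    # Minimal sketch: prompting a chat-based LLM to identify characters
    # in a literary passage. Assumes the openai Python package and an
    # API key in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    passage = (
        "Emma watched Léon cross the square; Charles, absorbed in his "
        "ledgers, noticed nothing."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not an endorsement
        messages=[{
            "role": "user",
            "content": "List the characters named in this passage, "
                       "one per line, with no commentary:\n\n" + passage,
        }],
        temperature=0,  # reduce variation across runs
    )

    print(response.choices[0].message.content)

The same pattern, a task instruction followed by the text under study, extends to sentiment analysis, classification by narrative modality, or thematic tagging simply by changing the instruction.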

This conference aims to explore these horizons by welcoming both general reflections and innovative experiments.

Proposals should be sent to glennroe@gmail.com, gefen@fabula.org, ayla.rigouts.terryn@umontreal.ca and michael.eberle.sinatra@umontreal.ca by April 1, 2025.