The Paris Conference on AI & Digital Ethics

4th edition

Call for papers

The Paris Conference is a cross-disciplinary and cross-sectoral event welcoming academics from various disciplines and stakeholders in the development of digital technologies from industry, civil society, policy and politics. Specialists from the humanities, social sciences and computational sciences are invited to combine their methods to examine the changes taking place in our societies and to steer the development of technologies towards the common good.

The Paris Conference pursues two goals. As a research conference, it aims to advance the state of the art of research on the ethical, societal, and political implications of AI and digital technologies. As a social event, it offers a space for an international community of experts to gather, foster an open dialogue on major issues underlying the development of sociotechnical systems, and collaborate to address these issues.

 

Important information

The conference will take place in Paris in June 2026. Submissions are welcome until December 20, 2025. Authors whose papers are selected will be notified in early February 2026. They will be invited to present their papers at the conference and to publish them in the Paris Journal on AI & Digital Ethics, the conference’s official academic journal.

The conference particularly welcomes transdisciplinary approaches, hybrid methods and cross-sector collaborations. It also values critical approaches that propose innovative solutions rather than elaborating on already well-known risks and problems. Submissions should be academic in nature and format. By submitting an abstract, all contributors acknowledge that, if selected, they are expected to attend the conference in person to present their paper, and agree to provide the organisers with a full version of their paper by May 15, 2026, for publication in the Paris Journal on AI & Digital Ethics.

 

Scope

Academic and industry researchers from any relevant discipline are invited to submit a contribution.

Both fundamental research with foreseeable implications and more applied research with direct impacts on public and private organisations are welcome. Proposals should be explicitly linked to one of the four following tracks.

1 - AI and health

AI systems are increasingly used in healthcare, with strong results in analysing radiographs to suggest diagnoses, identifying signals predictive of future conditions, and discovering molecules for new treatments. It has also been argued that AI technologies can not only help to cure and repair individuals but also "enhance" them, overcoming some human bodily limitations through a range of transhumanist interventions such as artificial prostheses, brain–computer interfaces, and neural implants.

This track welcomes contributions that address the bioethical and societal challenges associated with AI (bio)medical technologies. Topics may include: moral dilemmas in a hospital's resource management (e.g., during pandemics); responsibility and agency in AI-assisted medical decisions; the ethical and societal implications of expanded predictive capacities (e.g., meta-dilemmas in preventive healthcare); the use of artificial agents for therapeutic purposes, for preserving someone's memory, or for enacting a person's "will" after death (e.g., "deathbots"); philosophical reflections on the kinds of humans envisioned by transhumanists and posthumanists; the societal impact of building communities in which people adopt different levels of technological enhancement; and concerns related to the pursuit of immortality. We particularly encourage well-evidenced papers examining whether current-generation LLMs have contributed to a mental-health crisis and, if so, whether there are "safe" ways to deploy such technologies for human augmentation.

2 - AI and the environment

AI systems have a significant environmental footprint, which remains difficult to estimate because it must be assessed holistically, taking into account the impacts of the alternative systems that AI replaces. Emerging applications also show great potential for using AI to help preserve the environment, e.g., tracking biodiversity or predicting the climate impacts of interventions; such analysis and logistics approaches do not require the immense overheads of generative approaches. This track welcomes contributions such as new frameworks, metrics and standards to estimate AI systems' carbon footprint; innovative approaches and AI-based solutions to improve the management of energy resources and protect natural ecosystems; and innovative ideas and reflections to better advance this goal (e.g., psychological studies, nudges, philosophical reflections).

3 - AI and work

The capacity of generative AI systems to perform complex tasks traditionally handled by human workers may imply profound changes in the work environment, and in society at large; yet so far, evidence of widespread productivity impacts is lacking.

Meanwhile, concerns about the misuse of AI abound, ranging from "modernisation" being used as an excuse to reduce headcounts, to increasing busywork and decreasing work quality as employees offload tedious tasks to generative AI with insufficient caution, oversight, or concern for impacts. This track welcomes contributions addressing such changes: from economic perspectives on the destruction, creation, and transformation of jobs in specific industries, to sociological questions about the evolving meaning and value of work for individuals, to the changing skills that workers and managers will need in this new environment. Other suggested topics include AI-assisted tools and frameworks to improve education, and new models of "work," wages, and shared global investment in the inputs and infrastructure underlying generative AI.

4 - AI and societal interactions

The increasing adoption of generative AI tools by individuals and enterprises has produced a range of use cases that carry risks for users, including the fostering of deep emotional attachments and even dependency, and is potentially changing our experience of, and expectations for, human relationships in a variety of ways. This track welcomes contributions that increase understanding of our interactions with LLMs and characterise our relations with them, document the cognitive and psychological impacts of affective attachment and anthropomorphism, suggest solutions to prevent risky interactions, and examine changes in social interactions, e.g., overreliance on LLMs' answers, cognitive changes in individuals' decision-making processes, loss of critical thinking, and the impact of AI assistance on human-to-human interactions.

Format

We invite researchers to submit a 400-to-500-word abstract in English, followed by a short bibliography (5 references maximum), by December 20, 2025, at 11:59 p.m. CET. Abstracts should be submitted as a PDF file via the conference's website (here) and must comply with this template.

Abstracts submitted via a different channel or failing to comply with the template's format will not be considered.

Authors of the selected abstracts will be responsible for submitting the full version of their paper (5,000 words, ± 10%) via the conference's website by May 15, 2026, at 11:59 p.m. CET. Contact: committee@paris-conference.com.