Human Rights Toolbox
19 Jan 2024, 09:00 to 26 Jan 2024, 17:00
This course offers a comprehensive understanding of the interplay between advanced AI systems and the fundamental principles of human rights. Through a blend of theoretical knowledge and practical applications, you will learn to navigate the complexities of AI technology with a human-centered approach. This course is a commitment to shaping the future of technology in a way that respects and enhances human dignity.
#AIforHumanity #TechWithPurpose
LEARNING OUTCOMES
Content / Knowledge
AI systems can cause harm that contradicts human rights, for example through discriminatory outcomes
AI systems exist in complex interactions with their environment. It is therefore essential to acquire a detailed understanding of the context in which a system will be embedded, to ensure that the system is in accordance with human rights
It is essential to meaningfully involve affected communities throughout the AI lifecycle: all communities that will be impacted by the system should have the agency to shape it to their needs, values, and concerns
An approach to AI development that has human rights at its core cannot be an add-on but requires reflections and actions throughout the AI lifecycle
Methodological skills
Students should be able to:
Understand the core values of human rights and how these are relevant to AI systems
Be familiar with a human rights based approach to AI development, deployment, and post-deployment updates
Internalize that technical solutions are not neutral, but instead have to be selected consciously and with the application context and affected communities in mind
Transferrable/Application
Apply the learned content to their own projects, aided by training case studies and checklists
Understand how to analyze the ecosystem of values in which AI systems operate, a skill that can be applied to future projects
The course is split into two separate courses that build upon each other (five modules in total). It starts with a focus on human rights and gradually introduces their interrelation with data science and how a human rights-based approach can be used in practice. Part 1: The first part of this course provides a comprehensive introduction to human rights and their critical role in the development of AI. Through real-world examples, you’ll gain insight into how bias and discrimination can infiltrate the AI lifecycle. In the final module, we’ll explore fairness concepts in AI development and learn tools to assess and enhance fairness in AI systems.
The course is divided into five modules over the two courses:
Module 1: Human Rights and AI Systems
Understanding the basics of human rights
The significance of AI systems in the context of human rights
Module 2: How Discrimination Enters the AI Lifecycle
Identifying potential areas for bias and discrimination
Real-life examples of AI systems that have raised concerns
Module 3: Fairness Concepts & Metrics
Defining fairness
Common fairness metrics and their advantages & disadvantages
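To give a flavour of what Module 3 covers, here is a minimal sketch (not course material) of how two widely used fairness metrics can be computed by hand for a binary classifier. The function names, group labels, and toy data below are invented for illustration only.

```python
def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates["A"] - rates["B"])

def equal_opportunity_gap(y_true, y_pred, groups):
    """Absolute difference in true-positive rates (recall) between groups A and B."""
    tpr = {}
    for g in set(groups):
        # Keep only the genuinely positive cases in this group.
        preds = [p for t, p, gr in zip(y_true, y_pred, groups) if gr == g and t == 1]
        tpr[g] = sum(preds) / len(preds)
    return abs(tpr["A"] - tpr["B"])

# Toy data: four people per group (hypothetical, for illustration).
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]

print(demographic_parity_gap(y_pred, groups))         # A: 3/4 vs B: 1/4 -> 0.5
print(equal_opportunity_gap(y_true, y_pred, groups))  # A TPR 1.0 vs B TPR 0.5 -> 0.5
```

The two metrics can disagree on the same predictions, which is one reason the module discusses the advantages and disadvantages of each rather than prescribing a single definition of fairness.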
Part 2: The second part of our two-course series builds on the foundational knowledge from Part One and guides you through practical implementations of a human rights-based approach in AI development. We start by exploring how to use human rights-based approaches, focusing on principles, ethical considerations, and essential questions that help guide the design of such systems. In the final module, you’ll delve into case studies to analyze how this approach can be used in practice.
Module 4: Integrating Human Rights-based Approaches Along the AI Lifecycle
Understand how to introduce Human Rights considerations along the AI development pipeline
Use questions and reflections to understand the context of the system’s development
Module 5: Case Studies: Putting the Human Rights-based Approach Into Practice
Analyze how to apply the human rights-based approach to the development of a model, illustrated through two case studies