Course objective

The goal of this short course is to familiarize students with the ethical, legal, and policy-related considerations in machine learning.

AI’s unprecedented capabilities can foster productive dialogue, empowering individuals and societies to enhance accountability. It can also enhance inclusion and representation, supporting a thriving civic space. However, if left unchecked, AI also poses a massive threat to democratic society. For one, AI tends to perpetuate and even exacerbate inequalities, due in part to a lack of diversity. Without adequate safeguards, AI also raises privacy and surveillance concerns, especially when personal data falls into the wrong hands. There is a worrying lack of transparency with regard to the data and algorithms used by AI platforms and how they are utilized to make decisions on sensitive issues in everyday life. This makes it difficult to attribute responsibility should things go awry.

Contrary to common belief, these risks cannot be solved by mere technical intervention; rather, they are societal problems that merit solutions grounded in ethical reflection and democratic values.

Therefore, in this course we will dive deeper into these societal considerations and analyze how technology can be governed while maintaining innovation and growth. We will also aim to better understand why engineers and data scientists should be involved in the governance debate and what their contribution could be.

While the need to govern AI-based systems is evident, the mode of governance remains debatable. The course will therefore center on the three modes of governance that have been prominent in recent years. The first is governance through principles: the practice of drawing up documents such as codes of conduct, recommendations, guidelines, and best practices built around rather broadly defined principles. The second is sectoral governance: building on existing regulation in a given field, such as medicine or finance, and addressing the impact of AI on that domain. The third is transversal, overarching legislation, such as the upcoming EU AI Act.

For each modality, the goal is to highlight the societal considerations and the risks that can be exacerbated if the technology is not governed properly. The underlying idea is that governance is no longer the business of lawyers and policymakers alone; technical knowledge and expertise are essential for determining the proper mode of governance. Only an interdisciplinary perspective can ensure that the technology we are developing and using is beneficial for all.

Session organization

Course language: English

4 classes of 3 hours each

Session 1: AI governance intro

In this introductory session, we will start unpacking the regulatory debate, discuss why AI needs to be regulated at all, and consider what lessons we can learn from the governance journey of other emerging technologies. We will then turn to the trend of governing through principles. In recent years, many countries, private companies, standards bodies, and international organizations have called for governing AI through principles such as fairness, accountability, transparency, and human agency. We will analyze one of the proposed documents for the regulation of AI and debate its efficacy.


Session 2: sectoral example

The main criticism of the governance-by-principles mode is that principles and guidelines are too broad to be implemented in specific cases, and too vague to create noticeable change in the way algorithms are trained and used on a daily basis. General recommendations such as transparency, safety, and accountability are not easily translated into either legal or computer science practice. For example, in some domains transparency could mean revealing the actual code and understanding how a decision about each individual was reached, while in other domains a grasp of the general process would be sufficient. Because of this, there has been a push towards governing AI sectorally. The argument is that governance will not hinder innovation, as there are plenty of heavily regulated sectors in which innovation continues to prosper.

In this session, we will concentrate on one domain, the medical domain, analyze how AI is used there, and discuss examples of attempts to govern that use through sectoral laws.


Session 3: transversal laws, the EU AI Act and beyond

The most ambitious attempt to govern AI is the EU AI Act, currently pending endorsement by the European Parliament. The aim of the act is to ensure that AI systems placed on the EU market and used in the Union are safe and respect existing law on fundamental rights and Union values. The proposal follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty. It promotes investment and innovation in AI, enhances governance and effective enforcement of existing law on fundamental rights and safety, and facilitates the development of a single market for AI applications.

The act goes hand in hand with other initiatives such as the Digital Markets Act, the Digital Services Act, and the liability rules, which together aim to form a single set of rules applying across the whole EU: to create a safer digital space in which the fundamental rights of all users of digital services are protected, and to establish a level playing field fostering innovation, growth, and competitiveness, both in the European Single Market and globally.

In this session, we will shed light on these different regulations and assess the format of this set of transversal laws.


Session 4: foundation models

In this session, we will apply the knowledge gained in the previous sessions to the example of large language models.

Foundation models, trained on huge datasets using immense computing resources, open up many new possibilities for users, with potentially transformative implications for how they learn, work, communicate, and find and synthesize information. However, it is already clear that these models could be associated with potential harms on an equally large scale. The well-known risks of AI related to biased and discriminatory outcomes, safety and reliability concerns, and impacts on labor markets and on children and youth, among others, have grown significantly in line with the enhanced capacities of LLMs. Preliminary assessments confirm that LLMs can deliver misleading, inaccurate, or false information without making this clear to the user. Their impact on science, research, education, and work is also magnified by the range of tasks these tools can perform. This adds to the list of unknowns that augment the risks in human-machine interaction.

Thus, LLMs could serve as a good example for debating whether the different governance modalities covered in the class would be helpful in this case, and how such a complex topic could potentially be governed.

Assessment

Participation: 10%; in-class presentation of the reading materials: 30%; final assignment (a blog post reflecting on what was studied in class): 60%

Topics covered

Ethics of artificial intelligence; AI governance; ethical considerations of large language models; regulation

Instructors


See the other courses of the second semester