Presented at: EC 2024

Location: Virtual

Date & Time: June 26, 2024 | 11am-1pm ET

Duration: 2 hours (two 45-minute sessions separated by a 30-minute break)

Slides: Link

Video recording: To be released


Today, machine learning and AI are being adopted at an unprecedented rate as tools for automated decision-making. This has naturally raised questions about whether these decision-making tools treat individuals or groups fairly. While fairness is a nascent subject in the AI/ML literature, it has a long history in economics, specifically in social choice theory (and, more recently, in computational social choice), where compelling mathematical notions of fairness, such as envy-freeness and the core, have played an important role in the design of fair collective decision-making algorithms.

The tutorial will spotlight an emerging literature on adopting fairness notions from social choice to design provably fair AI/ML tools. We will discuss the advantages of such notions over fairness notions proposed in the ML literature, and cover applications including classification, recommender systems, clustering, multi-armed bandits, rankings, and federated learning.

Target audience and background requirements

The intended audience broadly includes researchers working on or interested in the topic of algorithmic fairness. The tutorial will not assume any prior knowledge of social choice theory or AI/ML: fairness notions from social choice and the AI/ML application domains will be introduced from the ground up. As such, we envision the tutorial to be well suited for everyone from undergraduate students interested in working on algorithmic fairness to established faculty members already working on it. Attendees can expect to walk away with knowledge of mathematical notions of fairness stemming from social choice and how to apply them to a variety of AI/ML domains.


  • Introduction. We will start by offering an overview of the fairness literature from computational social choice. Using example applications of fair division and committee selection, we will introduce three fundamental principles from social choice theory — envy-freeness, Nash social welfare, and the core — which will be covered in detail in the three parts below.
  • Part 1: Envy-freeness. Individual fairness has been studied in machine learning to capture the principle that "similar individuals should be treated similarly". However, enforcing that two individuals receive the same treatment can be wasteful when they have different preferences. This part will introduce envy-freeness as an appealing notion of individual fairness in the presence of heterogeneous preferences and cover its applications to classification, recommender systems, and clustering. We will also discuss how envy-freeness can be integrated with a conventional ML notion of individual fairness to simultaneously address differing individual entitlements and heterogeneous preferences.
  • Part 2: Nash social welfare. Nash social welfare (NSW) is an objective that has served as a powerful tool in computer science and economics. Maximization of NSW is often seen as a notion of fairness in itself. It is also viewed as a tool for achieving efficiency (i.e., Pareto optimality) and, in many settings, other fairness guarantees such as the core and envy-freeness. This part will discuss how this powerful tool can be applied in AI/ML applications such as multi-armed bandits, ranking, and classification.
  • Part 3: The core. Group fairness has been extensively studied in machine learning, but most prominent notions require predefining the groups of people between which fairness is sought. This part of the tutorial will investigate the core as a compelling alternative, which does not require predefined groups and provides a meaningful fairness guarantee to every possible group of people. We will study the application of this fairness criterion to the problems of federated learning and clustering.
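As a concrete illustration of the first two notions above (not taken from the tutorial materials), the following Python sketch checks envy-freeness and computes the Nash social welfare of a toy allocation of indivisible goods. It assumes additive valuations; the valuation matrix and allocation are made-up example data.

```python
import math

# Hypothetical example: valuations[i][g] is agent i's value for good g,
# with utilities assumed additive over bundles of goods.
valuations = [
    [5, 1, 3],
    [2, 4, 4],
]

# allocation[i]: the set of goods assigned to agent i.
allocation = [{0}, {1, 2}]

def utility(i, bundle):
    """Agent i's additive utility for a bundle of goods."""
    return sum(valuations[i][g] for g in bundle)

def is_envy_free(allocation):
    """Envy-freeness: no agent values another agent's bundle
    strictly more than their own."""
    n = len(allocation)
    return all(
        utility(i, allocation[i]) >= utility(i, allocation[j])
        for i in range(n) for j in range(n)
    )

def nash_social_welfare(allocation):
    """Nash social welfare: the geometric mean of agents' utilities.
    Maximizing it balances efficiency with fairness."""
    utils = [utility(i, bundle) for i, bundle in enumerate(allocation)]
    return math.prod(utils) ** (1 / len(utils))

# Agent 0 values its own bundle at 5 vs. the other's at 4;
# agent 1 values its own at 8 vs. 2, so no agent envies another.
print(is_envy_free(allocation))
print(nash_social_welfare(allocation))  # geometric mean of (5, 8)
```

In larger instances, an envy-free allocation of indivisible goods may not exist, which is one reason relaxations and welfare objectives such as NSW become important.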