About the Series
- Date: Friday, June 8, 2018.
- Location: Kellogg Global Hub 4101, Northwestern University, 2211 Campus Dr, Evanston, IL 60208.
- Transit: Noyes St. Purple Line.
- Parking: Validation for the North Campus Parking Garage available at the workshop.
- Registration: Registration is encouraged but not required: https://goo.gl/forms/gmSCt9dVf20ZquzE2. Please bring your own name badge from a past conference if you have one.
- 8:30-9:00: Continental Breakfast
- 9:00-9:05: Opening Remarks
- 9:05-9:45: Jon Kleinberg:
Algorithmic Fairness Criteria for Evaluating Individuals and Sets
- 9:45-9:50: Jon Kleinberg Q/A
- 9:50-10:30: Aaron Roth:
Preventing Fairness Gerrymandering in Machine Learning
- 10:30-10:35: Aaron Roth Q/A
- 10:35-11:05: Coffee Break
- 11:05-11:45: Omer Reingold:
On Algorithmic Fairness Between Groups and Individuals
- 11:45-11:50: Omer Reingold Q/A
- 11:50-12:30: Nicole Immorlica:
Fair Classification and Learning
- 12:30-12:35: Nicole Immorlica Q/A
- 12:35-1:30: Lunch
- 2:00-4:00: Student meetings with speakers
Titles and Abstracts
Speaker: Jon Kleinberg
Title: Algorithmic Fairness Criteria for Evaluating Individuals and Sets
Abstract: Recent discussion in the public sphere about classification by algorithms has involved tension between competing notions of what it means for such a classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates, and discuss recent research establishing inherent trade-offs between these conditions. We also consider a variety of methods for promoting fairness and related notions for classification and selection problems that involve sets rather than just individuals.
This talk is based on joint work with Sendhil Mullainathan, Manish Raghavan, and Maithra Raghu.
Speaker: Aaron Roth
Title: Preventing Fairness Gerrymandering in Machine Learning
Abstract: The most prevalent notions of fairness in machine learning are statistical notions: they fix a small collection of high-level, pre-defined groups (such as race or gender), and then ask for approximate parity of some statistic of the classifier (like positive classification rate or false positive rate) across these groups. Constraints of this form are susceptible to (intentional or inadvertent) fairness gerrymandering, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes (such as certain combinations of protected attribute values). Individual notions of fairness avoid this problem, but at the cost of needing to make often unrealistic assumptions about the setting. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. We will show how to achieve this in polynomial time, under the assumption that we are given oracles for solving the unconstrained learning problems (for both the set of classifiers to be learned, and the set of subgroups to be protected). Our algorithm casts the problem as solving for an equilibrium in a zero sum game, and observes that learning oracles are enough to efficiently play no-regret learning dynamics. We then demonstrate experimentally that our proposed algorithms are practical, by investigating their performance on several real datasets when instantiated with learning heuristics in place of oracles.
This talk is based on joint work with Michael Kearns, Seth Neel, and Steven Wu.
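As a rough illustration of the auditing step the abstract describes (not the paper's actual algorithm), the sketch below searches a structured class of subgroups, here all conjunctions over two binary protected attributes, for the subgroup whose positive classification rate deviates most from the overall rate. The attribute names and data are invented for the example.

```python
# Hypothetical auditor: find the subgroup (conjunction over two binary
# protected attributes) with the largest statistical-parity violation.
from itertools import product

def audit(rows, decisions):
    """rows: list of dicts with binary attrs 'race' and 'gender' (0/1).
    decisions: the classifier's 0/1 outputs, aligned with rows.
    Returns (subgroup, disparity) for the worst subgroup, where a
    subgroup fixes race, gender, or both (None = attribute left free)."""
    overall = sum(decisions) / len(decisions)
    worst, worst_gap = None, 0.0
    for r, g in product([None, 0, 1], repeat=2):
        idx = [i for i, row in enumerate(rows)
               if (r is None or row["race"] == r)
               and (g is None or row["gender"] == g)]
        if not idx:
            continue
        rate = sum(decisions[i] for i in idx) / len(idx)
        if abs(rate - overall) > worst_gap:
            worst, worst_gap = (r, g), abs(rate - overall)
    return worst, worst_gap

# A "gerrymandered" classifier: fair on race alone and on gender alone,
# but maximally unfair on their intersections.
rows = [{"race": r, "gender": g} for r in (0, 1) for g in (0, 1)]
decisions = [1, 0, 0, 1]
worst, gap = audit(rows, decisions)
print(worst, gap)
```

Each marginal group (race=0, race=1, gender=0, gender=1) has a positive rate of exactly 0.5, matching the overall rate, yet the auditor flags the intersection with a disparity of 0.5, which is precisely the failure mode the abstract calls fairness gerrymandering. In the full algorithm, this auditor would be one player in the zero-sum game, with the learner reweighting the data in response.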
Speaker: Omer Reingold
Title: On Algorithmic Fairness Between Groups and Individuals
Abstract: As algorithms increasingly inform and influence decisions made about individuals, it is increasingly important to address concerns that these algorithms might be discriminatory. Historically, definitions of fairness fell into one of two extremes: (1) broad-strokes statistical guarantees; (2) individual-level protections. Statistical guarantees tend to be much easier to satisfy (both information-theoretically and computationally), but tend to be much weaker in the protections they provide. We will discuss two recent works towards bridging the gap between statistical and individual protections. These works provide efficient learning algorithms that also ensure every subpopulation within some rich class is protected (according to some fairness notion).
One of the notions we will discuss is that of multicalibration — a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. The second notion studies fair classification within the versatile framework of Dwork et al. [ITCS 2012], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike previous works on metric-based fairness, we study the setting where a learning algorithm does not know the entire metric, but can query similarities between particular pairs of individuals a bounded number of times. We note that we do not make any assumption on the metric and are still able to obtain meaningful protection from systemic discrimination that we refer to as “metric multifairness.”
The talk will focus on the various ways in which algorithmic fairness can be defined, but will also allude to some of the ways in which it can be obtained. It is based on joint works with Úrsula Hébert-Johnson, Michael P. Kim, and Guy Rothblum.
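The iterative flavor of multicalibration can be sketched as a simple boosting-style loop, shown below under toy assumptions (the subgroup collection, data, and tolerance alpha are all illustrative, and the real algorithm handles learned predictors rather than raw prediction lists): repeatedly audit a predictor against each subgroup in a collection and patch it wherever its average prediction drifts from the average outcome.

```python
# Toy multicalibration loop: audit a predictor on a collection of
# (possibly overlapping) subgroups and correct any subgroup whose mean
# prediction deviates from its mean label by more than alpha.
def multicalibrate(preds, labels, subgroups, alpha=0.05, max_iters=100):
    """preds: floats in [0, 1]; labels: 0/1 outcomes;
    subgroups: dict name -> list of indices into preds/labels.
    Returns a patched copy of preds."""
    preds = list(preds)
    for _ in range(max_iters):
        violated = False
        for name, idx in subgroups.items():
            gap = (sum(preds[i] for i in idx)
                   - sum(labels[i] for i in idx)) / len(idx)
            if abs(gap) > alpha:
                violated = True
                for i in idx:
                    # shift predictions toward the subgroup's true rate,
                    # clamped back into [0, 1]
                    preds[i] = min(1.0, max(0.0, preds[i] - gap))
        if not violated:      # every subgroup is calibrated within alpha
            return preds
    return preds

# A predictor that is calibrated overall (mean 0.5) but miscalibrated on
# both subgroups: group A's true rate is 1.0, group B's is 0.0.
calibrated = multicalibrate([0.5] * 10, [1] * 5 + [0] * 5,
                            {"A": list(range(5)), "B": list(range(5, 10))})
print(calibrated)
```

Because subgroups may overlap, patching one can break another; the outer loop reruns the audit until no subgroup is violated, which is why the full result requires a convergence argument.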
Speaker: Nicole Immorlica
Title: Fair Classification and Learning
Abstract: Classification, learning, and optimization algorithms are increasingly used to make decisions of social consequence. An automated resume reader classifies applicants, deciding which are worth interviewing. A learning algorithm estimates the impact of a drug on health outcomes. A facility location algorithm chooses where to locate new warehouses. For these algorithms to be useful, practitioners strive to maximize their accuracy and optimality. However, this can result in differential treatment across population subgroups, creating a sense of inequity. Fairness metrics are used to measure the degree of inequity. In this talk, we explore how to jointly optimize fairness and accuracy using, as a black box, existing algorithms that optimize accuracy or optimality. Our first solution is tailored to classification tasks and proposes decoupling the classification task, using different classifiers for each subgroup. Our second solution can optimize any smooth and continuous function of accuracy and fairness in classification, learning, and optimization settings. It further has the advantage that subgroup membership, while used to train, is not used at run time. This additional power comes with the cost of added computational complexity in training.
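The decoupling idea in the first solution can be illustrated with a minimal sketch, assuming a toy one-dimensional threshold learner and made-up data (neither is from the talk): instead of fitting one pooled model, fit the same black-box learner separately within each subgroup.

```python
# Decoupled classification sketch: one threshold classifier per subgroup
# versus a single pooled threshold that must compromise between groups.
def fit_threshold(xs, ys):
    """Pick the threshold on a 1-D feature maximizing training accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum((x >= t) == y for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def decoupled_fit(data):
    """data: list of (group, x, y) triples. Returns a threshold per group."""
    by_group = {}
    for g, x, y in data:
        by_group.setdefault(g, ([], []))
        by_group[g][0].append(x)
        by_group[g][1].append(y)
    return {g: fit_threshold(xs, ys) for g, (xs, ys) in by_group.items()}

# Toy data: group A's label turns positive at x >= 5, group B's at x >= 2.
data = ([("A", x, x >= 5) for x in range(10)]
        + [("B", x, x >= 2) for x in range(10)])

models = decoupled_fit(data)                       # per-group thresholds
pooled_t = fit_threshold([x for _, x, _ in data],
                         [y for _, _, y in data])  # one shared threshold
print(models, pooled_t)
```

Here each per-group threshold is perfectly accurate on its own subgroup, while any single pooled threshold misclassifies points in one group or the other, which is the accuracy/fairness tension the decoupling approach targets.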