Abstract
Automated decision-making algorithms are increasingly deployed and significantly affect people's lives. Recently, there has been growing concern that such algorithms may systematically discriminate against minority groups of individuals. Thus, developing algorithms that are "fair" with respect to sensitive attributes has become an important problem.
In this talk, I will first introduce the motivation for "fairness" in real-world applications and how to model "fairness" in theory. Then I will present recent progress in designing algorithms that maintain fairness requirements for automated decision-making tasks, including multiwinner voting, personalization, classification, and clustering.
Time
2019-05-29 10:00 ~ 11:00
Speaker
Lingxiao Huang, EPFL
Room
Room 602, School of Information Management & Engineering, Shanghai University of Finance & Economics