A recent flurry of research activity has attempted to quantitatively define "fairness" for decisions based on statistical and machine learning (ML) predictions. In this talk, we first explicate the various choices and assumptions made---often implicitly---to justify the use of prediction-based decisions. We then show how these choices and assumptions can raise fairness concerns, and we present a notationally consistent catalogue of fairness definitions from the ML literature. In doing so, we hope to start a conversation about the choices, assumptions, and fairness considerations of prediction-based decision systems.
Shira Mitchell is a statistician working in politics. After completing her PhD at Harvard and a postdoc at Columbia, she worked at Mathematica Policy Research on small area estimation and causal inference for federal agencies (mostly Medicare and Medicaid). She then worked at the NYC Mayor's Office of Data Analytics (MODA), deploying and critiquing data-driven policy. She now works at Civis Analytics on the company's political research team in NYC.