FairVis — Discovering Biases in Machine Learning Using Visual Analytics

Alex Cabrera
Sep 1, 2019


Over 10 million Americans are jailed every year, and for each defendant a judge must decide whether to grant bail. For violent crimes or repeat offenders, a judge will often deny bail until trial. Judges weigh a variety of factors when making bail decisions, but those decisions have been shown to be racially biased. Machine learning (ML) has been touted as a panacea for decision making, including bail decisions, and is pitched as being more efficient, intelligent, and fair. Unfortunately, research has shown that ML models can learn and exacerbate societal biases, often performing as poorly as human judges when it comes to fair decision making.

We developed FairVis to enable users to discover which biases a machine learning model may have encoded. It allows users to quickly generate and investigate different populations of people (e.g. all Hispanic males) to check if a model is treating them unfairly. Additionally, we developed and incorporated an algorithm that automatically generates and suggests underperforming populations a user may want to examine. The interactive web-based system allows users to quickly audit their datasets and models.

Above we show the FairVis interface loaded with the COMPAS recidivism prediction dataset. Subgroups for every race/gender combination have been generated to audit the model for intersectional bias. (A) shows the distribution of instances over the dataset’s features and allows users to generate subgroups. (B) shows how subgroups perform on each selected performance measure. (C) allows for deeper exploration of a group’s makeup and performance. (D) suggests groups that are likely underperforming and finds similar subgroups.
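To make this concrete, here is a minimal sketch, in Python with pandas, of the kind of subgroup audit FairVis automates. This is not FairVis’s implementation; the tiny DataFrame and its column names (race, sex, label, pred) are hypothetical stand-ins for a real dataset like COMPAS.

```python
import pandas as pd

# Toy audit data: one row per defendant, with the model's prediction and the
# true outcome. Column names and values are purely illustrative.
df = pd.DataFrame({
    "race":  ["African-American", "Caucasian", "Hispanic", "African-American"],
    "sex":   ["Male", "Female", "Male", "Female"],
    "label": [1, 0, 0, 1],   # ground-truth outcome
    "pred":  [1, 0, 1, 0],   # model prediction
})

# Generate every intersectional subgroup (e.g. "Hispanic male") and compute
# a per-group performance metric: plain accuracy.
subgroups = (
    df.assign(correct=df["label"] == df["pred"])
      .groupby(["race", "sex"])
      .agg(size=("correct", "size"), accuracy=("correct", "mean"))
      .sort_values("accuracy")
)
print(subgroups)  # the worst-performing groups are candidates for a closer look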

Machine learning models learn from human data

Machine learning and artificial intelligence systems are praised for their super-human performance and decision-making ability, but they learn by extrapolating from past data. If the data used for training is biased, the model will produce biased decisions.

Let’s take the bail decision problem from above. Imagine that the dataset used to train the system is made up of defendants’ information and judges’ decisions. Trained on this dataset, the ML model will not learn the ground truth about who should be denied bail; it will learn to reproduce the judges’ decisions. In essence, the ML system is trained to mirror the biased decisions made by human judges.

Biases can be baked into ML models in many other ways as well, including underrepresentation in the data, skewed labels, and limited features. The prevalence of bias in user-generated data means that engineers should begin by assuming their models likely contain biases worth investigating.

Biases are often hidden and complex

While it may seem straightforward to determine whether an ML model is biased, it can be surprisingly difficult, if not impossible. This is primarily due to the following two reasons:

The low accuracy of intersectional groups, like blue triangles above, is hidden by the aggregate accuracy metrics of shape and color that appear more equal.

Intersectional bias. Detecting bias for simple groups is easy: split a dataset by a single feature, for example gender, and see how the groups perform relative to each other (e.g., the model has lower accuracy for women than for men). But bias is often only present in intersectional groups, for example Asian women born in Europe. If you look at all combinations of race, sex, and birthplace, you may have to compare hundreds, if not thousands, of groups, as the quick count below illustrates.
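As a rough back-of-the-envelope count of how quickly this blows up (the feature cardinalities below are made up, not taken from any particular dataset):

```python
import math

# Hypothetical number of distinct values for each demographic feature.
feature_cardinalities = {"race": 6, "sex": 2, "birthplace": 50}

# Every intersectional subgroup is one combination of feature values,
# so the count is the product of the cardinalities.
num_subgroups = math.prod(feature_cardinalities.values())
print(num_subgroups)  # 6 * 2 * 50 = 600 subgroups to audit
```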

Impossibility of fairness theorem. Additionally, what counts as a “fair” algorithm is a nuanced question, with research proposing more than 20 different definitions of the concept. To complicate the issue further, recent research proved a so-called impossibility theorem of fair machine learning: certain fairness definitions are mutually incompatible, so no model can satisfy all of them at once. In these situations, fairness becomes a societal question, and data scientists and the public have to make difficult tradeoffs.
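One way to see the tension, sketched below with made-up numbers: a standard identity ties a group’s false positive rate to its base rate, precision, and false negative rate. If two groups have different base rates, a model with equal precision and an equal false negative rate for both groups is forced to have different false positive rates.

```python
def implied_fpr(base_rate, ppv, fnr):
    """False positive rate implied by the identity
    FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * (1 - FNR),
    where p is the group's base rate."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

# Same precision and false negative rate for both groups,
# but different (made-up) base rates.
ppv, fnr = 0.7, 0.3
print(round(implied_fpr(0.5, ppv, fnr), 2))  # ~0.30
print(round(implied_fpr(0.3, ppv, fnr), 2))  # ~0.13
# The false positive rates cannot also be equal: some definition has to give.
```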

Example Use Case — Detecting Bias using FairVis

Below we walk through a simple example of how someone could use FairVis to audit a real-world model: the COMPAS recidivism predictor.

These snippets of FairVis show one of the many interaction paths a user can take to discover biases.

In (A), the user selects the features race and sex to generate all subgroups of their value combinations (Caucasian male, Caucasian female, African-American male, etc.). The user is worried about the false positive rate, the percentage of defendants who are wrongly denied bail, so in (B) she adds that metric to the interface and examines the strip plot. She selects the groups with the highest and lowest false positive rates, African-American males and Caucasian males respectively, and investigates them further in (C). Finally, she finds that the discrepancy in false positive rates is not fully explained by differing base rates, and decides to investigate further to address the bias.
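Here is a minimal sketch of that last check, again not FairVis’s actual code: the tiny DataFrame is an illustrative stand-in for the COMPAS data, with label = 1 meaning the person reoffended and pred = 1 meaning the model predicted reoffense (and hence bail would be denied).

```python
import pandas as pd

# Hypothetical audit frame; in practice this would be the full COMPAS table.
df = pd.DataFrame({
    "race":  ["African-American", "African-American", "Caucasian", "Caucasian"],
    "sex":   ["Male", "Male", "Male", "Male"],
    "label": [0, 1, 0, 0],   # 1 = actually reoffended
    "pred":  [1, 1, 0, 1],   # 1 = predicted to reoffend (bail denied)
})

flags = df.assign(
    false_positive=(df["pred"] == 1) & (df["label"] == 0),
    negative=df["label"] == 0,
)
stats = flags.groupby(["race", "sex"]).agg(
    base_rate=("label", "mean"),
    false_positives=("false_positive", "sum"),
    negatives=("negative", "sum"),
)
stats["fpr"] = stats["false_positives"] / stats["negatives"]

# If the FPR gap between two subgroups is much larger than their base-rate
# gap would explain, that discrepancy is worth investigating further.
print(stats[["base_rate", "fpr"]])
```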

Conclusion

FairVis was developed to make discovering biases in ML models more accessible to data scientists and the general public. In the past few years there has been a surge of visual analytics systems that help people understand and develop machine learning models. FairVis continues this line of research, focusing specifically on detecting bias in machine learning models.

We hope that FairVis is a step towards human-centric tools that allow us to develop responsible and ethical ML that benefits everyone.

You can try a demo of FairVis, or read the full paper to learn more.

Authors

This work was conducted at Georgia Tech by

Ángel Alexander Cabrera, a PhD student at Carnegie Mellon
Will Epperson, an undergraduate student at Georgia Tech
Fred Hohman, a PhD student at Georgia Tech
Minsuk Kahng, an assistant professor at Oregon State University
Jamie Morgenstern, an assistant professor at the University of Washington
Duen Horng (Polo) Chau, an associate professor at Georgia Tech

Acknowledgements

This work was made possible by NSF grants IIS-1563816, CNS-1704701, and TWC-1526254, a NASA Space Technology Research Fellowship, and a Google PhD Fellowship.
