The Fairness-Accuracy Pareto Front

25 Aug 2020 · Susan Wei, Marc Niethammer

Algorithmic fairness seeks to identify and correct sources of bias in machine learning algorithms. Confoundingly, ensuring fairness often comes at the cost of accuracy. In this work we provide formal tools for reconciling this fundamental tension in algorithmic fairness. Specifically, we use the concept of Pareto optimality from multi-objective optimization to seek the fairness-accuracy Pareto front of a neural network classifier. We demonstrate that many existing algorithmic fairness methods perform the so-called linear scalarization scheme, which has severe limitations in recovering Pareto optimal solutions. We instead apply the Chebyshev scalarization scheme, which is theoretically superior to the linear scheme at recovering Pareto optimal solutions while being no more computationally burdensome.

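To make the contrast concrete, below is a minimal sketch (not the authors' implementation) of the two scalarization schemes applied to an accuracy loss and a fairness loss. The specific fairness penalty, the trade-off weights, and the reference point `z_star` are illustrative assumptions; the paper's exact objectives and formulation may differ.

```python
import torch


def linear_scalarization(losses, weights):
    # Weighted sum sum_i w_i * f_i: common in fairness-regularized training,
    # but it can only reach points on the convex hull of the Pareto front.
    return torch.sum(weights * losses)


def chebyshev_scalarization(losses, weights, z_star):
    # Weighted Chebyshev max_i w_i * (f_i - z_i*): z_star is an (approximate)
    # ideal point; minimizing this can recover Pareto optimal points even on
    # non-convex regions of the front.
    return torch.max(weights * (losses - z_star))


# Two illustrative objectives: a cross-entropy (accuracy) loss value and a
# hypothetical fairness penalty (e.g., a demographic-parity gap).
accuracy_loss = torch.tensor(0.65)
fairness_loss = torch.tensor(0.12)
losses = torch.stack([accuracy_loss, fairness_loss])

weights = torch.tensor([0.7, 0.3])  # assumed trade-off weights
z_star = torch.zeros(2)             # ideal point, taken to be 0 here

loss_linear = linear_scalarization(losses, weights)
loss_chebyshev = chebyshev_scalarization(losses, weights, z_star)
print(float(loss_linear), float(loss_chebyshev))
```

In training, either scalarized loss would be backpropagated through the classifier in place of the plain accuracy loss; sweeping the weights (and, for Chebyshev, the reference point) traces out candidate points along the fairness-accuracy trade-off.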