
Learning to Walk Impartially on the Pareto Frontier of Fairness, Privacy, and Utility

Summary

Deploying machine learning (ML) models often requires both fairness and privacy guarantees. Both objectives typically come with notable trade-offs against the accuracy of the model, which is the primary focus of most applications. As a result, utility is prioritized while privacy and fairness constraints are treated as simple hyperparameters. In this work, we argue that by prioritizing one objective over the others, we disregard more favorable solutions in which some objectives could have been improved without degrading any other. We adopt impartiality as a design principle: ML pipelines should not favor one objective over another. We show theoretically that a common ML pipeline design, an unfairness-mitigation step followed by private training, is non-impartial. Then, starting from the two most common privacy frameworks for ML, we propose FairDP-SGD and FairPATE to train impartially specified private and fair models. Because impartially specified models recover the Pareto frontiers, i.e., the best achievable trade-offs between the objectives, we show that they yield significantly better trade-offs than models optimized for one objective and hyperparameter-tuned for the others. Our approach thus mitigates tensions between objectives that were previously found incompatible.
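The summary does not spell out FairDP-SGD's mechanics. As background on the private-training half of the pipeline only, the following is a minimal NumPy sketch of a standard DP-SGD update for logistic regression (per-example gradient clipping followed by Gaussian noise); it is not the paper's FairDP-SGD, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_multiplier=1.0, rng=None):
    """One standard DP-SGD update for logistic regression on a minibatch.

    Per-example gradients are clipped to L2 norm `clip`, summed, perturbed
    with Gaussian noise of scale noise_multiplier * clip, then averaged.
    (Background sketch only; not the paper's FairDP-SGD.)
    """
    rng = np.random.default_rng() if rng is None else rng
    preds = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid predictions
    per_example = (preds - y)[:, None] * X  # one gradient row per example
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / clip)
    noise = rng.normal(0.0, noise_multiplier * clip, size=w.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean

# Toy usage: 64 examples, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```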
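To make "recovering the Pareto frontier" concrete, here is a small, self-contained sketch that filters candidate models scored on (accuracy, privacy cost epsilon, fairness gap) down to the non-dominated ones, where accuracy is maximized and the other two are minimized. The tuple layout and names are illustrative assumptions, not the paper's interface.

```python
from typing import List, Tuple

# (accuracy, epsilon, fairness_gap): illustrative scoring of a trained model.
Model = Tuple[float, float, float]

def dominates(a: Model, b: Model) -> bool:
    """True if `a` is at least as good as `b` on every objective
    and strictly better on at least one."""
    acc_a, eps_a, gap_a = a
    acc_b, eps_b, gap_b = b
    no_worse = acc_a >= acc_b and eps_a <= eps_b and gap_a <= gap_b
    strictly_better = acc_a > acc_b or eps_a < eps_b or gap_a < gap_b
    return no_worse and strictly_better

def pareto_frontier(models: List[Model]) -> List[Model]:
    """Keep only the models that no other model dominates."""
    return [m for m in models if not any(dominates(o, m) for o in models)]

candidates = [
    (0.92, 8.0, 0.10),  # accurate but weak privacy
    (0.85, 2.0, 0.05),  # balanced
    (0.84, 2.0, 0.06),  # dominated by the model above
    (0.70, 1.0, 0.02),  # strong privacy/fairness, lower accuracy
]
print(pareto_frontier(candidates))  # drops only the dominated third model
```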

Conference Paper

Conference on Neural Information Processing Systems (NeurIPS)

Date published

2023-10-28

Date last modified

2024-12-02