
“How Biased are Your Features?”: Computing Fairness Influence Functions with Global Sensitivity Analysis

“How Biased are Your Features?”: Computing Fairness Influence Functions with Global Sensitivity Analysis.
Bishwamittra Ghosh, Debabrota Basu and Kuldeep S. Meel.
In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), June 2023.

Download

[PDF] 

Abstract

Fairness in machine learning has received significant attention due to its widespread application in high-stakes decision-making tasks. Unregulated machine learning classifiers can exhibit bias towards certain demographic groups in the data; thus, the quantification and mitigation of classifier bias is a central concern in fairness in machine learning. In this paper, we aim to quantify the influence of different features in a dataset on the bias of a classifier. To do this, we introduce the Fairness Influence Function (FIF), which breaks down bias into components attributable to individual features and to intersections of multiple features. The key idea is to represent existing group fairness metrics as the difference of scaled conditional variances of the classifier’s prediction and to apply a decomposition of variance according to global sensitivity analysis. To estimate FIFs, we instantiate an algorithm, FairXplainer, that applies a variance decomposition of the classifier’s prediction via local regression. Experiments demonstrate that FairXplainer captures FIFs of both individual and intersectional features, provides a better approximation of bias based on FIFs, shows higher correlation of FIFs with fairness interventions, and detects changes in bias due to fairness affirmative/punitive actions in the classifier.
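
As a rough sketch (not the paper's own notation), the global sensitivity analysis step referenced above is the standard Sobol–Hoeffding variance decomposition: for a prediction \hat{Y} = f(X_1, \dots, X_d) with (assumed) independent features,

  Var[\hat{Y}] = \sum_i V_i + \sum_{i<j} V_{ij} + \dots + V_{1 \dots d},
  where V_i = Var\big(E[\hat{Y} \mid X_i]\big) and V_{ij} = Var\big(E[\hat{Y} \mid X_i, X_j]\big) - V_i - V_j.

If a group fairness metric is expressed as a difference of scaled conditional variances of \hat{Y} across sensitive groups, say w_1 \, Var[\hat{Y} \mid A = a_1] - w_0 \, Var[\hat{Y} \mid A = a_0] (the weights w_0, w_1 are placeholders here), then decomposing each conditional variance as above yields one component per feature subset, and the difference of the corresponding components is that subset's FIF. The exact scaling and the local-regression-based estimator used by FairXplainer are detailed in the paper.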

BibTeX

@inproceedings{GBM23,
  author={Ghosh, Bishwamittra and Basu, Debabrota and Meel, Kuldeep S.},
  title={“How Biased are Your Features?”: Computing Fairness Influence Functions with Global Sensitivity Analysis},
  abstract={Fairness in machine learning has received significant attention due to its widespread 
  application in high-stakes decision-making tasks. Unregulated machine learning classifiers can 
  exhibit bias towards certain demographic groups in the data; thus, the quantification and mitigation 
  of classifier bias is a central concern in fairness in machine learning. In this paper, we aim 
  to quantify the influence of different features in a dataset on the bias of a classifier. To do 
  this, we introduce the Fairness Influence Function (FIF), which breaks down bias into components 
  attributable to individual features and to intersections of multiple features. The key idea 
  is to represent existing group fairness metrics as the difference of scaled conditional 
  variances of the classifier’s prediction and to apply a decomposition of variance according to 
  global sensitivity analysis. To estimate FIFs, we instantiate an algorithm, FairXplainer, that 
  applies a variance decomposition of the classifier’s prediction via local regression. Experiments 
  demonstrate that FairXplainer captures FIFs of both individual and intersectional features, 
  provides a better approximation of bias based on FIFs, shows higher correlation of FIFs 
  with fairness interventions, and detects changes in bias due to fairness affirmative/punitive actions in the classifier.
  },
  year={2023},
  month=jun,
  booktitle=FAccT,
  bib2html_pubtype={Refereed Conference},
  bib2html_rescat={Formal Methods 4 ML},
  bib2html_dl_pdf={https://arxiv.org/pdf/2206.00667.pdf},
}
