DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision

Abstract
Algorithmic fairness has become an important machine learning problem, especially for mission-critical Web applications. This work presents a self-supervised model, called DualFair, that can debias sensitive attributes such as gender and race from learned representations. Unlike existing models that target a single type of fairness, our model jointly optimizes for two fairness criteria, group fairness and counterfactual fairness, and hence makes fairer predictions at both the group and individual levels. Our model uses a contrastive loss to generate embeddings that are indistinguishable across protected groups, while forcing the embeddings of counterfactual pairs to be similar. It then uses self-knowledge distillation to maintain representation quality for downstream tasks. Extensive analysis over multiple datasets confirms the model’s validity and further shows the synergy of jointly addressing the two fairness criteria, suggesting the model’s potential value in fair intelligent Web applications.
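To make the three training signals in the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: the function name dualfair_loss, the weights w_cf, w_group, and w_kd, and the centroid-matching form of the group term are assumptions introduced here for illustration (the paper itself describes a contrastive formulation for the group-level objective).

```python
# Sketch of the three losses named in the abstract. All names are
# hypothetical; the group term is a simplified centroid-matching stand-in
# for the paper's group-level contrastive objective.
import torch
import torch.nn.functional as F

def dualfair_loss(z, z_cf, z_teacher, group, w_cf=1.0, w_group=1.0, w_kd=1.0):
    """z:         encoder embeddings of the original samples, shape (N, D)
    z_cf:      embeddings of the counterfactual versions, i.e. the same
               records with the sensitive attribute flipped, shape (N, D)
    z_teacher: embeddings from a frozen copy of the encoder, shape (N, D)
    group:     protected-group labels in {0, 1}, shape (N,); each batch is
               assumed to contain samples from both groups
    """
    z = F.normalize(z, dim=1)
    z_cf = F.normalize(z_cf, dim=1)

    # Counterfactual (individual-level) fairness: pull each embedding
    # toward its counterfactual pair by driving cosine similarity to 1.
    loss_cf = (1.0 - (z * z_cf).sum(dim=1)).mean()

    # Group-level fairness, simplified: make the two group centroids
    # coincide so protected groups are hard to distinguish in embedding space.
    mu0 = z[group == 0].mean(dim=0)
    mu1 = z[group == 1].mean(dim=0)
    loss_group = (mu0 - mu1).pow(2).sum()

    # Self-knowledge distillation: keep the debiased embeddings close to
    # the teacher's so task-relevant information survives debiasing.
    loss_kd = F.mse_loss(z, F.normalize(z_teacher, dim=1).detach())

    return w_cf * loss_cf + w_group * loss_group + w_kd * loss_kd

# Usage sketch (encoder and teacher are hypothetical nn.Module instances):
#   z, z_cf, z_t = encoder(x), encoder(x_cf), teacher(x)
#   loss = dualfair_loss(z, z_cf, z_t, group)
```

The detach() on the teacher output mirrors the usual stop-gradient in self-distillation setups, where the teacher is a frozen or slowly updated copy of the encoder rather than a separately trained model.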
Publisher
ACM
Issue Date
2023-04-30
Language
English
Citation
WWW '23: The ACM Web Conference 2023, pp. 3766-3774
DOI
10.1145/3543507.3583480
URI
http://hdl.handle.net/10203/314917
Appears in Collection
CS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
