Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization

Abstract

Language models frequently inherit societal biases from their training data. Numerous techniques have been proposed to mitigate these biases during both the pre-training and fine-tuning stages. However, fine-tuning a pre-trained debiased language model on a downstream task can reintroduce biases into the model. Additionally, existing debiasing methods for downstream tasks either (i) require labels of protected attributes (e.g., age, race, or political views) that are often not available or (ii) rely on indicators of bias such as gender-specific words, which restricts their applicability to gender debiasing. To address this, we introduce a novel debiasing regularization technique based on the class-wise variance of embeddings. Crucially, our method does not require attribute labels and targets any attribute, thus addressing the shortcomings of existing debiasing methods. Our experiments on encoder language models and three datasets demonstrate that our method outperforms existing strong debiasing baselines that rely on target attribute labels while maintaining performance on the target task.
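To give a concrete sense of the idea, the sketch below shows one way a class-wise low variance regularizer could be implemented in PyTorch: it penalizes the per-dimension variance of embeddings within each task class and adds this penalty, weighted by a hypothetical coefficient lambda_reg, to the standard task loss. The model interface (returning both embeddings and logits), the function names, and the exact weighting are illustrative assumptions and may differ from the formulation in the paper.

    import torch
    import torch.nn.functional as F

    def class_wise_variance(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """Mean per-dimension variance of embeddings within each task class."""
        penalties = []
        for c in labels.unique():
            class_emb = embeddings[labels == c]
            if class_emb.size(0) > 1:  # variance needs at least two samples
                penalties.append(class_emb.var(dim=0, unbiased=False).mean())
        if not penalties:
            return embeddings.new_zeros(())
        return torch.stack(penalties).mean()

    def training_step(model, batch, lambda_reg: float = 0.1) -> torch.Tensor:
        """Task loss plus a penalty pushing same-class embeddings closer together."""
        embeddings, logits = model(batch["input_ids"], batch["attention_mask"])
        task_loss = F.cross_entropy(logits, batch["labels"])
        reg_loss = class_wise_variance(embeddings, batch["labels"])
        return task_loss + lambda_reg * reg_loss

Because the penalty is computed from the downstream task's own class labels, no protected-attribute labels or bias word lists are needed, in line with the unlabeled debiasing setting described in the abstract.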


Citation

Shahed Masoudian, Markus Frohmann, Navid Rekab-saz, Markus Schedl
Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10932–10938, 2024.

BibTeX

@inproceedings{Masoudian2024LVR_EMNLP_2024,
    title = {Unlabeled Debiasing in Downstream Tasks via Class-wise Low Variance Regularization},
    author = {Masoudian, Shahed and Frohmann, Markus and Rekab-saz, Navid and Schedl, Markus},
    booktitle = {Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
    publisher = {Association for Computational Linguistics},
    location = {Miami, Florida, USA},
    url = {https://aclanthology.org/2024.emnlp-main.612},
    pages = {10932--10938},
    month = {Nov},
    year = {2024}
}