Abstract
Societal biases are reflected in large pre-trained language models and in their versions fine-tuned on downstream tasks. Common in-processing bias mitigation approaches, such as adversarial training and mutual information removal, introduce additional optimization criteria and update the model to reach a new debiased state. In practice, however, end-users and practitioners might prefer to switch back to the original model, or to apply debiasing only to a specific subset of protected attributes. To enable this, we propose a novel modular bias mitigation approach consisting of stand-alone, highly sparse debiasing subnetworks, where each debiasing module can be integrated into the core model on demand at inference time. Our approach draws on the concept of diff pruning and proposes a novel training regime adaptable to various representation disentanglement optimizations. We conduct experiments on three classification tasks with gender, race, and age as protected attributes. The results show that our modular approach, while maintaining task performance, matches or improves on the effectiveness of bias mitigation compared with baseline fine-tuning. In particular, on a two-attribute dataset, our approach with separately learned debiasing subnetworks demonstrates effective use of either or both subnetworks for selective bias mitigation.
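The core idea of on-demand modular debiasing can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation: the weight names, modules, and values below are hypothetical, and each "debiasing module" is modeled simply as a sparse diff vector (in the spirit of diff pruning) that can be added to the frozen core weights at inference time, alone or in combination, and dropped again to recover the original model.

```python
def apply_modules(core_weights, modules):
    """Return a copy of the core weights with the selected sparse
    diff modules added; the core model itself is left untouched."""
    weights = dict(core_weights)
    for module in modules:
        for name, delta in module.items():
            weights[name] = weights[name] + delta
    return weights

# Hypothetical core weights and two separately learned sparse modules,
# each touching only a small subset of parameters.
core = {"w0": 1.0, "w1": -0.5, "w2": 2.0}
gender_module = {"w0": 0.1}
age_module = {"w2": -0.3}

# Select either or both attribute-removal modules on demand.
debiased_gender = apply_modules(core, [gender_module])
debiased_both = apply_modules(core, [gender_module, age_module])
```

Because the modules are stand-alone and sparse, switching back to the original model simply means running inference with `core` and no modules applied.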
Citation
Lukas Hauzenberger, Shahed Masoudian, Deepak Kumar, Markus Schedl, Navid Rekab-saz
Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks
Findings of the Association for Computational Linguistics: ACL 2023, 31st: 6192–6214, doi:10.18653/v1/2023.findings-acl.386, 2023.
BibTeX
@inproceedings{Hauzenberger2023ModularDebiasing_ACLFindings_2023,
  title     = {Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks},
  author    = {Hauzenberger, Lukas and Masoudian, Shahed and Kumar, Deepak and Schedl, Markus and Rekab-saz, Navid},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2023},
  publisher = {Association for Computational Linguistics},
  location  = {Toronto, Canada},
  doi       = {10.18653/v1/2023.findings-acl.386},
  url       = {https://aclanthology.org/2023.findings-acl.386},
  volume    = {31st},
  pages     = {6192--6214},
  month     = {July 2023},
  year      = {2023}
}