Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 2T6-OS-5c-01
Unlearning Bias and Toxicity in Large Language Models
*Huimin LU, Masaru ISONUMA, Junichiro MORI, Ichiro SAKATA
Abstract

Large language models (LLMs) often inherit biases from their vast training corpora. Traditional debiasing methods, while effective to some extent, do not completely eliminate memorized biases and toxicity in LLMs. In this paper, we introduce a novel approach to debiasing LLMs based on unlearning techniques, performing gradient ascent on hate speech against minority groups, i.e., minimizing the likelihood of biased or toxic content. Specifically, we propose a masked language modeling unlearning technique, which unlearns only the harmful part of a text. This method enables LLMs to selectively forget and disassociate from biased and harmful content. Experimental results demonstrate the effectiveness of our approach in diminishing bias while maintaining language modeling ability. Surprisingly, the results also unveil an unexpected potential for cross-domain transfer unlearning: debiasing one form of bias (e.g., gender) may help mitigate others (e.g., race and religion).
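
The following is a minimal sketch of the gradient-ascent unlearning idea described in the abstract, assuming a PyTorch causal LM from Hugging Face transformers. The model name, learning rate, and the harmful_token_mask argument are illustrative assumptions, not the authors' released implementation; the selective label masking approximates the paper's idea of unlearning only the harmful part of a text.

```python
# Hedged sketch: gradient-ascent unlearning on the harmful span of a text.
# Assumes a causal LM; "gpt2" and the hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def unlearn_step(text: str, harmful_token_mask: torch.BoolTensor) -> float:
    """One gradient-ascent step: raise the loss (i.e., lower the likelihood)
    on tokens flagged as harmful, leaving all other tokens untrained.

    harmful_token_mask is a hypothetical (1, seq_len) boolean tensor marking
    the harmful span; producing it (e.g., from a hate-speech annotation) is
    outside this sketch.
    """
    enc = tokenizer(text, return_tensors="pt")
    labels = enc.input_ids.clone()
    # Tokens labeled -100 are ignored by the cross-entropy loss in
    # transformers, so only the harmful tokens contribute to the loss.
    labels[~harmful_token_mask] = -100
    loss = model(**enc, labels=labels).loss
    (-loss).backward()  # negating the loss turns descent into ascent
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In this sketch, selectivity comes entirely from the label mask: maximizing the loss only on the harmful tokens reduces their likelihood while leaving the surrounding benign context untouched, which is one way to preserve general language modeling ability during unlearning.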

© 2024 The Japanese Society for Artificial Intelligence