From Pink and Blue to a Rainbow Hue! Defying Gender Bias through Gender Neutralizing Text Transformations
Gopendra Vikram Singh, Soumitra Ghosh, Neil Dcruze, Asif Ekbal
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
AI for Good. Pages 7447-7455.
https://doi.org/10.24963/ijcai.2024/824
In an era where language biases contribute to societal inequalities, this research focuses on gender bias in textual data, with profound implications for promoting inclusivity and equity, aligning with the United Nations Sustainable Development Goals (SDGs) and upholding the principle of Leave No One Behind (LNOB). Leveraging advances in artificial intelligence, the study introduces the GEnder-NEutralizing Text Transformation (GENETT) framework, which addresses gender bias in text through auto-encoders, vector quantization, and Neutrality-Infused Stylization. Furthermore, we present a first-of-its-kind corpus of GEnder Neutralized REvisions (GENRE), crafted from gender-stereotyped originals. The corpus offers multifaceted utility as a resource for diverse downstream tasks in gender-bias analysis. Extensive experimentation on GENRE demonstrates the superiority of the proposed model over established baselines and state-of-the-art methods. The code and dataset are available at (1) https://www.iitp.ac.in/~ai-nlp-ml/resources.html#GNR and (2) https://github.com/Soumitra816/GNR.
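For illustration only: the abstract mentions an auto-encoder with vector quantization as part of the GENETT framework, so the sketch below shows a minimal vector-quantization bottleneck of the kind commonly placed between a text encoder and decoder. All class names, hyper-parameters, and the PyTorch implementation choices here are assumptions for exposition, not details taken from the paper.

```python
# Minimal sketch of a VQ-VAE-style quantization bottleneck (assumed design,
# not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Maps continuous encoder states to their nearest codebook vectors."""
    def __init__(self, num_codes: int = 512, code_dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment loss

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, seq_len, code_dim) latent states from a text encoder
        flat = z_e.reshape(-1, z_e.size(-1))             # (B*T, D)
        dists = torch.cdist(flat, self.codebook.weight)  # (B*T, K) distances
        codes = dists.argmin(dim=-1)                     # nearest code indices
        z_q = self.codebook(codes).view_as(z_e)          # quantized latents

        # Codebook + commitment losses, with a straight-through estimator
        # so gradients flow back to the encoder.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, codes.view(z_e.shape[:-1])

if __name__ == "__main__":
    # Quantize dummy encoder states before handing them to a decoder.
    vq = VectorQuantizer()
    z_e = torch.randn(2, 16, 256)  # fake states for two sentences of 16 tokens
    z_q, vq_loss, code_ids = vq(z_e)
    print(z_q.shape, vq_loss.item(), code_ids.shape)
```

In such a setup, the discrete codes act as a compressed content representation on which a neutrality-oriented stylization module could operate before decoding; how GENETT realizes this is described in the paper itself.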
Note: Our research focuses on understanding cyber harassment conversations, especially in under-researched areas. Non-binary cases are excluded due to existing dataset limitations, not a lack of sensitivity. We strive for inclusivity and plan to address this in future research with suitable datasets.
Keywords:
Multidisciplinary Topics and Applications: General
AI Ethics, Trust, Fairness: General