2024 Volume 2023 Issue AGI-026 Pages 41-49
The social biases inherent in language models trained on large corpora have become a problem, leading to the development of datasets for evaluating various social biases, such as gender and racial biases. However, although many datasets were created for measuring social bi- ases, they are limited to social attributes regarding human beings. This study develops a dataset for evaluating discriminatory bias towards nonhuman animals, namely speciesist bias. By refer- encing existing English question answering (QA) datasets, we construct a Japanese QA dataset to assess speciesist biases in Japanese large language models. The experimental results reveal a tendency for some of these models to exhibit speciesist bias.