The purpose of this paper is to examine the ethical issues that arise when AI is used to cope with bullying. First, we review domestic and international guidelines for AI and identify their limitations. Based on this, we then argue that AI used in education must meet a high standard of fairness, transparency, and accountability, and that its developers and users must pay due attention to the relationship between technology and society. With these preliminary considerations, we go on to discuss three issues that we think must be taken into account when using AI to cope with bullying: the possibility that some students may be more likely to be identified as perpetrators, the need for strict accountability for judgements made by AI, and the need to address the fact that the definition of bullying has changed over time.