Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
34th (2020)
Session ID : 2Q4-OS-13a-01

Can AI Discriminate in a Morally Bad Way?
Consideration on the Case of COMPAS
*Haruka MAEDA
Abstract

The aim of this paper is to explain how algorithms can morally discriminate against humans. Discrimination by algorithmic systems has become a concern in many ethical guidelines for Artificial Intelligence. However, these guidelines do not address the nature of such discrimination or what makes it morally bad. In addition, existing theories of discrimination presuppose that the discriminating individual is a responsible subject. Given that machine learning (a type of algorithm) is known for its unpredictable behavior, treating the algorithm itself as a subject can be a useful approach. Hence, I analyze the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) using Hellman's account, which locates the badness of discrimination in the actor's behavior. COMPAS is a typical example of a program that produces unintended automated discrimination. This analysis offers a way to assess the degree to which decisions made by algorithms are biased and discriminatory.

© 2020 The Japanese Society for Artificial Intelligence