Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 1K5-OS-11b-04

What's wrong with treating a large language model as an agent?
*Katsunori MIYAHARA
Abstract

In June 2022, a Google engineer claimed that Google's Language Model for Dialogue Applications (LaMDA) is sentient and deserves to be treated as an agent. Google rejected the claim, and many supported the decision. There are several reasons to contest the ascription of agency to large language models (LLMs). (1) Intrinsic reasons: LLMs cannot be conscious or intentional. (2) Consequential reasons: treating LLMs as agents can divert public attention away from more important issues. (3) Reasons concerning individual well-being: treating LLMs as agents can aggravate an individual's social isolation. I examine each consideration in turn and argue that it is harder than one might think to conclude decisively that we should not treat LLMs as agents. Drawing on extant debates on the moral status of robots and on fictophilia (love for fictional characters), I also specify the key issues involved in assessing the legitimacy of ascribing agency to LLMs.

© 2023 The Japanese Society for Artificial Intelligence