International Journal of Activity and Behavior Computing
Online ISSN : 2759-2871
Analysis of Changes in Attitudes toward News Comments Caused by Knowing That the Comments Were Generated by a Large Language Model
Nanase Mogi, Megumi Yasuo, Yutaka Morino, Mitsunori Matsushita
Open Access

2025, Vol. 2025, No. 3, pp. 1-13

Abstract
This study examined individuals' attitudes toward texts generated by large language models (LLMs), such as social networking service posts and news comments. Recently, the number of people viewing LLM-generated texts has increased. Because an LLM can generate natural texts that are almost indistinguishable from those written by humans, there is concern that such texts may cause problems, such as maliciously influencing public opinion. To evaluate how LLM-generated texts are received, we conducted an experiment based on the hypothesis that knowing a text was generated by an LLM influences user acceptance. In the experiment, participants were shown news comments that included AI-generated comments. We controlled whether participants were aware that a comment had been generated by an LLM, and assessed their evaluations from four perspectives: familiarity, reliability, empathy, and informativeness. The results showed that a generated comment imitating the opinion of an expert rose in rank when it was disclosed that the comment had been generated by an LLM. In particular, reliability and informativeness were sensitive to this disclosure, whereas familiarity and empathy were not. This suggests that expert labeling significantly enhances perceived reliability, raising concerns that news viewers could be implicitly guided toward a particular opinion.
© 2025 Author

This article is licensed under a Creative Commons Attribution 4.0 International License.
https://creativecommons.org/licenses/by/4.0/deed.ja