情報法制研究
Online ISSN : 2432-9649
Print ISSN : 2433-0264
ISSN-L : 2433-0264
Special Issue 1: Generative AI and Human Rights
Discriminatory Statements by AI?
金 尚均

2024, Vol. 16, pp. 002-013

Abstract
 AI technologies are trained on large amounts of linguistic data, which are not completely accurate or reflective of the real world. If the data on which an AI is trained contain errors, biases, or prejudices, the AI may be affected by them. Moreover, as a result of learning from conversations with humans, an AI may generate information and context that are not present in the training data, which can lead to hallucination.
Because of this uncontrollability of AI technology, people are forced to face unknown risks. When people perceive unknown risks, insecurity spreads and the desire for safety increases in society. In the relationship between risk control and law, discussion concentrates on using law to regulate risks that carry the possibility or danger of causing great harm.
If an AI makes a statement that is defamatory or insulting, the AI is an instrument of human beings, and its human creator may be charged with the crime. Even if the creator has no such intention, the creator may be held liable for negligent omission, as a breach of the creator's duty of supervision, for the infringement of or risk to legal interests caused by the AI. Likewise, if a corporation or other legal entity fails to comply with the law when applying AI technology, its legal liability becomes an issue.
This article therefore examines AI in terms of ex post and ex ante regulation and criminal responsibility.
© 2024 情報法制学会