AI technologies are trained on large amounts of linguistic data, which is not completely accurate or reflective of the real world. If the data on which an AI is trained contains errors, biases, or prejudices, the AI may reproduce them. Moreover, as a result of learning from conversations with humans, AI may generate information and context that is not present in the training data, which can lead to hallucination.
Because AI technology is not fully controllable, people are forced to confront unknown risks. When people perceive unknown risks, insecurity becomes more prevalent and the desire for safety in society increases. In the relationship between risk control and law, discussion therefore concentrates on using law to regulate risks that carry the possibility or danger of causing large-scale harm.
If an AI makes a statement that is defamatory or insulting, the AI serves as an instrument of human beings, and its creator, as a human being, may be charged with the crime. Even if the creator has no such intent, the creator may be held liable for negligent omission, as a failure of the duty of supervision, for the infringement of or risk to legal interests caused by the AI. Likewise, if a corporation or other legal entity fails to comply with the law when applying AI technology, its legal liability becomes an issue.
Therefore, we examine AI in terms of ex post and ex ante regulation and criminal responsibility.