Abstract
Taking recent regulatory trends and prior research into account, this paper uses AI technology as a case study to outline the AI Act, Japan's basic law on AI, and to identify essential issues in the administration of uncertainty and in the legal system. The paper also categorizes the "risk" and "uncertainty" of AI technology, and it examines the significance, challenges, and applicability of "standardizing" legal precautionary measures for the risks of advanced science and technology that involve "complex uncertainties," AI technology among them. Specifically, the paper proposes the problem structure of "strategic uncertainty" and four new "standardizations" of legal precautionary measures, one for each category: "standardization of management," "standardization of regulation," "standardization of prevention," and "standardization of precaution." In particular, it offers a new perspective on the importance, for policy design, of the criteria (boundaries) that draw the lines between these categories and of the "fluctuations" among them.