The influence of AI on management and labor within organizations is steadily expanding, and the risks and concerns associated with AI are accordingly being discussed as serious societal issues. However, many of these narratives of so-called AI perils rest on misconceptions that diverge from empirical reality. Against this backdrop, this paper first points out that the concept of "intelligence" underpinning such narratives is often shaped by the notion of a general intelligence factor (the g factor), and then introduces the theory of multiple intelligences as a more accurate model of the structure of human intelligence. The paper next examines four domains, namely "pattern recognition," "empathy," "individualized response," and "creativity," that are generally regarded as areas of human superiority yet in which AI often demonstrates greater capability. Finally, it discusses three key roles that humans should fulfill in order to realize a society in which humans and AI collaborate and coexist harmoniously: "supervisor," "producer," and "hope-holder." In particular, it explores the essential human responsibility of continuing to uphold "hope" even in adverse circumstances, presenting this as a critical countermeasure against the decision-making risks that arise from the so-called "horizon effect," whereby AI cannot appropriately assess events that lie beyond the bounds of its explorable domain.