1995 Volume 31 Issue 3 Pages 292-298
This paper investigates responsibility allocation between human and computer in the supervisory control of large, complex systems. Strategies for responsibility allocation are analyzed probabilistically, taking into account the human's distrust of a computerized warning system, inappropriate situation awareness, and process dynamics. We show that it is not wise to adhere rigidly to the principle that "a human locus of control is required," even though this is recognized as an essential principle of human-centered automation. It is proven that responsibility allocation between human and computer should not be fixed but should change dynamically and flexibly depending on the situation, which suggests the need for a new framework for human-centered automation, especially when process safety is a factor.