2026, Vol. 62, No. 1, pp. 31-40
This study investigates the safety evaluation of service robots in public spaces, addressing not only traditional physical risks but also broader concerns such as algorithmic bias and unfair treatment of users. Conventional safety assessment methods focus primarily on mitigating physical risks. Service robots, however, interact with diverse users and may inadvertently exhibit biased behaviors that such traditional approaches are ill-suited to detect. To analyze these issues, we conduct a virtual case study of a library guide robot, assessing both collision risks and fairness concerns using STAMP/STPA, a hazard analysis method well suited to interactive systems. The results indicate that this approach successfully identifies risk factors contributing to biased behavior.