In this study, a next-generation
Kansei-equipped agricultural robot is proposed, with a focus on controlling the robot through body language.
Kansei agri-robots are defined as agricultural robots equipped with
Kansei communication and robotic functions.
Kansei communication is defined as "two-way communication between humans and robots, or other machines, that supplements or supersedes traditional one-way (human-to-machine) input and operation". In
Kansei communication, humans and machines interact in ways that take into account thoughts, emotions, etc. Thus,
Kansei robotics can be defined as "robotics technology in which
Kansei communication has been made possible". In this paper, the first step towards
Kansei communication was the extraction of a human worker's outline from the overall image observed by the robot. This was made possible in a closed environment using an algorithm combining HSV color extraction and background subtraction. Methods of controlling the
Kansei agri-robot using body language and gestures extracted from still photos of agricultural workers were then investigated. For this research, a LEGO Mindstorms robot equipped with computer vision was used. Commands instructing the robot to stop, advance, reverse, etc., became possible based on the ratio of the extracted worker area to its convex hull area.
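The segmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes images are lists of rows of `(r, g, b)` tuples, and the hue band, saturation floor, and difference threshold are illustrative parameters, not values from the paper.

```python
import colorsys

def worker_mask(frame, background, hue_range=(0.05, 0.15),
                sat_min=0.4, diff_thresh=40):
    """Binary mask combining two cues, as in the abstract's algorithm:
    a pixel is foreground (1) only if it both falls in the target HSV
    color band AND differs from the background frame by more than a
    threshold (logical AND of color extraction and background
    subtraction). All thresholds here are assumed, not from the paper."""
    mask = []
    for row_f, row_b in zip(frame, background):
        mask_row = []
        for (r, g, b), (rb, gb, bb) in zip(row_f, row_b):
            # HSV color cue: hue within band, saturation high enough
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            color_hit = hue_range[0] <= h <= hue_range[1] and s >= sat_min
            # Background-subtraction cue: pixel changed vs. background
            moved = abs(r - rb) + abs(g - gb) + abs(b - bb) > diff_thresh
            mask_row.append(1 if (color_hit and moved) else 0)
        mask.append(mask_row)
    return mask
```

In a closed environment with a static background and a worker in distinctively colored clothing, the AND of the two cues suppresses both background clutter (rejected by the difference cue) and moving non-worker objects (rejected by the color cue).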
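The area-ratio cue for gesture commands can be sketched as follows. Again a hypothetical illustration: the convex hull is computed with Andrew's monotone chain, and the thresholds and their mapping to stop/advance/reverse commands are assumptions for demonstration, not the paper's calibrated values.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Polygon area via the shoelace formula."""
    a = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        a += x1 * y2 - x2 * y1
    return abs(a) / 2

def command_from_mask(mask):
    """Map the worker-area / hull-area ratio to a robot command.
    A compact silhouette (arms down) fills most of its hull; spread
    arms leave the hull mostly empty. Thresholds are illustrative."""
    pixels = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not pixels:
        return "stop"
    area = hull_area(convex_hull(pixels))
    if area == 0:
        return "stop"
    ratio = len(pixels) / area
    if ratio > 1.2:
        return "advance"   # compact posture, hull mostly filled
    if ratio > 0.7:
        return "reverse"   # partially extended arms
    return "stop"          # widely spread arms, sparse hull
```

Because the ratio depends only on the silhouette's shape, not its absolute size, the same thresholds work at different distances between worker and robot, which is convenient in a field setting.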