Abstract
Patients with brain dysfunction suffer from unstable cognitive states, typically presenting as loss of attention, concentration, and memory. Such symptoms cause difficulties in their daily lives, while rehabilitation is often time-consuming and costly. The most efficient rehabilitation for these patients is performing daily activities, in the case of this study cooking, which requires a certain degree of attention and involves various potentially dangerous situations. This study recorded cooking behaviors in egocentric vision, applied computer vision algorithms, and proposed a model to detect dangerous moments. The model processed frames of the cooking video with YOLOv3 and OpenPose to detect utensils and hands, respectively. The Lucas-Kanade optical flow algorithm was also implemented to detect loss of concentration, since egocentric vision provides information about head motion. The model was trained on a video dataset recorded in egocentric vision in a kitchen to measure its accuracy and performance. Two videos were tested, and the results showed that this approach captures more detail and diversity than previous methods. However, its accuracy remained in question and further optimization was required.
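As an illustration of the concentration-loss step mentioned above, the following is a minimal sketch, assuming OpenCV, of how sparse Lucas-Kanade optical flow between consecutive egocentric frames can serve as a proxy for head motion. The feature-selection parameters, window size, and threshold are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def head_motion_magnitude(prev_gray, curr_gray, max_corners=200):
    """Median displacement of tracked features between two grayscale frames."""
    # Pick features to track in the previous frame
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return 0.0
    # Track the features into the current frame with pyramidal Lucas-Kanade
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    if not ok.any():
        return 0.0
    old = pts[ok].reshape(-1, 2)
    new = nxt[ok].reshape(-1, 2)
    # In egocentric video, the median feature displacement approximates head motion
    return float(np.median(np.linalg.norm(new - old, axis=1)))

def possible_concentration_loss(motion_history, window=30, threshold=15.0):
    """Flag a window whose mean head motion exceeds a (hypothetical) threshold."""
    recent = motion_history[-window:]
    return len(recent) == window and float(np.mean(recent)) > threshold
```

In practice, head_motion_magnitude would be computed for each pair of consecutive frames alongside the YOLOv3 utensil detections and OpenPose hand keypoints, and possible_concentration_loss would be evaluated over the resulting motion history; how these cues are actually combined into a danger score is specific to the proposed model.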