In the manufacturing industry, knowledge transfer from experts to beginners is becoming an increasingly important issue because of the decreasing number of experts. To support knowledge transfer, various support systems have been developed. However, it is difficult to present knowledge that matches the user's skill level. Meanwhile, techniques for estimating a user's skill level from biological information such as electroencephalography (EEG), heart rate variability (HRV), and gaze have been developed. In this study, we focused on gaze, which is easier to measure than EEG and HRV. Previous studies have reported that the type of work and the skill level can be estimated from gaze features using machine learning. However, the gaze features used differ depending on the task. In this study, we focused on gaze and pupil size to evaluate the differences among experts, intermediates, and beginners in a task of searching for vortices in fluid-simulation images. Comparing the gaze and pupil size of 8 experts, 8 intermediates, and 8 beginners, we found significant differences in fixation duration, the number of fixations, and the number of gaze movements. Although there was no significant difference in pupil size, an indicator of cognitive load, between experts and intermediates, both groups had larger pupil sizes than beginners. Experts search for vortices with high cognitive load over long periods with many gaze movements; intermediates search with high cognitive load over short periods with few gaze movements; and beginners search with low cognitive load over short periods with few gaze movements. We trained a Random Forest classifier on fixation duration, number of fixations, number of gaze movements, and pupil size, and classified skill level with an accuracy of 83.1 ± 11.6% using the features with high importance. These results imply that gaze reflects differences in users' skill levels and suggest the prospect of presenting appropriate knowledge to users of design support systems based on gaze measurements.
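As a rough illustration of the classification step described above, the following is a minimal sketch assuming scikit-learn; the placeholder data, feature names, class labels, and cross-validation setup are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: Random Forest skill-level classification from gaze features.
# Assumes scikit-learn; data here are random placeholders, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per sample, columns are the four
# features named in the abstract (fixation duration, number of fixations,
# number of gaze movements, pupil size). Replace with real measurements.
n_samples = 24  # e.g., 8 experts + 8 intermediates + 8 beginners
X = rng.normal(size=(n_samples, 4))
y = np.repeat([0, 1, 2], 8)  # 0 = beginner, 1 = intermediate, 2 = expert

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=4)
print(f"accuracy: {scores.mean():.3f} ± {scores.std():.3f}")

# Feature importances indicate which gaze features drive the classification,
# analogous to selecting "the features with high importance" in the abstract.
clf.fit(X, y)
print(dict(zip(
    ["fixation_duration", "n_fixations", "n_gaze_movements", "pupil_size"],
    clf.feature_importances_.round(3),
)))
```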