Medical Imaging and Information Sciences
Online ISSN : 1880-4977
Print ISSN : 0910-1543
ISSN-L : 0910-1543
Original Article
Classification of Gaze Movement on Interpretation of Mammogram using Deep Learning
Eiichiro OKUMURA, Hideki KATO, Tsuyoshi HONMOTO, Nobutada SUZUKI, Erika OKUMURA, Takuji HIGASHIGAWA, Shigemi KITAMURA, Jiro ANDO, Takayuki ISHIDA

2025 Volume 42 Issue 3 Pages 41-48

Abstract

There have been many reports on gaze movements during mammogram interpretation measured with eye-tracking devices. If an automatic gaze-movement pattern recognition system could warn radiologists when their gaze movements do not match their interpretation of the image, it could assist radiologists. Therefore, as an initial step in this study, we aimed to verify whether heat-map videos indicating gaze movement and image interpretation can be correctly classified using R(2+1)D, Two-Stream I3D, and SlowFast. We obtained heat-map videos from two expert mammography radiologists and 19 mammography technologists on 8 abnormal and normal MLO and CC mammograms. The classification accuracy for the MLO and CC heat-map videos was calculated using 5-fold cross-validation. Among the three deep learning models for video classification, the highest accuracy was 0.69±0.03, achieved by SlowFast with 32 frames. The sensitivity of the MLO videos for TP, TN, and FP+FN was 70.0%, 72.2%, and 59.3%, respectively; the sensitivity of the CC videos was 70.6%, 84.2%, and 54.4%, respectively. In future work, the number of gaze-movement videos should be increased to improve accuracy.
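The 5-fold cross-validation used for evaluation can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the `predict_fn` callback stands in for training and running one of the video models (R(2+1)D, Two-Stream I3D, or SlowFast), which is not shown, and the fold-splitting details are assumptions.

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        end = start + fold_size + (1 if i < remainder else 0)
        folds.append(idx[start:end])
        start = end
    return folds

def cross_validate(labels, predict_fn, k=5, seed=0):
    """Per-fold accuracy: each fold is held out once while the rest train.

    predict_fn(train_idx, test_idx) is a hypothetical callback that trains a
    video classifier on the training indices and returns predicted labels
    for the test indices, in order.
    """
    folds = kfold_indices(len(labels), k, seed)
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        predictions = predict_fn(train, test)
        correct = sum(1 for j, p in zip(test, predictions) if labels[j] == p)
        accuracies.append(correct / len(test))
    return accuracies
```

The reported 0.69±0.03 would then correspond to the mean and standard deviation of the list returned by `cross_validate`.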

© by Japan Society of Medical Imaging and Information Sciences