The Transactions of Human Interface Society
Online ISSN : 2186-8271
Print ISSN : 1344-7262
ISSN-L : 1344-7262
Volume 23, Issue 1
Displaying 1-13 of 13 articles from this issue
Papers on Special Issue Subject “Gaze Interface”
  • Junichi Akita
    Article type: Short Note
    2021 Volume 23 Issue 1 Pages 1-4
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    Eye tracking, or detecting where the user is looking, is expected to become a new type of user interface, one that also covers the phenomenon of rapid eye movement known as the saccade. However, real-time tracking of saccades is difficult with conventional image processing systems, whose processing time and latency are too long for the speed of a saccade. In this paper, we describe the design of a high-speed, low-latency eye tracking camera using a custom-designed CMOS image sensor, present its evaluation results, and describe the design of an improved CMOS image sensor. (A small velocity-threshold sketch of saccade detection follows this entry.)
    Download PDF (6595K)
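    The following is a minimal Python sketch of the kind of velocity-threshold saccade detection that such a high-speed, low-latency camera enables. The sampling rate, threshold, and toy gaze trace are assumptions chosen for illustration and are not taken from the paper.

    ```python
    import numpy as np

    SAMPLE_RATE_HZ = 1000            # assumed sensor frame rate
    VELOCITY_THRESHOLD_DEG_S = 300   # common heuristic threshold for saccades

    def detect_saccades(gaze_deg: np.ndarray) -> np.ndarray:
        """Mark samples whose angular velocity exceeds the threshold.
        gaze_deg has shape (n_samples, 2), in degrees of visual angle."""
        dt = 1.0 / SAMPLE_RATE_HZ
        velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / dt
        mask = np.zeros(len(gaze_deg), dtype=bool)
        mask[1:] = velocity > VELOCITY_THRESHOLD_DEG_S
        return mask

    # Toy trace: a fixation, a fast 10-degree jump, then another fixation.
    trace = np.concatenate([
        np.zeros((50, 2)),
        np.linspace([0.0, 0.0], [10.0, 10.0], 20),
        np.full((50, 2), 10.0),
    ])
    print(detect_saccades(trace).sum(), "samples flagged as saccadic")
    ```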
  • Toshiya Isomoto, Shota Yamanaka, Buntarou Shizuki
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 5-18
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    We demonstrate a gaze-based command activation technique that is robust against unintentional command activations, using a sequence consisting of a dwell and a gesture. As the gesture, we adopted a simple two-level stroke, which consists of a sequence of two orthogonal strokes. To achieve robustness against unintentional command activations, we designed and fine-tuned a gesture detection system based on how users move their gaze, as revealed through three experiments. (A rough sketch of the dwell-then-gesture idea follows this entry.)
    Download PDF (3840K)
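    A rough Python sketch of the general dwell-then-gesture idea, not the authors' actual detector: the dwell radius and duration, stroke lengths, and sample format below are assumptions for illustration only.

    ```python
    from dataclasses import dataclass

    DWELL_MS = 400        # assumed minimum dwell duration
    DWELL_RADIUS_PX = 30  # assumed radius within which gaze counts as dwelling
    MIN_STROKE_PX = 60    # assumed minimum stroke length

    @dataclass
    class GazeSample:
        t_ms: float
        x: float
        y: float

    def detect_activation(samples: list[GazeSample]) -> bool:
        """Rough check: a dwell near the first sample, then a mostly horizontal
        stroke followed by a mostly vertical stroke (two orthogonal strokes)."""
        # 1) Dwell phase: how long the gaze stays near the starting point.
        i = 0
        while (i < len(samples)
               and abs(samples[i].x - samples[0].x) <= DWELL_RADIUS_PX
               and abs(samples[i].y - samples[0].y) <= DWELL_RADIUS_PX):
            i += 1
        if i == 0 or i >= len(samples) or samples[i - 1].t_ms - samples[0].t_ms < DWELL_MS:
            return False
        # 2) Gesture phase: split the remaining samples in half and require
        #    horizontal motion first, then vertical motion.
        rest = samples[i - 1:]
        mid = rest[len(rest) // 2]
        first_horizontal = (abs(mid.x - rest[0].x) >= MIN_STROKE_PX
                            and abs(mid.y - rest[0].y) < MIN_STROKE_PX)
        second_vertical = (abs(rest[-1].y - mid.y) >= MIN_STROKE_PX
                           and abs(rest[-1].x - mid.x) < MIN_STROKE_PX)
        return first_horizontal and second_vertical
    ```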
  • Ryohei Maruyama, Sho Takahashi, Toru Hagiwara, Yoshihiro Terakura
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 19-28
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    This study attempts to support merging behavior by providing information about the behavior of mainline vehicles through a next-generation Human-Machine Interface (HMI) based on Augmented Reality (AR). In this paper, we also verify the effectiveness of AR visual information on the driver by analyzing gaze. Specifically, the system informs the driver, via AR, of the timing at which the merging vehicle cannot merge; this information is generated from the position and speed of the mainline vehicles. For the experiments, we developed a Virtual Reality (VR) driving simulator that can measure gaze. Using this simulator, we performed driving experiments in which various kinds of information, including AR, were provided to the merging vehicle, and we analyzed the drivers' gaze from the obtained data. In this gaze analysis, we verify whether the information provision reduces the driver's burden of checking the mainline. Furthermore, the timing of merging is measured to evaluate the safety and smoothness of merging. The experimental results suggest that presenting unsuitable merging timings via AR enables the driver to easily understand the condition of the mainline.
    Download PDF (2842K)
  • Takuya Sarugaku, Kanji Kitahama, Mitsuho Yamada
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 29-42
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    In recent years, attention to sports has increased due to international sports competitions, and sports science research has been conducted to develop athletes who can compete internationally. In sports science, the measurement of eye movements has attracted a great deal of attention because it can reveal the basis of an athlete's superior performance. However, measuring eye movements during actual competition has been difficult with conventional wired eye movement measurement devices. We developed a wireless eye movement measurement device and measured the line of sight during actual competition in various sports.
    Download PDF (6088K)
  • Takayoshi Terashita, Tetsuo Sato, Toshihiro Ogura, Kunio Doi
    Article type: Short Note
    2021 Volume 23 Issue 1 Pages 43-46
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    The purpose of this study is to develop application software for evaluating medical image interpretation skills using an eye-tracking technique. Our software implements receiver operating characteristic (ROC) analysis and an ROC-type curve for detection and localization tasks, the measurement of omission errors (scanning, recognition, and decision-making errors), and fixation heat maps. We evaluated participants consisting of radiological technologists and students from a radiological technology department. The software allows interpretation skills to be evaluated according to user characteristics, using eye movement information recorded during medical image reading. (A small ROC-curve sketch follows this entry.)
    Download PDF (1158K)
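    As an illustration of the kind of ROC analysis such software performs, here is a small Python sketch that turns per-case confidence ratings into an empirical ROC curve. The ratings and labels are invented, and the paper's actual detection/localization analysis may differ.

    ```python
    import numpy as np

    def roc_points(scores, labels):
        """Empirical ROC: (false positive rate, true positive rate) at every
        rating threshold. labels: 1 = abnormality present, 0 = absent."""
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        thresholds = np.unique(scores)[::-1]
        pos, neg = labels.sum(), (labels == 0).sum()
        tpr = [((scores >= t) & (labels == 1)).sum() / pos for t in thresholds]
        fpr = [((scores >= t) & (labels == 0)).sum() / neg for t in thresholds]
        return np.array([0.0] + fpr), np.array([0.0] + tpr)

    # Toy reading session: 5-point confidence ratings for ten cases.
    scores = [5, 4, 4, 3, 2, 5, 1, 2, 3, 1]
    labels = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]
    fpr, tpr = roc_points(scores, labels)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal AUC
    print(f"AUC = {auc:.2f}")  # one summary measure of interpretation skill
    ```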
  • Mizuki Shinoda, Minoru Nakayama, Izumi Ito
    Article type: Short Note
    2021 Volume 23 Issue 1 Pages 47-50
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    Football skill and viewing behavior during football games affect features of eye movement. We analyze this by recording eye movements while participants watch a football game video, using pre-defined areas of interest (AOIs). To capture dynamic viewing behavior, we define dynamic AOIs on the main objects in the scenes of the video and analyze the AOI sequences generated from participants' transitions between AOIs. The similarities between the AOI sequences were computed and compared. Cluster analysis showed that the patterns of AOI sequences can classify viewers into those who have football skills and those who do not. (A small sequence-comparison sketch follows this entry.)
    Download PDF (489K)
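    A sketch of one way AOI sequences can be compared, assuming each fixated AOI is encoded as a single label character and similarity is measured by normalized edit distance. The labels, sequences, and measure are illustrative assumptions, not the paper's exact method.

    ```python
    from itertools import combinations

    def edit_distance(a: str, b: str) -> int:
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    # One character per fixated AOI: B = ball, P = player, G = goal, F = field.
    sequences = {
        "viewer_1": "BPBPGBP",
        "viewer_2": "BPGBPBP",
        "viewer_3": "BFFBFFF",
        "viewer_4": "FBFFFBF",
    }
    for (n1, s1), (n2, s2) in combinations(sequences.items(), 2):
        d = edit_distance(s1, s2) / max(len(s1), len(s2))
        print(f"{n1} vs {n2}: normalized distance {d:.2f}")
    ```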
  • Ryohei Saijo, Tae Sato, Shin-ichiro Eitoku, Masahiro Watanabe
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 51-64
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    Wearable devices such as smart glasses and head-mounted displays are becoming popular. Information display systems can routinely show users information that requires immediate viewing. When these systems are applied to health care services, users can receive suggestions on appropriate healthy behaviors to take at a given time. When displaying information, the annoyance that users feel should be low and the certainty that the information is viewed should be high; however, the two are in a trade-off relationship. We therefore propose a display method that reduces annoyance while increasing the certainty that information will be viewed. We focus on the actions users perform to view information and hypothesize that actions prompted by some form of stimulation can decrease the annoyance felt in viewing information. We thus propose an information display method that uses visual attention guidance, in which a small circle within the user's peripheral vision changes in brightness. In an experiment with 10 participants, the method reduced annoyance and increased certainty, indicating that it is effective and that prompting users to focus on information with this method can decrease annoyance and increase the certainty that information will be immediately focused on. (A minimal parameter sketch follows this entry.)
    Download PDF (4724K)
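    A minimal parameter sketch of the idea (values are assumptions, not the authors' stimulus): a small circle placed in the peripheral visual field whose brightness is modulated over time to draw attention toward the displayed information.

    ```python
    import math

    FPS = 60                # assumed rendering rate
    MODULATION_HZ = 2.0     # assumed brightness modulation frequency
    BASE_LUMINANCE = 0.5    # normalized 0..1
    MODULATION_DEPTH = 0.3

    def cue_luminance(frame: int) -> float:
        """Brightness of the peripheral cue at a given frame."""
        t = frame / FPS
        return BASE_LUMINANCE + MODULATION_DEPTH * math.sin(2 * math.pi * MODULATION_HZ * t)

    # In a real display these values would drive the circle's fill color each frame.
    print([round(cue_luminance(f), 2) for f in range(0, 60, 10)])
    ```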
  • Seigo Kobayashi, Ayumi Sato, Masahiko Nawate, Fumihito Ito
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 65-72
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    Joint attention is important for infants' language development and is often observed in interactions during shared book reading. Although gaze is an important element in observing joint attention, gaze analysis has until now been carried out by experimenters manually coding video images, which requires considerable labor. To automate this coding, we have developed a system that identifies joint attention between mother and infant based on face directions detected by machine learning from images captured by a webcam. In this paper, we discuss gaze patterns obtained with the system under development, compared against the experimenter's manual coding, and examine the effect of the deviation between gaze direction and face direction on the classification of gaze patterns. (See the illustrative sketch after this entry.)
    Download PDF (2386K)
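    An illustrative geometric sketch of how joint attention could be flagged once face directions are available; the 2D geometry, target radius, and positions are assumptions, and the authors' classifier may differ. Joint attention is flagged when both face-direction rays fall on the same target region (e.g., the book).

    ```python
    import numpy as np

    def looks_at(head_pos, face_dir, target_center, target_radius) -> bool:
        """True if the ray from head_pos along face_dir passes within
        target_radius of target_center (all 2D, arbitrary units)."""
        face_dir = np.asarray(face_dir, dtype=float)
        face_dir = face_dir / np.linalg.norm(face_dir)
        to_target = np.asarray(target_center, dtype=float) - np.asarray(head_pos, dtype=float)
        along = to_target @ face_dir
        if along <= 0:          # target is behind the person
            return False
        closest = np.linalg.norm(to_target - along * face_dir)
        return closest <= target_radius

    book = ([0.0, 0.0], 0.15)   # assumed target region: center and radius
    mother_attends = looks_at([ 0.5, 0.4], [-0.5, -0.4], *book)
    infant_attends = looks_at([-0.5, 0.4], [ 0.5, -0.4], *book)
    print("joint attention:", mother_attends and infant_attends)
    ```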
  • Takashi Nagamatsu, Yusuke Sugano, Kentaro Takemura
    Article type: Review Paper
    2021 Volume 23 Issue 1 Pages 73-88
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    The need for personal calibration has been one of the most significant technical limitations in video-based eye tracking, and there have been many research attempts on calibration-free techniques. This paper reviews calibration-free eye tracking techniques from two perspectives: person-independent gaze estimation and implicit personal calibration. Person-independent gaze estimation is a technique that infers gaze directions without considering person-dependent parameters. The methods in this category include both model-based approaches that estimate the optical axis angle and appearance-based approaches that obtain generic gaze estimation functions. Implicit personal calibration refers to techniques that adjust person-dependent parameters for better estimation accuracy without relying on any explicit calibration instruction. The methods in this category utilize various cues such as bottom-up fixation prediction, environmental/anatomical constraints, and correlation in temporal patterns to predict gaze target locations. This review summarizes representative approaches in both categories and discusses future perspectives and potential application scenarios. (A toy sketch of the person-independent idea follows this entry.)
    Download PDF (8497K)
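    As a toy illustration of the appearance-based, person-independent idea the review covers, the sketch below fits a single ridge-regression model from synthetic eye-appearance features to gaze angles and applies it to a new user without any calibration. Everything here (data, feature size, model) is a placeholder, not a method from the review.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_feat = 200, 64    # e.g., 64 pixel-intensity features per eye patch
    X = rng.normal(size=(n_train, n_feat))
    true_w = rng.normal(size=(n_feat, 2))
    Y = X @ true_w + 0.1 * rng.normal(size=(n_train, 2))  # gaze (yaw, pitch), degrees

    # Ridge regression fit over many (hypothetical) people at once.
    lam = 1.0
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

    # A "new user" needs no calibration: just apply the generic model.
    x_new = rng.normal(size=(1, n_feat))
    print("estimated gaze (yaw, pitch):", (x_new @ W).round(2))
    ```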
  • Ikumu Kakinuma, Setsu Komiyama
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 89-100
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    Head-mounted displays (HMDs) with built-in eye trackers have been put to practical use, and eye-gaze operation is no longer unusual in VR. However, using only the eyes for pointing tasks raises various problems, such as the Midas touch problem. In this research, we therefore prototyped a pointing system that can switch pointer control from eye tracking to a controller device in VR space at the user's preferred timing, and conducted a two-dimensional pointing experiment. The results show that this method is user-friendly for 2D pointing tasks in VR space. (A sketch of the switching behavior follows this entry.)
    Download PDF (2316K)
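    A sketch, under assumptions, of the switching behavior the abstract describes: the pointer follows the gaze until the controller is moved, then the controller takes over for fine adjustment. Parameter values and the frame format are invented for illustration and are not the authors' prototype.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Frame:
        gaze: tuple[float, float]    # gaze point on the 2D target plane
        stick: tuple[float, float]   # controller stick deflection
        trigger: bool                # confirm / click

    def update_pointer(pointer, frame, mode, gain=0.02, deadzone=0.1):
        """Return (new pointer position, new mode). mode is 'gaze' or 'controller'."""
        sx, sy = frame.stick
        if mode == "gaze" and (abs(sx) > deadzone or abs(sy) > deadzone):
            mode = "controller"        # user takes over at their preferred timing
        if mode == "gaze":
            pointer = frame.gaze       # coarse, fast positioning by eye
        else:
            pointer = (pointer[0] + gain * sx, pointer[1] + gain * sy)  # fine adjustment
        if frame.trigger:
            mode = "gaze"              # after selection, fall back to gaze control
        return pointer, mode

    pointer, mode = (0.0, 0.0), "gaze"
    pointer, mode = update_pointer(pointer, Frame((0.4, 0.2), (0.0, 0.0), False), mode)
    pointer, mode = update_pointer(pointer, Frame((0.4, 0.2), (0.5, 0.0), False), mode)
    print(pointer, mode)
    ```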
Papers on General Subjects
  • -Comparison between Blind and Sighted People-
    Tetsuya Watanabe, Takahiro Yamazaki
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 101-108
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    To find highly discriminable combinations of circular tactile point symbols on swell paper, we presented pairs of tactile circular point symbols with diameters varying between 1.0 mm and 5.0 mm in 0.5 mm steps to 12 blind and 12 sighted participants in a paired comparison experiment. The participants were asked which dot was larger or whether both were the same size. The difference thresholds for circle size increased as the diameter of the reference stimulus increased and were found to range from 0.5 mm to 0.7 mm. These values are lower than those obtained in previous research; possible reasons for the difference are the difference in diameter steps and the removal of data from participants whose judgments were highly ambiguous. A two-way ANOVA between the participant groups did not show a significant main effect; the blind participants did not surpass the sighted in terms of the difference threshold. The threshold values obtained in this experiment were utilized in the design of a tactile star wheel for the blind. (A small interpolation sketch of how such a threshold can be read off follows this entry.)
    Download PDF (6621K)
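    An illustrative calculation only (the numbers are invented, not the paper's data) of how a difference threshold can be read off paired-comparison responses by interpolating to the 75% point of the psychometric data.

    ```python
    import numpy as np

    # Comparison-minus-reference diameter differences (mm) around a 3.0 mm
    # reference, and the (hypothetical) proportion of "comparison is larger"
    # responses for each difference.
    diff_mm = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    p_larger = np.array([0.50, 0.62, 0.74, 0.86, 0.95])

    # Difference threshold: the size difference at which 75% of responses
    # call the comparison larger, by linear interpolation.
    threshold = np.interp(0.75, p_larger, diff_mm)
    print(f"difference threshold ~ {threshold:.2f} mm")
    ```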
  • Saizo Aoyagi, Satoshi Fukumori, Michiya Yamamoto
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 109-120
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    The impressions made by communication robots change through long-term use in human-robot communication because of the novelty effect and the familiarity effect. To measure such changes in impression, we created a psychological assessment scale by conducting a long-term experiment, focusing on the characteristic nature and tool-like nature of communication robots. We then used the scale to evaluate changes in impressions of Kiropi v2, RoBoHoN, and an iPad in another long-term experiment. Results showed that scores corresponding to the characteristic nature and tool-like nature of the impression basically improve the more users use the device. In addition, some impressions showed non-linear changes due to the novelty and familiarity effects. The results also showed that the impressions differed between the robots, with Kiropi v2 rated best. The psychological scale developed in this study may be useful for future developers of communication robots who want to evaluate impressions of robots intended for long-term use.
    Download PDF (2509K)
  • Yuya Ieiri, Hung-Ya Tsai, Reiko Hishiyama, Yohei Fujinoki
    Article type: Original Paper
    2021 Volume 23 Issue 1 Pages 121-134
    Published: February 25, 2021
    Released on J-STAGE: February 25, 2021
    JOURNAL FREE ACCESS
    Good tourism routes can be designed using changes in the degree of excitement of tourists. Catharsis curves describe such trends and are generally constructed from the perspective of tourism designers. However, as such trends may vary from tourist to tourist, applying catharsis curves to individual tour-plans is difficult. Individual excitement level (IEL) curves present the chronological trend of the excitement experienced by a particular tourist traveling on a certain tourist route. A catharsis curve that is appropriate for various tourists can be generated by collecting multiple IEL curves. However, there is still no efficient method for collecting many IEL curves. Therefore, in our study, we develop an IEL curve generation method that enables simultaneous collection of the generated IEL curves. Furthermore, we verify the proposed method through an experiment in Kyoto. We demonstrate that the proposed method effectively generates IEL curves when the number of times the degree of excitement increases is within the applicable range.
    Download PDF (3369K)