The Transactions of Human Interface Society
Online ISSN : 2186-8271
Print ISSN : 1344-7262
ISSN-L : 1344-7262
Volume 24, Issue 3
Papers on General Subjects
  • Genya Abe, Kenji Sato, Makoto Itoh
    Article type: Original Paper
    2022 Volume 24 Issue 3 Pages 141-150
    Published: August 25, 2022
    Released on J-STAGE: August 25, 2022
    JOURNAL FREE ACCESS
    It is important to understand the factors that affect drivers' trust in an automated driving system in order to maintain their willingness to rely on the system. The purpose of this study was to investigate the effects of differences in the system's response to another vehicle during automated driving on drivers' subjective ratings of trust in the system. A driving simulator study was conducted with elderly and non-elderly participants (sixteen each). Two driving scenes were prepared: one in which the automated driving vehicle encountered another vehicle intending to change lanes in front of it, and one in which another vehicle intended to merge into its driving lane in front of it at an interchange. Three control strategies were assumed as the system's response to the other vehicle: the automated driving system gives way to the other vehicle, the system does not give way to the other vehicle, or the system asks the driver to take over the driving control (RtI: Request to Intervene). Differences in driver behaviour were also assessed when drivers were required to take over driving control from the automated driving system. The results demonstrated that the trust of non-elderly drivers was more impaired than that of elderly drivers when the automated driving system did not give way to the other vehicle. Non-elderly drivers gave way to the other vehicle sooner and more often through their own driving than elderly drivers did. Relationships between drivers' workload when using automated driving and their trust are also discussed in this paper.
  • Mana Sasagawa, Takayuki Itoh, Itiro Siio
    Article type: Original Paper
    2022 Volume 24 Issue 3 Pages 151-166
    Published: August 25, 2022
    Released on J-STAGE: August 25, 2022
    JOURNAL FREE ACCESS
    We propose a novel indoor item-finding system that places less burden on users by using the 3D positions of passive RFID (Radio Frequency Identification) tags, estimated from the history of tag detection and reader movement. Our proposed system assists users in finding items by showing the distance and direction to the item they want to find whenever any of the tags pre-attached in the room are detected. We consider that our system makes the following two contributions. (1) The 3D positions of all tags are estimated from the history of tag detection and reader movement obtained while using our system, so users do not need to input the positions manually. (2) The distance and direction to the item are shown whenever any tag is detected, so users can find the item without approaching it closely enough to directly detect the tag attached to it. To confirm the usefulness of our system, we conducted an experiment in which users found items in an actual living space. As a result, we confirmed that our system enables users to find items in a short time with less subjective burden on users. (An illustrative sketch of the position estimation follows this entry.)
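    The abstract above does not spell out how the tag positions are recovered from the detection log. As a rough illustration only, the sketch below estimates each tag's 3D position as the centroid of the reader positions recorded at the moments that tag was read; the log format and the centroid approach are assumptions made for illustration, not the authors' algorithm.

```python
# Minimal sketch (assumed, not the authors' method): estimate each passive
# tag's 3D position from the logged reader trajectory, using the centroid of
# the reader positions at which that tag was detected.
from collections import defaultdict
from typing import Dict, List, Tuple

XYZ = Tuple[float, float, float]

def estimate_tag_positions(
    detections: List[Tuple[str, XYZ]],  # (tag_id, reader position when the tag was read)
) -> Dict[str, XYZ]:
    sums: Dict[str, List[float]] = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts: Dict[str, int] = defaultdict(int)
    for tag_id, (x, y, z) in detections:
        sums[tag_id][0] += x
        sums[tag_id][1] += y
        sums[tag_id][2] += z
        counts[tag_id] += 1
    # Average the reader positions per tag to get a rough tag location.
    return {
        tag_id: (s[0] / counts[tag_id], s[1] / counts[tag_id], s[2] / counts[tag_id])
        for tag_id, s in sums.items()
    }

if __name__ == "__main__":
    log = [
        ("shelf-A", (1.0, 0.2, 1.5)),
        ("shelf-A", (1.2, 0.4, 1.5)),
        ("desk-B", (3.0, 2.0, 0.8)),
    ]
    print(estimate_tag_positions(log))
```

    Given such estimates, the distance and direction shown to the user would simply be the vector from the current reader position to the estimated position of the target item's tag.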
  • Takahisa Uchida, Tomo Funayama, Kurima Sakai, Takashi Minato, Hiroshi ...
    Article type: Original Paper
    2022 Volume 24 Issue 3 Pages 167-180
    Published: August 25, 2022
    Released on J-STAGE: August 25, 2022
    JOURNAL FREE ACCESS
    The purpose of this study is to promote relationship building between users who meet for the first time in a three-party dialogue involving one robot and two users. It is often difficult for people who have never met before to talk with each other because of psychological barriers caused by mutual unfamiliarity. In this study, we developed a dialogue android that promotes relationship building between the users without their speaking directly to each other. It induces each user to take the other's perspective by asking the user to speak for the other's opinion. The experimental results confirmed that the proposed method promotes relationship building between the users and their sense of dialogue. It also improved the impression of the android, of the dialogue with it, and of the dialogue among the three parties as a whole. These results suggest that the proposed method is an effective way to promote relationship building between people meeting for the first time when an android takes part in a three-party dialogue.
  • Tomoki Kajinami, Ryuta Kobayashi
    Article type: Original Paper
    2022 Volume 24 Issue 3 Pages 181-194
    Published: August 25, 2022
    Released on J-STAGE: August 25, 2022
    JOURNAL FREE ACCESS
    In this paper, we propose an interface that teaches video game players to execute the basic movements of a striker in soccer video games. Typically, soccer video games are played from an aerial view of the field that is centered on the ball or on the character currently in control of the ball. However, it is difficult for novice players to learn how to move a character efficiently according to the various situations on the field, because such games generally do not interactively provide players with sufficient information on effective movement patterns. To overcome this, we define multiple field situations and propose an interface that enables novice players to learn to perform elementary movements of their strikers according to the situation. We describe our implementation of an interface that instructs a player on how to move a striker by overlaying a transparent window on the game screen. The proposed interface displays shapes on the window and makes voice announcements to guide the player toward the direction and location of the suggested movement. The results of an experimental evaluation show that the proposed interface enabled novice players to learn how to perform elementary movements with the striker. (An illustrative overlay sketch follows this entry.)
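    The abstract describes drawing guidance shapes on a transparent window laid over the game screen. The sketch below is an assumed illustration of that idea using Python's tkinter (the authors' actual implementation, including the voice announcements, is not described here); window transparency via the "-alpha" attribute is platform-dependent.

```python
# Assumed sketch, not the authors' implementation: a topmost, borderless,
# semi-transparent window overlaid on the game screen, drawing an arrow that
# suggests the striker's movement direction.
import tkinter as tk

def show_movement_hint(dx: int, dy: int, duration_ms: int = 1500) -> None:
    root = tk.Tk()
    root.overrideredirect(True)          # no window decorations
    root.attributes("-topmost", True)    # keep the overlay above the game window
    root.attributes("-alpha", 0.5)       # semi-transparent (platform-dependent)
    root.geometry("400x400+200+200")     # position over the relevant screen area

    canvas = tk.Canvas(root, width=400, height=400, highlightthickness=0)
    canvas.pack()
    cx, cy = 200, 200
    # Draw an arrow from the window centre toward the suggested direction.
    canvas.create_line(cx, cy, cx + dx, cy + dy, width=8, arrow=tk.LAST, fill="red")

    root.after(duration_ms, root.destroy)  # remove the hint after a short time
    root.mainloop()

if __name__ == "__main__":
    show_movement_hint(dx=120, dy=-60)  # e.g. "move up and to the right"
```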
  • Hideaki Takahira, Miho Shinohara, Yusuke Nosaka, Masaya Yokouchi, Mits ...
    Article type: Original Paper
    2022 Volume 24 Issue 3 Pages 195-204
    Published: August 25, 2022
    Released on J-STAGE: August 25, 2022
    JOURNAL FREE ACCESS
    We have been conducting research focusing on the human gaze as a principal biological reaction while viewing videos. In this paper, we clarify the difference in the area of the gaze region under different sound conditions: monaural (1.0 ch), stereo (2.0 ch), and surround (5.1 ch). We measured the gaze of subjects viewing video scenes of natural landscapes and calculated the horizontal and vertical standard deviations of the gaze points and the area of the gaze region defined from them. A two-way analysis of variance shows main effects of the sound and scene conditions and their interaction on the vertical standard deviation of the gaze points, and a main effect of the sound condition on the area of the gaze region. In addition, a multiple comparison test shows significant differences in the vertical standard deviation for the interaction between the sound and scene conditions, and in the area of the gaze region between the monaural condition and the stereo or surround conditions. The hypothesis that the variance of gaze points is expanded by surround sound was confirmed experimentally. (An illustrative sketch of the dispersion measures follows this entry.)
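    The dispersion measures named above can be computed directly from the recorded gaze points. The sketch below assumes the "area of gaze region" is the ellipse spanned by the horizontal and vertical standard deviations; the paper's exact definition may differ, so this is an illustration only.

```python
# Sketch: horizontal/vertical standard deviations of gaze points and an
# elliptical "area of gaze region" derived from them (definition assumed).
import math
import numpy as np

def gaze_dispersion(gaze_xy: np.ndarray) -> tuple:
    """gaze_xy: (N, 2) array of gaze points in screen coordinates (e.g. pixels)."""
    sd_x = float(np.std(gaze_xy[:, 0], ddof=1))  # horizontal standard deviation
    sd_y = float(np.std(gaze_xy[:, 1], ddof=1))  # vertical standard deviation
    area = math.pi * sd_x * sd_y                 # ellipse with semi-axes sd_x, sd_y
    return sd_x, sd_y, area

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic gaze points centred on a 1920x1080 screen, for demonstration.
    points = rng.normal(loc=[960, 540], scale=[120, 80], size=(500, 2))
    print(gaze_dispersion(points))
```

    Per-condition values computed this way would then be compared across the sound and scene conditions, for example with a two-way analysis of variance as described in the abstract.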