The Transactions of Human Interface Society
Online ISSN : 2186-8271
Print ISSN : 1344-7262
ISSN-L : 1344-7262
Current issue
Displaying 1-7 of 7 articles from this issue
Papers on General Subjects
  • Yuki Takeyama, Myungguen Choi, Buntarou Shizuki
    Article type: Original Paper
2025, Volume 27, Issue 2, Pages 47-60
    Published: May 25, 2025
    Released on J-STAGE: May 25, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

3D modeling software often presents challenges for beginners because of its steep learning curve. To address this, we developed PhoneCanvas+, a 3D sketching system designed for low installation and learning costs. The system allows users to create lines and surfaces, and to delete 3D sketches, through mid-air hand gestures above a smartphone. These gestures mimic everyday hand movements, such as holding a pen to draw lines or moving a hand sideways to create surfaces. We conducted a user study with 12 beginners in 3D modeling to evaluate PhoneCanvas+ in terms of sketched objects, time, and usability. The results showed that, in a simplified setup, participants completed sketching tasks in about 1 minute after about 10 minutes of practice.

  • Masahiko Kato, Maki Sakamoto, Kenji Tanaka
    Article type: Original Paper
2025, Volume 27, Issue 2, Pages 61-70
    Published: May 25, 2025
    Released on J-STAGE: May 25, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

During long periods of driving with a Level 2 driver assistance system, two issues have become apparent: low arousal due to continuous low stimulation, and over-reliance on the system. To address these problems, we believe it is effective to enable drivers to enjoy driving even while assistance is active, and we have proposed an index called “driver excitement” as a measure of this feeling. The proposed method combines the MOS evaluation and the SD method, but in our previous report the estimated values diverged from the MOS evaluation scores. In this paper, we propose a revised estimation formula and validate it in a Level 2 automated driving experiment on a Metropolitan Expressway route using a driving simulator. The results show that the accuracy of the estimates has improved. We also examine the number of adjective pairs required for the SD method in this estimation approach and propose a simplified estimation method that uses only the adjective pairs of the evaluation factor.

  • Mamoru Yoshizoe, Hiromitsu Hattori
    Article type: Original Paper
2025, Volume 27, Issue 2, Pages 71-84
    Published: May 25, 2025
    Released on J-STAGE: May 25, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

In today's rapidly progressing information age, the diversification of individual values has led to prominent social issues arising from value conflicts. We address this challenge by developing AIR-VAS, a discussion support system designed to promote mutual understanding and synergy among groups with diverse values. The core function of AIR-VAS is to facilitate awareness of others' values during group discussions. It achieves this by recognizing and sharing other groups' characteristic opinions, enabling participants to explore new perspectives on and dimensions of the discussion topic. This paper introduces a method for discussion scene segmentation based on sentence vectors that capture textual semantics using neural language models. By integrating a mechanism that selects stimulus information based on scene segmentation into the system, we enhance the effectiveness of awareness support. Our experimental results demonstrate how unique ideas selected by sentence vector distance promote diverse idea generation, providing valuable insights into fostering synergy among individuals with differing values in group discussions.
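    The selection step this abstract describes, picking out a "characteristic" opinion by its distance in sentence-vector space, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: `most_distinctive` and the toy 2-D vectors are hypothetical stand-ins for embeddings produced by a neural language model.

    ```python
    from math import sqrt

    def cosine_distance(a, b):
        """1 - cosine similarity of two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return 1.0 - dot / norm

    def most_distinctive(vectors):
        """Index of the vector farthest from the centroid of all vectors,
        i.e. the most 'characteristic' opinion under this toy criterion."""
        n = len(vectors)
        dim = len(vectors[0])
        centroid = [sum(v[i] for v in vectors) / n for i in range(dim)]
        dists = [cosine_distance(v, centroid) for v in vectors]
        return dists.index(max(dists))

    # Toy 2-D "sentence vectors": two similar opinions and one outlier.
    opinions = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]]
    outlier = most_distinctive(opinions)  # index of the outlying opinion
    ```

    With real sentence embeddings the same farthest-from-centroid criterion would surface the opinion least like the rest of the group's, which is the kind of stimulus information the system shares to prompt new perspectives.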

  • Takeaki Shionome, Daisuke Wakatsuki, Tatsuya Arai
    Article type: Original Paper
2025, Volume 27, Issue 2, Pages 85-96
    Published: May 25, 2025
    Released on J-STAGE: May 25, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

Captions play an extremely important role in helping Deaf and hard-of-hearing (DHH) people understand movie content. However, it has been pointed out that the visual burden of viewing video and captions simultaneously is significant, and that adjusting playback to a desired position is difficult. We therefore propose a novel-game-type video viewing method and develop a new system (captioNoveL). In this study, we conducted experiments with DHH students using conventional captioned video materials and our novel-game-type materials, then analyzed the desirability of the educational materials and the students' learning behavior. Questionnaire results showed that participants felt they could proceed at their own pace with the novel-game-type materials, although some found the materials difficult to operate. Individual differences were also observed in the readability of captions and in the sense of fatigue. Furthermore, quantitative analysis of learning behavior showed that the novel-game-type materials allowed participants to actively select the locations they wanted to refer to, and that participants skipped content they already knew.

  • Yoshitaka Aoki, Shinpei Ogata, Hiroyuki Nakagawa, Kazuki Kobayashi
    Article type: Original Paper
2025, Volume 27, Issue 2, Pages 97-112
    Published: May 25, 2025
    Released on J-STAGE: May 25, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

In recent years, multimodal interfaces have attracted attention as they are increasingly incorporated into off-the-shelf user interfaces (UIs) as extended UIs. However, conflicts among the operations of multiple interfaces can cause system behavior contrary to human expectations, and because there is no unified design concept spanning extended UIs and the system, conventional research has had difficulty identifying the behaviors that cause such conflicts. In this paper, we propose a method to detect system inconsistencies caused by the combination of a basic UI and an extended UI using model checking, a formal method. In the proposed method, a model representing human cognitive activity and a model representing system behavior are integrated via the UI as a state transition model, yielding a model for inspection. We use model checking to determine whether system inconsistencies occur in this integrated model, and by analyzing how such behavior arises from the model-checking results, we can identify its cause. As a case study, we evaluated an example of operating an air conditioner with a basic UI and an extended UI of remote controls.
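    The core idea, exhaustively searching an integrated state-transition model for reachable states that violate an expectation, can be illustrated with a toy breadth-first reachability check. This is not the paper's method (which applies a model checker to integrated models of human cognition and system behavior); the air-conditioner states, `transitions`, and `is_conflict` below are hypothetical.

    ```python
    from collections import deque

    def reachable_conflicts(initial, transitions, is_conflict):
        """Breadth-first search over a state-transition model, returning
        every reachable state that violates the checked property."""
        seen, queue, bad = {initial}, deque([initial]), []
        while queue:
            state = queue.popleft()
            if is_conflict(state):
                bad.append(state)
            for nxt in transitions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return bad

    # Hypothetical aircon: state = (mode set by basic UI, mode set by extended UI).
    MODES = ("off", "cool", "heat")

    def transitions(state):
        basic, ext = state
        # Either UI may independently switch to any mode at any time.
        return [(m, ext) for m in MODES] + [(basic, m) for m in MODES]

    def is_conflict(state):
        basic, ext = state
        # Both UIs active but demanding different modes: behavior is ambiguous.
        return "off" not in state and basic != ext

    bad_states = reachable_conflicts(("off", "off"), transitions, is_conflict)
    ```

    A real model checker (e.g. exploring CTL/LTL properties) would additionally return the transition sequence leading to each bad state, which is what lets the authors analyze how the conflicting behavior arises and identify its cause.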

  • Sanae Yamashita, Ryuichiro Higashinaka
    Article type: Original Paper
2025, Volume 27, Issue 2, Pages 113-124
    Published: May 25, 2025
    Released on J-STAGE: May 25, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

In dialogue services, a dialogue handover, in which one operator transfers the conversation to another to respond to the user’s requests, occurs frequently. For a smooth handover, it is essential that the operator taking over properly understands the dialogue context; however, it is unclear what information is useful for a handover. In this study, we explored what information helps operators seamlessly continue dialogues through a dialogue handover experiment. Specifically, operators took notes during dialogues with users, and these notes were used to hand the dialogue over to another operator repeatedly. By analyzing the content of the notes that converged through this process, we identified information useful for dialogue handover. We found that sequence organizations are effective for chit-chat, while key-value pairs are more suitable for task-oriented dialogues. This finding should be useful for constructing interfaces that facilitate dialogue handover.

  • Ikkyu Nishimura, Yohei Murakami, Mondheera Pituxcoosuvarn
    Article type: Original Paper
2025, Volume 27, Issue 2, Pages 125-132
    Published: May 25, 2025
    Released on J-STAGE: May 25, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

A word in one language and its translation in another do not necessarily represent the same concept, owing to asymmetries in meaning and cultural context, especially for polysemous words. In recent years, as the accuracy of machine translation has improved, it has increasingly been used to support multilingual communication; however, such conceptual differences can lead to misunderstandings. We therefore proposed a method for extracting conceptual differences in translation pairs that quantifies the concepts represented by words using conceptual dictionaries. Specifically, our method uses WordNet and Multilingual-WordNet, its multilingual counterpart. The concept of each word in Japanese, Chinese, and Indonesian is quantified based on synsets, the smallest unit of concept in WordNet, making it possible to extract conceptual differences among words whose concepts overlap across these languages. Our method finds 27,005 of 104,626 Japanese-Chinese, 60,581 of 173,233 Japanese-Indonesian, and 14,175 of 42,468 Chinese-Indonesian word pairs in WordNet to be conceptually different.
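    The synset-based comparison this abstract describes amounts to set operations over each word's synsets. The sketch below illustrates that idea with plain Python sets; the synset IDs and the example word pair are hypothetical stand-ins for actual WordNet/Multilingual-WordNet lookups, not data from the paper.

    ```python
    def concept_difference(synsets_a, synsets_b):
        """Given the synset sets of a word and its translation, return the
        shared concepts and each side's unshared concepts."""
        shared = synsets_a & synsets_b
        return shared, synsets_a - shared, synsets_b - shared

    # Hypothetical synsets for a polysemous translation pair: the words
    # share the "hand" sense but each carries senses the other lacks.
    word_lang1 = {"hand.n.01", "help.n.02", "worker.n.01"}
    word_lang2 = {"hand.n.01", "skill.n.01"}

    shared, only_1, only_2 = concept_difference(word_lang1, word_lang2)
    # The pair overlaps conceptually (shared is non-empty) yet is still
    # "conceptually different" because each side has unshared synsets.
    is_different = bool(only_1 or only_2)
    ```

    Applied over all translation pairs in the dictionary, this classification yields counts like those reported in the abstract: pairs whose synset sets overlap but do not coincide are flagged as conceptually different.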
