The Transactions of Human Interface Society
Online ISSN : 2186-8271
Print ISSN : 1344-7262
ISSN-L : 1344-7262
Volume 22, Issue 3
Papers on Special Issue Subject “Human-AI Collaboration”
  • Xiaoshun Meng, Naoto Yoshida, Xin Wan, Tomoko Yonezawa
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 235-250
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    In this paper, we investigate a communication robot that can show involuntary expressions on its skin, such as goose bumps and perspiration, to explore the possibility of instinctive reactions in a robot. In communication with others, humans express not only feelings that can be recognized and expressed intentionally, but also instinctive emotions arising from physiological reactions that appear reflexively and involuntarily, before intentional control. Human-robot communication grounded in the robot's internal state may become more realistic through expressions of instinctive fear, tension, and relaxation. Using a testbed robot, we verified the effectiveness of the robot's expressions of a) goose-bump-like embossing and b) perspiration-like water particles.
    Download PDF (2308K)
  • Ying Zhong, Makoto Kobayashi, Masaki Matsubara, Atsuyuki Morishima
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 251-262
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    Microtasks, which are tasks that can be performed online in a short period of time, have been attracting a lot of attention as a new form of work. Some online microtasks require workers to look at web pages with images; to complete such tasks, workers are often required to look at Web pages other than the task instruction page, such as shopping and social networking websites. Therefore, if visually impaired people want to work on these tasks, they have to rely on the alt texts attached to the images. However, alt texts on many unrelated images may make the pages harder to read. This paper proposes a crowd-in-the-loop alt text addition method for online microtasks. The proposed method combines AI and crowdsourcing to add alt texts to only the important images. To check how the proposed method contributes to the real world, we conducted an experiment involving 18 sighted participants (simulating visually impaired participants) and 4 visually impaired participants. In the experiment, we compared three conditions in online microtasks: (1) no alt text, (2) alt text on only the important images, and (3) alt text on all images. Twelve tasks on 4 mock web pages were designed as the experimental tasks. We observed that the condition with alt text on only the important images facilitated the workers' understanding, and worker performance in this condition was better than in the other conditions. The proposed method is expected to improve the performance of visually impaired workers in online microtasks. (An illustrative sketch of such a pipeline follows this entry.)
    Download PDF (2735K)
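
A minimal sketch of the crowd-in-the-loop pipeline this abstract describes: an AI stage scores each image's importance, and only images above a threshold are sent to crowd workers for alt text. Both `score_importance` and `request_crowd_alt_text` are hypothetical stand-ins; the paper's actual model and crowdsourcing platform are not specified here.

```python
# Sketch of a crowd-in-the-loop alt-text pipeline: an AI model scores
# each image's task relevance, and only images above a threshold are
# sent to crowd workers for alt-text writing. The scorer and the
# crowdsourcing call below are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Image:
    url: str
    alt: Optional[str] = None

def score_importance(image: Image) -> float:
    """Hypothetical AI scorer: returns task relevance in [0, 1]."""
    return 0.9 if "product" in image.url else 0.1

def request_crowd_alt_text(image: Image) -> str:
    """Hypothetical crowdsourcing call: a worker writes the alt text."""
    return f"(worker-written description of {image.url})"

def add_alt_texts(images: list, threshold: float = 0.5) -> list:
    # Only important images get crowd-written alt text; the rest stay
    # without alt text so screen-reader users are not flooded.
    for img in images:
        if score_importance(img) >= threshold:
            img.alt = request_crowd_alt_text(img)
    return images

pages = add_alt_texts([Image("shop/product-42.jpg"), Image("shop/banner.jpg")])
print([(p.url, p.alt) for p in pages])
```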
  • Yugo Hayashi
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 263-270
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    This paper explores the factors underlying the estimation of learner self-confidence during explanations to a pedagogical conversational agent in an explanation task. The study focused on how factors such as the learner's task activities and personal characteristics can serve as useful predictors. To explore this point, we used a web-based explanation task called WESPA (Web-based Explanation Support by Pedagogical Agent), run by a pedagogical conversational agent (PCA) for students attending a psychology lecture. 318 participants were asked to give text-based explanations to the agent in a question-and-answer (Q&A) style, clarifying a particular concept that had been taught in a previous lecture in the class. The results show that an increase in the amount of actual task work on the explanations, together with personal characteristics evaluated by AQ scores (such as social skills, attention switching, and imagination), helped predict higher self-confidence. The results show how the learner's task activities and personal characteristics, especially interpersonal interaction skills, are useful for capturing learner self-confidence in an online explanation task. We also discuss how these factors could be used as predictors in future studies to automatically detect learner confidence. (A minimal regression sketch follows this entry.)
    Download PDF (607K)
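
A minimal sketch of the kind of prediction the abstract describes: regress self-confidence on the amount of explanation work and AQ subscale scores. The model (ordinary least squares), the variable names, and the data below are all assumptions, not the authors' WESPA logs.

```python
# Sketch of predicting learner self-confidence from task activity and
# AQ subscale scores. Ordinary least squares is an assumed model; the
# data are random placeholders standing in for 318 participants.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 318
work_amount = rng.poisson(20, n)        # e.g., amount of explanation text produced
aq_social = rng.normal(5, 2, n)         # AQ: social skills subscale
aq_switch = rng.normal(5, 2, n)         # AQ: attention switching subscale
aq_imagination = rng.normal(5, 2, n)    # AQ: imagination subscale

X = np.column_stack([work_amount, aq_social, aq_switch, aq_imagination])
# Synthetic target: confidence rises with task work, falls with AQ social score.
confidence = 0.05 * work_amount - 0.1 * aq_social + rng.normal(0, 0.5, n)

model = LinearRegression().fit(X, confidence)
print(dict(zip(["work", "social", "switch", "imagination"],
               model.coef_.round(3))))
```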
  • - Verifying the data transfer success rate of the app for supporters' interpretation of needs -
    Yoshiya Furukawa, Tomonori Karita, Yoshihiro Yagi, Shuichiro Senba, Ta ...
    Article type: Short Note
    2020, Volume 22, Issue 3, Pages 271-274
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    To develop a support system that interprets the needs of children with severe motor and intellectual disabilities, we developed an app that records and collects supporters' interpretations of those needs together with background information, and we verified its data transfer success rate. At a special school, we observed interactions between supporters and children with severe disabilities and recorded the details and background information whenever the supporters interpreted the children's needs. The verification confirmed that the app's data transfer success rate was high enough to collect learning data for the need estimation system.
    Download PDF (6493K)
  • Tomonori Kubota, Takamichi Isowa, Kohei Ogawa, Hiroshi Ishiguro
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 275-290
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    While previous studies have shown the potential of robots that interact with humans in stores, robots that can work with people in stores and provide services have not yet taken root in society. In this paper, we realized an android robot that can help with the duties of salespersons at a boutique as well as provide mental support by cooperating with them. We considered it important for such robots not only to serve customers but also to support the salespersons. Therefore, we first interviewed salespersons at a boutique to find out what they expect from a robot. The survey showed that they mainly expect three things: 1) reducing the salesperson's workload, 2) providing mental support for the salesperson, and 3) assisting the salesperson in establishing a good relationship with the customer. To address these issues, we implemented an onsite-operated android. Through an 11-day field experiment in a boutique, in which the salespersons used the android, we observed that the android could contribute to the salespersons in some work situations, and we identified further ways in which the android could contribute in other aspects of the work.
    Download PDF (4592K)
  • Kan Arai, Yoshinori Saida, Hiroto Ohnishi, Daisuke Tokushima, Kei Shib ...
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 291-304
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    We propose an HR-tech solution to promote behavioral change for self-driven career development. The purpose of this solution is to enable employees to consider, and act toward, their own career development by themselves. The solution visualizes the skill information held by the groups of employees with similar work contents to which each employee belongs. The work-content groups are created by machine learning methods, Word2Vec and k-means++, using 4,096 employees' text descriptions of their work contents and work results. To evaluate whether this solution promotes behavioral change in individual employees, we showed the visualized words that constitute the work-content groups and skills to 15 employees as experimental participants. The results show that the solution induced behavioral change in more than half of the participants. Participants who felt a strong relationship between their own work contents and the information of the groups they belonged to changed their behavior significantly; among them, the ratio who changed their behavior was 78%. We believe this is a valuable case study showing the effects of machine-learning-based grouping and visualization on behavioral change in a real social setting. (A sketch of the grouping pipeline follows this entry.)
    Download PDF (3038K)
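
The grouping pipeline named in this abstract (Word2Vec embeddings of work-content text, clustered with k-means++) can be illustrated with a short sketch. This is a minimal reconstruction, not the authors' code; the corpus, tokenization, vector size, and number of clusters are assumptions.

```python
# Minimal sketch of the grouping step: embed each employee's
# work-content text with Word2Vec, average the word vectors into a
# document vector, then cluster with k-means++ initialization.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Each entry stands in for one employee's tokenized work-content
# description (the paper used text from 4,096 employees).
docs = [
    ["design", "ui", "prototype", "usability", "test"],
    ["sales", "customer", "negotiation", "contract"],
    ["python", "data", "analysis", "machine", "learning"],
    ["recruiting", "interview", "onboarding", "training"],
]

w2v = Word2Vec(sentences=docs, vector_size=50, min_count=1, seed=0)

def doc_vector(tokens):
    """Average the Word2Vec vectors of a document's tokens."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0)

X = np.array([doc_vector(d) for d in docs])

# k-means with k-means++ initialization groups similar work contents.
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(X)
print(labels)  # work-content group id per employee
```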
  • Tomomi Takahashi, Kazuaki Tanaka, Kenichiro Kobayashi, Natsuki Oka
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 305-316
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    Many Japanese people tend to be embarrassed to talk to agents such as virtual assistants. This problem seems to be caused by the agents' low social presence, that is, the degree to which one perceives human-like properties in an agent. We assumed that the agents' poor emotional expression may impair their humanness. This study first verified that adding facial expressions to flat synthetic speech could convey an agent's emotion even when the agent's speech could be interpreted in two ways, as either a positive or a negative episode. As a result, the human-likeness of the agent tended to improve. The study also found that music can act as an emotional expression for an agent: adding BGM (background music) and SE (sound effects) to a flat synthetic voice conveyed the agent's emotion and made the agent seem more human and easier to talk to. Furthermore, BGM and SE produced these effects even when added to emotional synthetic speech.
    Download PDF (734K)
  • Junichi Ishikiriyama, Kenji Suzuki
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 317-328
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    Visually impaired persons face several burdens when applying makeup because it involves steps that require visual information. Many visually impaired persons are anxious about makeup and tend to avoid it even if they have mastered makeup techniques. We introduce an interface that checks the result of makeup in place of a human assistant. In particular, we focus on alerting the user to unintended makeup in the lip area, which strongly affects the general impression of the face. We first design a novel interaction between a visually impaired person and a mobile platform. Next, we propose a processing method and a system configuration for realizing the proposed interaction. Finally, we discuss the feasibility of our method for daily makeup confirmation through experiments. In this paper, we propose an essential framework to support makeup based on interaction using vibration and audio feedback.
    Download PDF (2897K)
Papers on General Subjects
  • - Modeling Integrative Effects of Object Color and Size -
    Haruka Yoshida, Azusa Furukawa, Teruya Ikegami, Kazuo Furuta
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 329-340
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    This paper describes the development of a visual salience model that quantifies the integrative salience of object color and size. In order to quantify salience and to clarify its effective factors, we carried out an experiment with 16 participants, in which we designed 290 experimental conditions by combining salience conditions of color and size. The participants rated the visual salience of the 290 conditions using a scoring method. The results showed that the interaction of color and size salience was an effective factor in overall salience, in addition to color salience and size salience individually. The salience was then formulated using these factors as an integrative salience model of color and size. Evaluating the prediction accuracy of the proposed model by K-fold cross-validation showed that the estimation error was less than 11%. (A cross-validation sketch follows this entry.)
    Download PDF (2920K)
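
The accuracy check mentioned in this abstract (K-fold cross-validation of a model combining color salience, size salience, and their interaction) can be sketched as below. The linear model form, K = 5, and the data are assumptions; the actual model and features are in the paper.

```python
# Sketch of K-fold cross-validation of a salience model: fit a linear
# model with color salience, size salience, and their interaction, and
# report mean relative error across folds. Data are random placeholders
# standing in for the 290 experimental conditions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
color = rng.uniform(0, 1, 290)                    # color-salience factor
size = rng.uniform(0, 1, 290)                     # size-salience factor
X = np.column_stack([color, size, color * size])  # include interaction term
y = 2.0 * color + 1.5 * size + 0.8 * color * size + rng.normal(0, 0.1, 290)

errors = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train], y[train])
    pred = model.predict(X[test])
    # Relative error: mean absolute error over the mean observed salience.
    errors.append(np.mean(np.abs(pred - y[test])) / np.mean(np.abs(y[test])))

print(f"mean relative error: {np.mean(errors):.1%}")
```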
  • Megumi Enomoto, Michiaki Sekine, Kenji Tanaka, Hiroshi Hasegawa
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 341-350
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    To help elderly drivers avoid overlooking hazards and causing accidents, we studied a means of driver assistance that gives visual alerts on a head-up display. We asked elderly drivers to drive a driving simulator around urban areas, measured their driving behavior when faced with five types of common hazards, and examined the effects of visual alerts in the form of a blinking, light-orange triangle. Elderly drivers often fail to brake in time when faced with a hazard requiring a quick response, such as a pedestrian jumping out in front of them, but the visual alerts shortened their response time. Meanwhile, for hazards where a collision is unlikely but careful negotiation is required, such as pedestrians around crosswalks, about half of the subjects failed to slow down in time, but some participants improved their driving behavior in response to the visual alerts. These findings suggest that visual alerts contribute to safe driving by elderly drivers by improving their hazard perception and leading them to drive more carefully.
    Download PDF (3623K)
  • Takashi Nishiyama, Yujiro Kose, Kohei Yamanishi, Yoshikuni Sato, Saday ...
    Article type: Original Paper
    2020, Volume 22, Issue 3, Pages 351-360
    Published: August 25, 2020
    Released on J-STAGE: August 25, 2020
    JOURNAL FREE ACCESS
    Regarding the decline in cognitive function associated with aging in elderly individuals, we present a new feature value that captures changes in the amount of activity and evaluate the accuracy of judgments based on it. Specifically, we propose a feature quantity, the "1/T index", focusing on the duration T of activity, and determine the accuracy of detecting cognitive decline in individuals: the difference between an individual's 1/T index value 4 years ago and the same index value 4 years later is compared with a preset difference threshold. The target data are activity data and cognitive function evaluation values (MMSE scores) obtained from elderly individuals before and after a 4-year interval. A group that was initially cognitively healthy (MMSE of 27 or higher), with an average age of 75, was divided into two: a cognitive-decline group that became cognitively unhealthy after 4 years (MMSE of 26 or lower), and a non-decline group that remained cognitively healthy after 4 years. We then calculated the difference between the 1/T index values of each group before and after the 4 years. ROC analysis yielded an AUC of 0.67. In addition, the most appropriate difference threshold, 0.006, was obtained, and the resulting judgment of cognitive decline achieved a sensitivity of 64.3% and a specificity of 75.0%. (A sketch of this threshold-and-ROC procedure follows this entry.)
    Download PDF (2929K)
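
The judgment procedure in this abstract (compare the 4-year change in the 1/T index against a difference threshold, then evaluate with ROC analysis) can be sketched as follows. The data are synthetic placeholders, and the threshold selection by Youden's J is an assumed criterion, not necessarily the paper's.

```python
# Sketch of the threshold-based judgment and ROC evaluation: the score
# is the change in the 1/T index over 4 years, and a difference
# threshold classifies decline vs. non-decline. Synthetic data only.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n = 60
declined = rng.integers(0, 2, n)  # 1 = MMSE dropped to <= 26 after 4 years
# Change in the 1/T index over 4 years, larger on average for decliners.
delta = rng.normal(0.004, 0.004, n) + 0.005 * declined

auc = roc_auc_score(declined, delta)
fpr, tpr, thresholds = roc_curve(declined, delta)

# Pick the threshold maximizing Youden's J = sensitivity - (1 - specificity).
best = np.argmax(tpr - fpr)
print(f"AUC={auc:.2f}, threshold={thresholds[best]:.4f}, "
      f"sensitivity={tpr[best]:.1%}, specificity={1 - fpr[best]:.1%}")
```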