The Transactions of Human Interface Society
Online ISSN : 2186-8271
Print ISSN : 1344-7262
ISSN-L : 1344-7262
Volume 17, Issue 1
Displaying 1-7 of 7 articles from this issue
Papers on Special Issue Subject "IIKAGEN-na-interface (well-moderated interface)"
  • Junko Itou, Yuichi Motojin, Jun Munemori
    Article type: Original Paper
    2015, Volume 17, Issue 1, Pages 1-14
    Published: February 25, 2015
    Released on J-STAGE: July 01, 2019
    JOURNAL FREE ACCESS

    In this research, we propose a visual interface for chat systems that adopts the Japanese manga style. Characters in graphical chat systems can convey rich visual information, but manually selecting an emotion for every message is burdensome for users. We therefore focus on the Japanese manga style and attempt to reduce this burden without sacrificing visual expressiveness. The results of a comparison experiment showed that the proposed system was rated more highly than the comparison system in terms of readability.
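
    As a rough illustration of how a system might spare users from choosing an emotion for every message, the sketch below picks a manga-style expression from the message text itself. The keyword table and expression names are assumptions for illustration only, not the mechanism described in the paper.

    # Illustrative sketch: pick a manga-style expression from message text
    # so the user need not select an emotion for every message by hand.
    # The keyword table and expression names are hypothetical.

    EXPRESSION_KEYWORDS = {
        "joy":      ["haha", "lol", ":)", "great", "yay"],
        "sadness":  [":(", "sorry", "sad", "miss"],
        "surprise": ["!?", "wow", "really"],
        "anger":    ["angry", "annoyed", ">:("],
    }

    def select_expression(message: str, default: str = "neutral") -> str:
        """Return the first expression whose keyword appears in the message."""
        text = message.lower()
        for expression, keywords in EXPRESSION_KEYWORDS.items():
            if any(k in text for k in keywords):
                return expression
        return default

    print(select_expression("Wow, really!?"))   # -> "surprise"
    print(select_expression("See you at 3."))   # -> "neutral"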

  • Shigeo Yoshida, Takuji Narumi, Sho Sakurai, Tomihiro Tanikawa, Michita ...
    Article type: Original Paper
    2015, Volume 17, Issue 1, Pages 15-26
    Published: February 25, 2015
    Released on J-STAGE: July 01, 2019
    JOURNAL FREE ACCESS

    Our research goal is to manipulate human emotions and to virtually evoke various emotional experiences. Directly manipulating human emotions is difficult within the conventional approach to human-computer interaction. To overcome this difficulty, we propose a new method for virtually evoking emotions that integrates knowledge from cognitive science and virtual reality. Psychological studies have revealed that recognizing changes in one's bodily responses unconsciously evokes an emotion. We therefore hypothesized that emotion could be manipulated by having people recognize pseudo-generated bodily reactions as changes in their own bodily reactions. In this paper, we focus on the effect of facial expressions on evoked emotion. We developed a mirror-like system that manipulates emotional states by feeding back deformed facial expressions in real time. User studies clarified that feedback of deformed facial expressions can manipulate emotional experiences: not only positive and negative affect, but also preference decisions. Moreover, users' actual facial expressions changed to match the deformed expressions fed back to them.
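
    The feedback loop the abstract describes, showing the user a slightly deformed version of their own face, can be pictured with a toy image warp. The sketch below lifts pixels near given mouth-corner coordinates to synthesize a faint smile; a real system would obtain those coordinates from a face-landmark tracker on each camera frame, and all names and parameters here are illustrative, not the authors' implementation.

    # Illustrative sketch: warp the mouth corners of a face image upward to
    # synthesize a slight smile before showing it back to the user. Landmark
    # detection is stubbed out; coordinates and parameters are hypothetical.
    import numpy as np

    def deform_smile(image: np.ndarray, corners, lift_px: float = 6.0,
                     radius: float = 25.0) -> np.ndarray:
        """Shift pixels near each mouth corner upward with Gaussian falloff."""
        h, w = image.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        dy = np.zeros((h, w), dtype=np.float32)
        for (cx, cy) in corners:
            weight = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * radius ** 2))
            dy += lift_px * weight                      # pull these pixels upward
        src_y = np.clip(ys + dy, 0, h - 1).astype(int)  # sample from just below
        return image[src_y, xs.astype(int)]

    # Toy usage on a synthetic "face"; corners would come from a landmark tracker.
    face = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
    smiling = deform_smile(face, corners=[(55, 80), (105, 80)])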

  • Kazuki Takashima, Takuya Sato, Tokuo Yamaguchi, Shigeru Tatsuzawa, Mas ...
    Article type: Original Paper
    2015, Volume 17, Issue 1, Pages 27-38
    Published: February 25, 2015
    Released on J-STAGE: July 01, 2019
    JOURNAL FREE ACCESS

    This paper reports an empirical evaluation of interactive and flexible dynamic display methods for large image collections. We have developed two display methods based on a dynamic image layout algorithm: D-Flip, designed as an enjoyable image browser, and SWINGNAGE, designed for digital signage. Both methods accept user input for selecting images and gathering related images while keeping the image collection highly visible on the screen. We compared them with two baseline display methods, a slideshow and a thumbnail grid, in a controlled experiment using subjective and objective evaluation indexes. The results show that D-Flip and SWINGNAGE provide reasonable visibility for large image collections and leave enjoyable and powerful impressions. We discuss future image browsers and digital signage systems based on these findings.
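
    A dynamic layout that keeps many images visible while gathering related ones can be sketched as a simple force simulation: thumbnails repel each other to avoid overlap, and images related to a selection are attracted toward it. This is a generic force-directed scheme for illustration, not the authors' D-Flip or SWINGNAGE algorithm; all parameters are illustrative.

    # Illustrative sketch: force-directed layout of image thumbnails.
    import numpy as np

    def step_layout(pos, related_mask, anchor, dt=0.05,
                    repel=400.0, attract=2.0):
        """One simulation step: pos is (n, 2); related thumbnails drift to anchor."""
        n = len(pos)
        force = np.zeros_like(pos)
        for i in range(n):                      # pairwise repulsion keeps images apart
            d = pos[i] - pos
            dist2 = (d ** 2).sum(axis=1) + 1e-6
            force[i] += (repel * d / dist2[:, None]).sum(axis=0)
        force[related_mask] += attract * (anchor - pos[related_mask])
        return pos + dt * force

    pos = np.random.rand(30, 2) * 800           # 30 thumbnails on an 800x800 canvas
    related = np.zeros(30, dtype=bool)
    related[:5] = True                          # five images related to the selection
    for _ in range(100):                        # iterate toward a stable layout
        pos = step_layout(pos, related, anchor=np.array([400.0, 400.0]))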

  • Naoya Isoyama, Tsutomu Terada, Masahiko Tsukamoto
    Article type: Original Paper
    2015, Volume 17, Issue 1, Pages 39-52
    Published: February 25, 2015
    Released on J-STAGE: July 01, 2019
    JOURNAL FREE ACCESS

    With the development of wearable computing, environments in which users can view information at all times are becoming realistic, and researchers have proposed various methods that recognize a user's action or state and then present information appropriate to the situation. However, because the user does not always need particular information, the presentation device has idle time, during which the user may see information that carries no specific meaning. In this research, we therefore propose a system that unconsciously leads the user toward specific information by presenting visual information related to the user's interests. Our system exploits the priming effect, whereby the user is influenced by the content the system presents. Evaluation results confirmed that the related visual information influences users' acquisition of the target information. Moreover, we implemented a prototype of the system that presents information related to the user's interests.
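
    The core scheduling idea, using a display's idle time for priming stimuli rather than arbitrary filler, can be sketched as below. The item pools and the fixed priming ratio are hypothetical placeholders, not the authors' design.

    # Illustrative sketch: during idle time, prefer items related to the
    # user's interests over neutral filler. All names are hypothetical.
    import random

    def pick_idle_content(interest_items, filler_items, priming_ratio=0.7):
        """Choose an interest-related item with a fixed probability."""
        if interest_items and random.random() < priming_ratio:
            return random.choice(interest_items)   # priming stimulus
        return random.choice(filler_items)         # neutral filler

    interests = ["trail running news", "new running shoes", "marathon results"]
    filler = ["weather", "clock", "stock ticker"]
    print(pick_idle_content(interests, filler))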

  • Musashi Nakajima, Hidekazu Saegusa, Yuto Ozaki, Yoshihiro Kanno
    Article type: Original Paper
    2015, Volume 17, Issue 1, Pages 53-62
    Published: February 25, 2015
    Released on J-STAGE: July 01, 2019
    JOURNAL FREE ACCESS

    The design principles of tangible user interfaces have been applied to musical interfaces to make computers more accessible for musical expression. In this paper, we propose DropNotes, a tangible tabletop user interface that enables intuitive sound-source sampling and audio processing. DropNotes comprises colored water, bottles, a dropper, a funnel, and a glass table. Users record a sound source into the colored water in a bottle and place its droplets onto the glass table to play back the recorded sound. The composition of the droplets determines the pitch, volume, and sequence of the assigned sounds. Users accomplish the complicated task of recording and arranging music by manipulating familiar tangible artifacts. The paper also reports the results of a usability test that compared DropNotes to a prevalent digital audio workstation in order to validate the concept. The results indicate that DropNotes allows users to sample a sound source more readily and that even novices can intuitively compose music.
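
    A droplet-to-sound mapping of the kind the abstract implies can be sketched as below. The specific attribute choices (color to pitch, size to volume, horizontal position to sequence) are assumptions for illustration, not DropNotes' published mapping.

    # Illustrative sketch: map measured droplet properties to sound parameters.
    from dataclasses import dataclass

    @dataclass
    class Droplet:
        color_hue: float   # 0.0-1.0, from the colored water
        size_mm: float     # droplet diameter on the glass table
        x_mm: float        # horizontal position along the table

    def droplet_to_note(d: Droplet):
        semitone = round(d.color_hue * 24)      # two octaves of pitch range
        volume = min(1.0, d.size_mm / 10.0)     # bigger droplet, louder sound
        return {"semitone": semitone, "volume": volume}

    drops = [Droplet(0.2, 6.0, 120.0), Droplet(0.8, 3.0, 40.0)]
    sequence = [droplet_to_note(d) for d in sorted(drops, key=lambda d: d.x_mm)]
    print(sequence)   # left-to-right position gives the playback order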

Papers on General Subjects
  • Kenji Isatake, Satoshi Hukumori, Akio Gohuku, Kenji Sato
    Article type: Original Paper
    2015, Volume 17, Issue 1, Pages 63-72
    Published: February 25, 2015
    Released on J-STAGE: July 01, 2019
    JOURNAL FREE ACCESS

    A VR-based mirror visual feedback (VR/MVF) therapy system for chronic pain was previously developed and has been used in a hospital. However, that system is expensive and uses contact-type sensor devices, which may themselves provoke pain during use. A therapy system that patients can use by themselves at home is needed, both because patients require frequent treatment and because large amounts of data are needed to advance the treatment method. This study develops a simple VR/MVF therapy system based on design requirements obtained (1) from the requests of patients and medical staff and from the viewpoints of (2) the human interface, (3) body-motion measurement, and (4) pain reduction. The developed system measures upper-limb motion with a Kinect and triggers the grasping motion of the artificial hand on the painful side by a mouse click with the hand on the intact side. A smoothing filter applied to the measured arm-motion data is implemented to provide high responsiveness during rapid arm motion and little fluctuation of the artificial arm during slow or no motion. The applicability of the developed system is confirmed through several experiments and trial use by a patient.
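
    A filter with the property the abstract asks for, little smoothing when the arm moves fast and strong smoothing when it is slow or still, can be sketched as speed-adaptive exponential smoothing, in the spirit of filters such as the 1-euro filter. This is not the authors' exact implementation; all parameters are illustrative.

    # Illustrative sketch: exponential smoothing whose strength adapts to speed.
    class AdaptiveSmoother:
        def __init__(self, base_alpha=0.05, gain=0.5):
            self.base_alpha = base_alpha   # smoothing weight when nearly still
            self.gain = gain               # how quickly speed raises the weight
            self.prev = None

        def update(self, x: float) -> float:
            if self.prev is None:
                self.prev = x
                return x
            speed = abs(x - self.prev)                       # per-sample velocity
            alpha = min(1.0, self.base_alpha + self.gain * speed)
            self.prev = self.prev + alpha * (x - self.prev)  # exponential smoothing
            return self.prev

    smoother = AdaptiveSmoother()
    for sample in [0.0, 0.01, 0.02, 1.5, 3.0, 3.01]:         # slow, then a fast reach
        print(round(smoother.update(sample), 3))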

  • Kazuya Murao, Tsutomu Terada
    Article type: Original Paper
    2015, Volume 17, Issue 1, Pages 73-84
    Published: February 25, 2015
    Released on J-STAGE: July 01, 2019
    JOURNAL FREE ACCESS

    In the area of activity recognition with mobile sensors, many context-aware systems using accelerometers have been proposed. In particular, gesture recognition technologies in mobile phones and video game controllers enable easy and intuitive operations such as scrolling a browser or drawing objects. Gesture input offers richer expressive power than conventional interfaces, but gesture motions are difficult to share with other people in writing or verbally. When a commercial product using gestures is released, its developers prepare an instruction manual and tutorial that express the gestures in text, figures, or video. An end-user then reads the instructions, imagines the gesture, and performs it. In this paper, we evaluate how users' gestures change according to the type of instruction. We collected acceleration data for 10 kinds of gestures instructed through three media (text, figures, and video), totalling 44 instruction patterns, from 13 test subjects, for a total of 2,630 data samples. The evaluation showed that gestures were performed correctly in increasing order of text → figure → video instructions. Detailed text instructions were equivalent to figure instructions. However, some words describing gestures disrupted users' gestures because they could call multiple images to the user's mind.
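
    Judging how closely a performed gesture matches a reference trace is commonly done with dynamic time warping (DTW) over acceleration data; the paper does not state its comparison method, so the sketch below uses DTW only as a plausible stand-in, with synthetic data.

    # Illustrative sketch: DTW distance between two (time, 3) acceleration traces.
    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Dynamic time warping cost between two acceleration traces."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return float(cost[n, m])

    template = np.cumsum(np.random.randn(50, 3), axis=0)    # reference gesture
    attempt = template[::2] + 0.1 * np.random.randn(25, 3)  # faster, noisy attempt
    print(dtw_distance(template, attempt))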
