Although many studies have addressed tactile perceptual space, very few have paid attention to individual differences. In this study, we propose a system that visualizes individual differences in tactile perceptual space using a map of onomatopoeia. The system visualizes on the map the relationships between onomatopoeic words that can express tactile sensations and basic tactile materials. In addition, it allows users to move onomatopoeic words to locations on the map that they feel are more appropriate. As a result, differences in the configuration of the words on the map reveal individual differences in how tactile sensations are categorized. This system can be used to visualize trends in tactile perception differences, for example, between young and old people, or between men and women.
In recent years, with the increase in the number of elderly people living alone, the deterioration of everyday life due to declining motor functions has been recognized as an important issue. In order to live an independent life, dexterous motions using the fingertips are needed for activities such as changing clothes and having meals. In this study, we focused on the pinch motion of the index finger and thumb, which are the fingers most used in actions like picking up an object. We measured and analyzed the finger posture and the endpoint on the palmar side of the distal phalanx. As a result, we found that, compared to younger people, the endpoints of elderly people were located 4 mm closer to the fingertips and their control of thumb opposition was simplified.
Typing on a keyboard is a very common activity but is difficult for people with hand tremor. Because hand tremor, characterized by involuntary hand shaking, affects precise hand movements and finger control, and because most keyboards have small, closely arranged keys, users with tremor often fail to type the desired key. In this paper, we propose a typing assistance system that supports hand tremor sufferers in typing correctly on an ordinary physical keyboard despite tremor movements. The system includes two novel techniques, finger stabilization and virtual key remapping, which estimate the user's intended key from involuntary finger shaking and ensure that the correct key is input even when the finger actually touches the wrong key. Experimental results showed that typing with the proposed system significantly reduced the input error rate and the time spent correcting erroneous input, compared with typing on the keyboard directly.
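The abstract does not give implementation details, but the idea of stabilization plus key remapping can be sketched roughly as follows: smooth recent fingertip positions to cancel involuntary shaking, then map the result to the nearest key center. The key layout, coordinates, and smoothing scheme below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of virtual key remapping: the pressed key is replaced
# by the key whose center is closest to a tremor-smoothed fingertip position.
# Key coordinates and the averaging-based stabilization are assumptions.

KEY_POSITIONS = {  # (x, y) key centers on a simplified keyboard grid
    "f": (3.0, 1.0), "g": (4.0, 1.0), "h": (5.0, 1.0),
    "t": (3.5, 0.0), "y": (4.5, 0.0),
    "v": (3.5, 2.0), "b": (4.5, 2.0),
}

def stabilize(samples):
    """Average recent fingertip samples to cancel involuntary shaking."""
    xs, ys = zip(*samples)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def remap(samples):
    """Return the key whose center is nearest the stabilized position."""
    x, y = stabilize(samples)
    return min(KEY_POSITIONS,
               key=lambda k: (KEY_POSITIONS[k][0] - x) ** 2 +
                             (KEY_POSITIONS[k][1] - y) ** 2)

# A tremor trace oscillating around 'g' resolves to 'g' even though the
# final touch sample lands closer to 'h'.
trace = [(4.2, 1.1), (3.8, 0.9), (4.1, 1.0), (4.9, 1.2)]
print(remap(trace))  # → g
```

In a real system the smoothing window and key geometry would come from the physical keyboard and from per-user calibration of the tremor amplitude.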
Tongue and mouth-related muscles are essential to maintaining a person's Quality of Life (QoL). However, those muscles often decline due to aging and/or certain diseases such as Down syndrome, which leads to a variety of symptoms such as speech and swallowing disorders. To address this issue, many kinds of tongue and mouth muscle training methods have been proposed. However, most of them are monotonous and suffer from poor continuity. In this research, we develop the SITA (Simple Interface for Tongue motion Acquisition) system, which enables measurement of tongue motion without wearing any device. This measurement system also makes it possible to develop interactive training games for the tongue and mouth muscles. In this paper, we describe the details of the SITA system and a tongue muscle training game.
Oral muscle weakness adversely affects daily life activities such as swallowing and speech. However, the conventional training methods used to ease these problems are monotonous and therefore lead to discontinued training. This research adds a new function, mouth shape recognition, to SITA (a Simple Interface for Tongue motion Acquisition) and uses it to develop the “Squachu” application. Squachu is a sports game played with the mouth, aimed not only at seniors but at everyone regardless of gender and age. We conducted an experiment in which four seniors played Squachu for approximately one week. As a result, most participants' RSST and oral diadochokinesis scores tended to improve. It also appeared that participants had a positive attitude toward playing Squachu. These results suggest that Squachu can serve as one form of training for strengthening the oral muscles. This paper describes the system configuration and presents the evaluation results of Squachu's training effects on seniors.
Crowdsourcing for social contribution (e.g., supporting people with disabilities or public libraries) has gained popularity as a new form of social participation. It is effective for people who have trouble working full-time, such as the elderly. However, it is a challenge to keep workers motivated and to maintain the worker community. We propose a two-dimensional worker motivation model for socially conscious crowdsourcing and develop a crowdsourcing system based on that model. A total of 537 participants completed more than 17 million micro-tasks over two years. We analyze workers' motivation based on their activities and questionnaire-based surveys, and we divide workers into groups to find out which gamification elements or community functions efficiently motivate each worker group.
In Japan's aging society, many elderly people are still active enough to work and could become an essential part of the labor force. Therefore, a job matching method that can make use of their unique abilities is required. However, current job matching relies on each recruiter's tacit knowledge, and recruiters can assign workers only from the limited number of candidates they can handle. In this paper, we propose an interactive job matching system that reflects the recruiter's tacit knowledge and helps in searching for diverse elderly workers for each work profile. The results indicate that the interactions of the proposed system can improve recruiters' search efficiency and matching diversity by eliciting their tacit knowledge during the search process. The proposed interactive job matching system becomes even more effective when the job offer is not clearly described in the text.
In a hyper-aged society, health promotion for middle-aged and older people—especially women—has become a serious social issue. Health promotion is highly important from the viewpoint of reducing healthcare costs and improving QOL. Among a variety of health promotion approaches, we focused on health promotion through exercise. Resistance training is regarded as one of the most effective health-promoting exercises. However, resistance training will not work effectively if it is done with the wrong posture or speed. In this paper, we propose a VR system that helps users perform exercises correctly without relying on a training instructor. Specifically, we developed a visualization system that superimposes the postures of the training instructor and the trainee in a 3D virtual environment and helps the trainee learn resistance training in the right manner.
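The abstract does not specify how posture correctness is assessed, but one common way such a superimposed comparison can be quantified is by the angular difference at each joint between the instructor's and trainee's skeletons. The joint coordinates below are invented 2D examples, not data from the paper.

```python
import math

# Hypothetical sketch: comparing instructor and trainee postures by the
# angle formed at a joint. Coordinates here are made-up 2D points
# (hip, knee, ankle); a real system would use tracked 3D joints.
def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

instructor_knee = joint_angle((0, 0), (0, 1), (1, 1))    # 90 degrees
trainee_knee    = joint_angle((0, 0), (0, 1), (1, 1.5))  # over-extended
error = abs(instructor_knee - trainee_knee)
print(round(error, 1))  # angular deviation to feed back to the trainee
```

A per-joint error like this could drive the visual feedback (e.g., highlighting the joint whose deviation exceeds a threshold) while the two skeletons are superimposed.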
Assistive facilities such as slopes and textured paving blocks are installed to help elderly, visually impaired, and/or physically impaired people who have difficulty moving around outside. Although such accessibility conditions affect the routes these people take to their destinations, up-to-date accessibility information is difficult to obtain quickly because it is only disclosed locally. It is necessary to develop a comprehensive system that appropriately acquires and organizes scattered accessibility information and then presents it intuitively. Thus, our final goal is to develop a social platform that can obtain and present information according to users' conditions and situations, including their disabilities and locations, and that can share information provided by users. In this paper in particular, we analyzed the characteristics of shared information obtained by assessment and by crowdsourcing, with the aim of improving the quality and quantity of accessibility information. The results indicated that the character counts of entries varied across accessibility conditions differently in the assessment and crowdsourcing settings.
A conventional electrolarynx supplies sound into the vocal tract in place of the vocal cords when a button is pushed. However, many users have requested a hands-free electrolarynx that enables intonation control. To develop one, two problems need to be solved: a gap forming between the vibrator and the neck during use, and insufficient volume caused by insufficient vibrator pressure. We therefore developed a speech amplifier for the voice uttered with an electrolarynx, and produced a prototype of a total system that unifies the speech amplifier, an intonation control sensor, and the vibrator. This paper describes the development process and the elements required for developing a hands-free electrolarynx from the viewpoints of dynamics and usability.
Although computer games have diversified in recent years, considerable effort and ingenuity are needed to produce games that persons with total visual impairment can enjoy. Some games for visually impaired persons have been developed; however, games that use only auditory information present challenges for sighted persons. Unfortunately, few games exist that sighted and visually impaired persons can enjoy together. It is difficult for visually impaired persons to play the same game as sighted persons and for the two groups to share a common topic. In this paper, we aim to develop an accessible action role-playing game (RPG) that both sighted and visually impaired persons can play using their dominant senses, including vision, hearing, and touch. To develop the game, we also built a field creation tool for game developers with visual impairments and provided an integrated game development environment for them. In this paper, we describe the development of the accessible action RPG and our game development environment, along with our reflections.
In this study, a novel tactile device was designed to assist the “seeing”, “hearing”, and “speaking” functions of blind, deaf, and deafblind people using a 2-D tactile display and a tactile matrix sensor. In the interface, directions are sensed by an electronic compass inside the mobile phone, which transmits signals to the tactile display consisting of 32 piezoelectric devices. In addition, environmental sounds or speech can be detected by a microphone attached to the mobile phone, and the time-spectral patterns of those sounds can be represented on the tactile display. Further, tactile information obtained by touching and tracing the tactile matrix sensor can be displayed on a receiver's fingertip through the mobile phone. These functions are useful for users with sensory disabilities such as blindness, deafness, and deafblindness, and enable them to communicate with people without such disabilities.
In order to minimize national expenditure dedicated to providing support to the elderly including social security and medical care, it is necessary to reduce the cost of treatment. Current prophylactic approaches mainly include training programs tailored towards seniors, who may be assisted by caregivers, for wellness maintenance and enhancement. However, these approaches are mainly administered by volunteers, who are often overburdened because of labor shortages. Thus, it is necessary to design and implement a system that enables seniors to maintain and improve their health by themselves. In this paper, we propose and test a smartphone-based gait measurement application. Our results indicate that the mobile application can help motivate seniors to walk more regularly and improve their walking ability. Moreover, we found in our experiments that since our application helped improve our senior participants' physical fitness, some of them became interested in participating in social activities and using new technologies as a consequence.
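The abstract does not describe how the gait measurement works internally; a common smartphone approach, sketched below purely as an illustration, is to count steps by peak detection on the accelerometer magnitude signal. The threshold and minimum peak spacing are invented values, not the application's parameters.

```python
import math

# Illustrative step-counting sketch (assumed method, not the paper's):
# detect peaks in the acceleration magnitude that exceed a threshold,
# enforcing a minimum gap between peaks to avoid double counting.
def count_steps(samples, threshold=11.5, min_gap=3):
    """samples: list of (ax, ay, az) in m/s^2; returns estimated step count."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    steps, last_peak = 0, -min_gap
    for i in range(1, len(mags) - 1):
        is_peak = mags[i] > mags[i - 1] and mags[i] >= mags[i + 1]
        if is_peak and mags[i] > threshold and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps

# Synthetic trace: resting near gravity (9.8 m/s^2) with two impact spikes.
trace = ([(0, 0, 9.8)] * 3 + [(0, 0, 13.0)] +
         [(0, 0, 9.8)] * 4 + [(0, 0, 13.0)] + [(0, 0, 9.8)] * 3)
print(count_steps(trace))  # → 2
```

Metrics beyond step count (cadence, regularity of inter-step intervals) can be derived from the same peak positions, which is the kind of walking-ability feedback the abstract describes.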
Listening to music with mobile devices is now part of our daily life. With the aim of generating vibration-based feedback that enriches the music listening experience on mobile devices, we applied a frequency shifting method previously proposed in the literature for mixer manipulation and for studying the cross-modal relationship between tactile and auditory stimuli. Experimental results showed that the proposed method significantly increased listeners' evaluations of sounds consisting of high-frequency components.
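The abstract does not detail the shifting method, but one simple way to realize frequency shifting, sketched below under that assumption, is to translate the spectrum downward by a fixed offset so that high-frequency musical content lands in the vibrotactile-sensitive range (roughly below a few hundred Hz). The sample rate, offset, and test tone are illustrative values.

```python
import numpy as np

# Minimal frequency-shifting sketch (an assumed realization, not the
# paper's exact implementation): move every FFT bin down by a fixed
# offset so high-frequency content becomes a low-frequency vibration.
def frequency_shift(signal, fs, shift_hz):
    spectrum = np.fft.rfft(signal)
    bins = int(round(shift_hz * len(signal) / fs))
    shifted = np.zeros_like(spectrum)
    shifted[:len(spectrum) - bins] = spectrum[bins:]  # shift down by `bins`
    return np.fft.irfft(shifted, n=len(signal))

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)          # 1 kHz tone, above tactile range
vib = frequency_shift(tone, fs, 800)         # shifted down to ~200 Hz
peak = int(np.argmax(np.abs(np.fft.rfft(vib))))  # dominant bin of the output
print(peak)  # → 200
```

Unlike a low-pass filter, which would simply discard high-frequency content, this translation preserves the temporal envelope of the high bands as felt vibration, which matches the reported benefit for sounds with high-frequency components.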
Earthquakes can cause enormous human casualties, and it is crucial to reduce their risks through earthquake countermeasures. One important way to promote such countermeasures is to arouse a fear of earthquakes. Immersive earthquake experience systems based on virtual reality (VR) technology have been developed for this purpose, but their contents must be created manually by artists, which limits their practicality. In this study, a new earthquake experience system is proposed in which indoor environments are 3D-scanned and virtual environments are created automatically. The proposed system can be expected to reduce the cost of VR-based earthquake experience systems while retaining their effectiveness in evoking fear. To examine whether the system is actually capable of evoking fear of earthquakes, a subjective evaluation was conducted. The results showed that the perceived seismic intensity scales were consistently lower than the simulated intensity scales, yet the system could still evoke fear of earthquakes.
A great number of driving simulators with visual presentation have been developed, but little is known about the perception of a topographic surface induced by visual and vestibular stimuli when a user drives over a bump or hole. In this paper, we conducted a user study to assess how congruence or incongruence between visual and vestibular shape cues influences the perception of a topographic surface. Experimental results show that the vestibular shape cue contributed more to shape perception than the visual one. A linear regression analysis showed that performance with visual-unimodal and vestibular-unimodal cues could account for performance with visuo-vestibular multimodal cues.
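The regression described above can be sketched as fitting multimodal performance as a weighted sum of the two unimodal performances plus an intercept. The data values below are invented for illustration; only the analysis shape follows the abstract.

```python
import numpy as np

# Hedged sketch of the linear regression analysis: predicting multimodal
# (visuo-vestibular) performance from the two unimodal performances.
# All scores here are fabricated example values.
visual     = np.array([0.55, 0.60, 0.52, 0.70, 0.65])  # visual-only scores
vestibular = np.array([0.75, 0.80, 0.70, 0.85, 0.78])  # vestibular-only scores
multimodal = np.array([0.78, 0.84, 0.73, 0.92, 0.83])  # combined-cue scores

X = np.column_stack([visual, vestibular, np.ones_like(visual)])
coef, *_ = np.linalg.lstsq(X, multimodal, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((multimodal - pred) ** 2) / np.sum(
    (multimodal - multimodal.mean()) ** 2)
print(coef, round(float(r2), 3))
```

In such a fit, a larger weight on the vestibular regressor would mirror the finding that the vestibular cue contributes more to shape perception than the visual one.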
In this paper, we investigated the stability of successive omnidirectional appearance manipulation by multiple projector-camera systems without synchronization between them. This type of system comprises several surrounding projector-camera units, where each unit independently projects illumination onto a different aspect of a target object based on feedback from its camera. Thus, the system can facilitate appearance manipulation from any viewpoint in the surrounding area. An advantage of this system is that it does not require information sharing or a geometric model. However, this approach is problematic because the stability of the total control system cannot be guaranteed even if the feedback loop of each projector-camera unit is stable. Therefore, we simulated the feedback behavior of the projector-camera systems to evaluate their stability. In hardware experiments, we confirmed the stability of omnidirectional appearance manipulation using two projector-camera units under an interference condition. The results showed that the object's appearance could be manipulated over approximately 296 degrees of the target object's circumference. Furthermore, we covered the object's whole circumference by using a plate mirror.
This paper proposes a method to suppress view-dependent deterioration of image quality in integral volumetric imaging displays. Most volumetric displays present depth information through layered image planes. An observer sees a correct image directly in front of the display, but the pixels shift relative to one another when the display is viewed from an oblique angle. This pixel shift gives rise to band-shaped noise composed of bright and dark parts. The authors suppress this noise with a non-negative edge filter, which enables reproduction of the original image using addition only.
This paper reports the design of a new haptic feedback technique for desktop applications that creates a sensation of impact through modulated transient visual and tactile feedback. A transient vibration or impulse tactile stimulus was presented when the mouse cursor made contact with a virtual object on the screen, while the visual motion of the cursor was also modulated. Two experiments were performed to compare the effects of stimulus combinations and to find the effective time lag between visual and tactile stimuli for generating a clear sensation of impact. The evaluations showed that a sensation of impact was successfully induced by tactile stimuli of either a single pulse or a damped oscillation, while a strong sensation of vibration was not, and that the impact sensation was induced when tactile stimuli were presented 30-120 ms after the cursor made contact.
This paper presents smart eyewear that uses embedded photo-reflective sensors and machine learning to recognize the wearer's facial expressions in daily life. We leverage the skin deformation that occurs when wearers change their facial expressions. With small photo-reflective sensors, we measure the proximity between the skin surface of the face and the eyewear frame, into which 17 sensors are integrated. A Support Vector Machine (SVM) algorithm was applied to the sensor data. The sensors can cover various facial muscle movements and can be integrated into everyday glasses.
The main contributions of our work are as follows. (1) The eyewear recognizes eight facial expressions (92.8% accuracy for one-time use and 78.1% when re-worn on a different day). (2) It is designed and implemented with social acceptability in mind: the device looks like normal eyewear, so users can wear it anytime, anywhere. (3) Initial field trials in daily life were undertaken.
Our work is one of the first attempts to detect and evaluate a variety of facial expressions in the form of an unobtrusive wearable device.
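The sensing-to-classification pipeline described above can be sketched as an SVM over 17-dimensional proximity vectors. The synthetic data, cluster structure, and three-expression label set below are fabricated for illustration; only the use of an SVM on 17 sensor readings follows the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative sketch (synthetic data, not the paper's dataset): classify
# facial expressions from 17 photo-reflective proximity readings.
rng = np.random.default_rng(0)
EXPRESSIONS = ["neutral", "smile", "frown"]

def synth(label, n=30):
    """Fake skin-to-frame distance vectors clustered per expression."""
    base = np.full(17, 10.0 * (EXPRESSIONS.index(label) + 1))
    return base + rng.normal(0, 1.0, size=(n, 17))

X = np.vstack([synth(e) for e in EXPRESSIONS])
y = np.repeat(EXPRESSIONS, 30)

clf = SVC(kernel="rbf").fit(X, y)       # one feature per sensor
sample = np.full((1, 17), 20.0)         # reading near the "smile" cluster
print(clf.predict(sample)[0])           # → smile
```

In practice the day-to-day accuracy drop reported in the contributions (92.8% vs. 78.1%) suggests sensor readings drift between wearings, so a deployed classifier would likely need per-session recalibration or training data spanning multiple days.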