Galvanic Tongue Stimulation (GTS) is a technique that can virtually induce an electric or metallic taste and inhibit or enhance tastes induced by aqueous solutions. The technique is expected to be used in diet-support devices. However, conventional GTS requires electrodes to be attached inside the mouth, which makes it uncomfortable to use for this purpose. We therefore invented Galvanic Jaw Stimulation (GJS), which induces and modulates taste without electrodes in the mouth. In this paper, we examine whether GJS can induce a virtual taste and modulate the salty taste induced by an NaCl solution.
This work is the first report of a novel method that induces taste in the throat by electrical stimulation. In this method, electrodes are attached to the inferior part of the jaw and the back of the neck. We demonstrate that this stimulation induces an electric or metallic taste in the throat. The method could be used to modify the eating experience.
Previous studies have reported two effects of cathodal current stimulation: taste suppression and taste enhancement. Taste suppression occurs while the stimulation is applied, and taste enhancement occurs when the stimulation ends. Because electrical stimulation can enhance taste without an additional tastant, it is expected to be applicable to diet support. However, the duration of the taste enhancement has been too short for this purpose. In this paper, we therefore demonstrate that a novel method, repetitive square-current stimulation, produces a continuous taste-enhancement effect for sodium chloride and monosodium glutamate.
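The repetitive square-current waveform can be sketched as follows. The amplitude, period, and duty cycle here are illustrative assumptions; the abstract does not specify the actual stimulation parameters.

```python
# Minimal sketch of a repetitive square cathodal current waveform.
# All parameter values (amplitude, period, duty cycle) are illustrative,
# not taken from the paper.

def square_current(t, amplitude_ua=100.0, period_s=1.0, duty=0.5):
    """Return the stimulation current (microamperes) at time t.

    The waveform alternates between -amplitude (cathodal phase)
    and 0 (rest phase) with the given period and duty cycle.
    """
    phase = (t % period_s) / period_s
    return -amplitude_ua if phase < duty else 0.0

# Sample one period at 10 ms resolution.
samples = [square_current(i * 0.01) for i in range(100)]
```

Repeating the cathodal phase at regular intervals is what distinguishes this method from a single sustained stimulation, whose enhancement effect ends shortly after the current stops.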
In this paper, a new AR furniture arrangement system is proposed. AR furniture arrangement systems are useful because users can consider a design without moving or buying real furniture. However, most such systems require a visual marker or an external tracking system, which limits the working space, and force users to physically move to change their viewpoints. To address these problems, our proposed system uses a depth camera attached to a video see-through HMD to reconstruct the real environment and to track the user's position. It also recommends a secondary viewpoint suitable for furniture arrangement and allows the user to change the viewpoint flexibly using the reconstructed model. Experimental results show that users can grasp the arrangement more easily with our system than with a conventional AR system.
This study proposes and evaluates the effectiveness of hybrid object- and screen-stabilized visualization techniques for an assembly support system using augmented reality (AR). Object-stabilized visualization techniques are frequently used in AR-based assembly support systems but are not always suitable for the types of tasks, large assembled models, and narrow field-of-view head-mounted displays (HMDs) examined in this paper. Based on a pilot study that investigated the best display locations and content sizes for guidance information on an HMD screen, we propose two hybrid object- and screen-stabilized visualization modes and then evaluate them against an object-stabilized visualization mode from our previous work (the side-by-side mode). Our experimental results indicate that the hybrid mode showing target assembly objects at a fixed position on the HMD screen with object-stabilized orientation yields the best performance and subjective ratings.
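The core idea of such a hybrid mode, a screen-fixed position combined with an object-stabilized rotation, can be sketched in simplified (yaw-only) form. The function name, anchor coordinates, and 2D treatment are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a hybrid visualization pose: the guidance content is
# drawn at a fixed normalized screen position (screen-stabilized), while
# its orientation follows the real object relative to the camera
# (object-stabilized). Names and values are illustrative.
import math

def hybrid_pose(object_yaw_world, camera_yaw_world,
                screen_anchor=(0.8, 0.2)):
    """Return (screen_x, screen_y, yaw_relative_to_camera)."""
    yaw = object_yaw_world - camera_yaw_world
    # Wrap to (-pi, pi] so the rendered rotation stays continuous.
    yaw = math.atan2(math.sin(yaw), math.cos(yaw))
    return screen_anchor[0], screen_anchor[1], yaw
```

Keeping the position screen-fixed avoids losing the guidance outside a narrow field of view, while the object-stabilized orientation preserves the spatial correspondence needed for assembly.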
Localizing the user with a feature database of a scene is a basic and necessary step for presenting localized augmented reality (AR) content. Because of the time and effort required to prepare such a database, only a single appearance of the scene is commonly stored. The appearance, however, depends on various factors, e.g., the position of the sun and cloudiness. Observing the scene under different lighting conditions therefore decreases the success rate and the accuracy of localization.
To address these problems, we propose to generate a feature database from the simulated appearance of the scene model under different lighting conditions. We also propose to extend the feature descriptors used in the localization with a parametric representation of their changes under varying lighting conditions. We compare our method with the standard representation and matching based on L2-norm in a simulation and real-world experiments. Our results show that our simulated environment is a satisfactory representation of the scene's appearance and improves feature matching from a single database. The proposed feature descriptor achieves a higher localization ratio with fewer feature points and a lower processing cost.
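The baseline the paper compares against, nearest-neighbour descriptor matching under the L2 norm, can be sketched as follows. The descriptor values and distance threshold here are illustrative assumptions.

```python
# Minimal sketch of standard feature matching based on the L2 norm:
# each query descriptor is matched to its nearest database descriptor,
# and the match is rejected if the distance exceeds a threshold.
# Descriptor values and the threshold are illustrative.
import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(query_desc, db_descs, max_dist=0.8):
    """Return the index of the closest database descriptor, or None."""
    best_i, best_d = None, float("inf")
    for i, d in enumerate(db_descs):
        dist = l2(query_desc, d)
        if dist < best_d:
            best_i, best_d = i, dist
    return best_i if best_d <= max_dist else None

db = [[0.0, 1.0], [1.0, 0.0]]
print(match([0.1, 0.9], db))  # → 0
```

Because this baseline stores a single descriptor per feature, a strong lighting change can push the query descriptor beyond the threshold; the proposed parametric representation instead models how each descriptor varies with lighting.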
This research aims to reduce the stress of passengers riding in autonomous vehicles. We developed an Augmented Reality (AR) display system on an experimental vehicle that aims to reduce stress caused by two kinds of invisible factors. One is visual information about invisible regions, such as areas occluded by the vehicle itself. The other is instrument information used to automatically control the vehicle. Our proposed system visualizes the occluded road surface by seeing through the interior and exterior of the vehicle. Moreover, a computer graphics model of the vehicle's trail is overlaid on the displayed image using AR techniques so that passengers can easily confirm that the automated driving control is working correctly. The displayed images enable passengers to comprehend the road condition and the expected vehicle route in occluded regions. To confirm the effectiveness of our proposed method, we developed a prototype display system on a vehicle and investigated mental stress using instruments that measure physiological indices such as heart rate variability, sweating, and electroencephalogram (EEG).
Under the new curriculum starting in 2020, programming education becomes mandatory in compulsory education. In programming education, understanding algorithms is important. Sorting algorithms, which arrange data in order, are typical examples. However, sorting algorithms are not always easy to understand. Therefore, in this research, we propose teaching materials for learning sorting algorithms using augmented reality technology.
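As one example of the kind of algorithm such materials could visualize, here is a bubble sort, a common introductory sorting algorithm; the abstract does not say which algorithms the proposed AR materials cover.

```python
# Bubble sort: repeatedly swap adjacent out-of-order elements until the
# list is sorted. Each outer pass "bubbles" the largest remaining value
# to the end. (Chosen as an illustrative example only.)

def bubble_sort(data):
    a = list(data)
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(bubble_sort([5, 2, 4, 1, 3]))  # → [1, 2, 3, 4, 5]
```

The stepwise swapping in the inner loop is exactly the kind of state change that an AR visualization can make concrete for learners.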
Maintaining one's personal space is quite important for leading a comfortable social life. However, it is difficult to maintain an appropriate interpersonal distance all the time. If we could control the visually perceived distance to reduce the discomfort, this problem would be solved. To confirm the effectiveness of this approach, we built an interpersonal distance control system based on video see-through, consisting of a head-mounted display (HMD), a depth sensor, and an RGB camera. The system controls the interpersonal distance by changing the size of the person in the HMD view. We conducted an experiment to confirm the capability of the system and found that it can reduce the discomfort caused by an inappropriate interpersonal distance by controlling the visual interpersonal distance.
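The size-distance relation such a system can exploit is simple: since retinal size is inversely proportional to distance, rendering a person at a different apparent distance amounts to rescaling their image. The function name and parameters below are illustrative, not taken from the paper.

```python
# Minimal sketch of the scaling used to alter visual interpersonal
# distance: to make a person at `actual_m` metres appear as if at
# `desired_m` metres, scale their image by actual_m / desired_m.
# (Illustrative sketch; not the paper's implementation.)

def image_scale(actual_m, desired_m):
    if desired_m <= 0:
        raise ValueError("desired distance must be positive")
    return actual_m / desired_m

# A person 1 m away rendered as if 2 m away is drawn at half size.
print(image_scale(1.0, 2.0))  # → 0.5
```

Scaling up shrinks the apparent distance and scaling down enlarges it, which is how the HMD view can push an uncomfortably close person visually farther away.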
This paper proposes a novel trainer for blood collection with an injector, an essential skill in the medical and nursing fields. An optical motion capture system and a projector are employed in the proposed system, in addition to a conventional blood collection trainer. The proposed system guides the injector to the ideal position and posture until it pierces the surface of the simulator. After piercing, subsurface information, including the position and depth of the tip, is indicated. Experimental results with subjects demonstrate that the feedback provided by the proposed system helps improve blood collection skills.
A computer display realistic enough that the difference between a presented image and a real object cannot be discerned is in high demand in a wide range of fields, such as entertainment, digital signage, and the design industry. To reproduce a realistic image with three-dimensional shape and material appearance simultaneously, we propose a system that places physical elements at desired locations to create a visual image perceivable by the naked eye. This configuration exploits the characteristics of human persistence of vision. If high-speed, spatially varying illumination is projected onto actuated physical elements with various appearances at the desired timing, a realistic visual image is obtained that can be transformed dynamically simply by modifying the lighting pattern. We call the proposed display technology Phyxel. This paper describes the proposed configuration and the performance required for Phyxel. We also evaluate our prototype implementation, demonstrate several applications, and discuss its limitations.
Examining an old photograph at the exact location where it was captured helps one understand the photograph more deeply. We developed an AR application for mobile devices designed to provide a virtual experience of past scenery by superimposing old photographs on current landscapes, and introduced the application into a guided walking tour. The route of the tour and the AR content shown on it were designed collaboratively by us and the tour guides. The tour, titled "Travel around Fukagawa at the time of the last Olympics, 1964," took place in Koto City, Tokyo, with 12 participants; it was planned as a 1.5-hour tour featuring 13 old photographs. This paper describes the implementation of the application, the design process of the guided walking tour, and feedback on the tour from both participants and guides.
The main research topics in diminished reality (DR) concern mitigating the geometric and photometric gaps between real images of undesirable objects and synthesized background images so that the objects can be diminished plausibly. However, most of the DR literature describes monocular video see-through (VST) systems, while very few works have shown binocular stereoscopic DR systems. That is, the problems concerning binocular stereopsis in DR are virtually unexamined. This paper shows the first evidence of two types of binocular mismatching effects that can appear in observation-based DR using a VST head-mounted display. Our experiments demonstrated the following issues: (i) parallax jitter in hidden-view recovery based on image-based rendering, due to an insufficient number of observation viewpoints, and (ii) unnatural depth perception at the boundaries between real and synthesized regions, due to their photometric gaps.
We discovered the "R-V Dynamics Illusion," a psychophysical phenomenon caused by the difference between the dynamics of a real object (R) and a virtual object (V) in mixed reality (MR) space. Previously, we confirmed that a real object is perceived as lighter under an MR visual stimulus with a movable portion. In this paper, as a next step, we measure how much lighter the real mass is perceived to be under the MR visual stimulus. In addition, further experiments vary the mass of the real object as well as the liquid volume of the CG model to examine the change in weight perception caused by the R-V Dynamics Illusion.
We have been studying the illusion phenomenon called the "R-V Dynamics Illusion," caused by the different motional states of a real object (R) and a virtual object (V). Previously, we discovered that when moving virtual liquid is superimposed on a real object, its weight is perceived as lighter than it actually is. As a next step, this paper confirms that the same illusion occurs even when the visual stimulus is changed from a liquid to a rigid body. Furthermore, when the visual stimulus is a rigid body, the impressions of sound and touch are strengthened when an object collides with it, so it is expected that not only the visual stimulus but also auditory and tactile stimuli will have a larger influence. Therefore, we conducted an experiment and analysis to find out how auditory and tactile stimuli affect this illusion.
In this study, we considered a wearable tactile device composed of an interface part fabricated by 3D printing, pins, and cantilever-type actuators. The device was compact and able to stimulate the mechanoreceptors of the fingertips. We propose using microelectromechanical systems (MEMS) technology to further reduce the device size. We used finite element deformation analysis to design multi-tile, multi-layer cantilever-type actuators that achieve the required displacement with piezoelectric actuators. The device can stimulate within the two-point threshold of the fingertips.
We examined whether a cutaneous cue suggesting forward self-motion facilitates or inhibits vection. We provided airflow to subjects' faces using an electric fan, with two wind-temperature conditions: hot and normal. Vection strength was increased by the airflow at normal temperature, whereas it was inhibited when the wind was hot.