The author is working on an ultrasonic device that produces tactile sensations in midair by focusing ultrasound: high-intensity ultrasound at the focal point presses the skin surface. Although the ultrasound itself is inaudible, audible noise is radiated when the focal point moves, which is undesirable for a tactile display because it degrades the experience. This drawback has been found and reported in previous work, but no solution has been provided. This paper discusses the source of the noise and proposes a method to reduce it. When the focal point moves to its next position, a phase discontinuity occurs in the driving signals, which leads to a sudden change in the amplitude of the ultrasonic waves; this sudden amplitude change is the source of the noise. There are two ways to reduce the noise: keeping the original and target focal positions close together, or changing the phase gradually. The former is effective only when the frame rate is high, so we explore the latter. The behavior of the focal point during the phase shift is studied by simulation, and an algorithm is developed and implemented in the current prototype device. The experimental results show the effectiveness of the proposed method.
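The gradual phase change described above can be sketched as a linear interpolation of each transducer's driving phase between the phases focusing at the original and target positions. This is only a minimal illustration of the idea, not the authors' implementation; the function names, step count, and the shortest-angular-path rule are assumptions.

```python
import math

def focal_phases(transducer_positions, focus, wavelength):
    """Driving phase for each transducer so that all waves arrive
    in phase at the focal point.
    transducer_positions: list of (x, y, z); focus: (x, y, z)."""
    k = 2 * math.pi / wavelength  # wavenumber
    phases = []
    for pos in transducer_positions:
        d = math.dist(pos, focus)
        # negative propagation delay, wrapped into [0, 2*pi)
        phases.append((-k * d) % (2 * math.pi))
    return phases

def interpolate_phases(p_from, p_to, steps):
    """Step each transducer phase gradually along the shortest angular
    path, avoiding the abrupt phase jump (and hence the sudden amplitude
    change) that radiates audible noise."""
    frames = []
    for s in range(1, steps + 1):
        t = s / steps
        frame = []
        for a, b in zip(p_from, p_to):
            diff = (b - a + math.pi) % (2 * math.pi) - math.pi  # shortest path
            frame.append((a + t * diff) % (2 * math.pi))
        frames.append(frame)
    return frames
```

With `steps = 1` this degenerates to the conventional instantaneous phase switch; larger step counts spread the transition over several driving frames.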
To reproduce the sensation of tracing a surface, a skin-deformation device with at least two degrees of freedom has been considered necessary. In contrast to such a conventional design, we hypothesized that the direction of shear deformation does not affect the realism of the tracing sensation, and that the device mechanism can therefore be simplified. The experimental results showed that even when the direction of skin deformation was opposite or perpendicular to the finger movement, participants still perceived it as natural. We therefore concluded that the direction of skin deformation need not be precisely reproduced when the finger moves.
We propose a novel method that renders haptic perceptions using only a touch screen, based on visuo-haptic interaction. The proposed method evokes a feeling of resistive force by introducing a discrepancy between the movement of the finger swiping the screen and the background image that moves according to the finger movement. The first experiment shows that the proposed method can evoke a perception of resistance according to a constant ratio between the displacements of the finger and the background image; that is, the user perceives strong resistance when the background image moves more slowly than the finger. The same experiment also suggested that the evoked haptic perception becomes stronger when the method is applied to frequently repeated movements. The second experiment shows that the visibility of the finger on the touch screen is an important factor for the proposed method.
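The constant-ratio mapping between finger and background displacement can be sketched as below. This is an illustrative reading of the abstract, not the authors' code; the function name and the one-dimensional treatment are assumptions.

```python
def render_background(finger_positions, cd_ratio, origin=0.0):
    """Background image positions for a sequence of finger positions
    (1-D, e.g. pixels along the swipe direction).
    The background moves cd_ratio times the finger displacement, so with
    cd_ratio < 1 the image lags the finger and the user perceives
    resistance; cd_ratio = 1 gives no pseudo-haptic effect."""
    start = finger_positions[0]
    return [origin + (p - start) * cd_ratio for p in finger_positions]
```

For example, with `cd_ratio = 0.5`, a 20-pixel swipe moves the background only 10 pixels, which the first experiment associates with a stronger perceived resistance.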
This paper proposes a thermal display that presents a virtual thermal sensation to the finger pad using thermal stimuli on the side of the finger. The proposed method could help realize the thermal augmentation of real objects or simplify the design of a haptic display. This paper investigates the effect of the proposed method in terms of the perceived strength and reaction time of the virtual thermal sensation. The results indicate that a tactile stimulus is required for a virtual thermal sensation to be perceived on the finger pad, similar to what is observed in the thermal referral phenomenon. Furthermore, vibrotactile stimulation appears to improve both the perceived strength and the reaction time.
This paper describes the development of a multitouch haptic interface equipped with a movable touchscreen. When the relative positions of two of a user's fingertips are fixed on a touchscreen, the fingers can be considered a hand-shaped rigid object. In such situations, a reaction force can be exerted on each finger using a three-degree-of-freedom (3-DOF) haptic interface. In this study, a prototype 3-DOF haptic interface system comprising a touchscreen, a 6-axis force sensor, an X-Y stage, and a capstan drive system was developed. The system estimates the input force from the fingers using the sensor data and each finger's position, and generates reaction forces from virtual objects to the user's fingertips by controlling the static frictional force between each fingertip and the screen. The system enables users to perceive the shape of two-dimensional virtual objects displayed on the screen. Moreover, users can deform elastic virtual objects and feel their rigidity through the interface. The effectiveness of the prototype system was confirmed through evaluation experiments.
The Hanger Reflex is a phenomenon in which the head rotates unintentionally when force is applied via a wire hanger placed on the head. It has been confirmed that this phenomenon is caused by pressure, and that the direction of the Hanger Reflex corresponds to the direction of skin deformation. In addition to the head, similar phenomena have been found in the wrist, waist, and ankle. We have aimed to realize walking navigation that requires no interpretation of navigation information by using the Hanger Reflex. We previously investigated the influence of head-type, waist-type, and ankle-type Hanger Reflex devices on walking and confirmed that the waist-type Hanger Reflex affects walking most efficiently. However, for actual use in walking navigation, the current waist-type device is difficult to use because the user must shift the device by hand. Moreover, in addition to scenarios without interpretation of navigation information, scenarios with interpretation, such as "follow" or "resist," can also be assumed. In this paper, in order to apply the waist-type Hanger Reflex to actual walking navigation, we developed a controllable waist-type Hanger Reflex device using four pneumatic actuators and investigated the effect of the waist-type Hanger Reflex on walking under different interpretations of navigation information. As a result, we confirmed that the developed device can control the walking path and body direction, depending on the user's interpretation.
Recent studies have suggested that temperature sensation contributes to human motor control. We have been studying the possibility of controlling standing posture by presenting temperature changes to the soles of standing humans. Although the relation between temperature and posture remained unclear in our previous research, we noted that it had not taken into account the natural postural sway of humans, and we therefore repeated the experiment. The front and rear halves of the participant's sole were heated and cooled, the temperatures were switched every 10 seconds, and the movement of the center of gravity was recorded. The results confirmed that the center of gravity was biased toward the cooled side under the condition in which different temperatures were presented to the front and rear halves.
We present two psychophysical experiments that quantitatively measure the toe force required to induce a tactile illusion in which a human feels tactile sensations in the toe pad when a vibration is presented to the toenail while the toe pad is in contact with a surface. While previous studies introduced the illusion only in the finger pads, we found in a preliminary test that the illusion in the toe pad requires a certain intensity of vibration and a certain toe force applied to the surface. In the first experiment, we measured the force thresholds for inducing the illusion under constant vibration intensity by the method of limits. The results revealed that the average lower and upper thresholds were 490 and 2370 gf, respectively, and that the upper threshold showed larger variance than the lower threshold. In the second experiment, we remeasured the lower threshold by the method of constant stimuli to eliminate prediction bias. The results revealed that the average threshold was 330 gf and was positively correlated with the temperature of the toe pad.
It has been reported that haptic perception is diminished by wearing gloves and other clothing. To resolve this problem, we propose a transmission system for haptic information that consists of a haptic sensor on the surface of the clothing and a haptic display between the clothing and the skin. Using this system, haptic information is transmitted as if it penetrated the clothing. We name this concept “Haptic-Through” and systems having this function “Haptic-Through Systems.” In this paper, as an example of a Haptic-Through System, we implemented and evaluated a finger-mounted Haptic-Through System.
We propose a system that presents a feeling of resistive force by modifying the joint angles of a user's avatar, and an approach to reduce the feeling of discomfort evoked by the conflict between visual and proprioceptive sensations. Pseudo-haptic feedback provides haptic sensations, without any complicated devices, by introducing a discrepancy between the position of the user's body in the real world and the avatar that represents that body part in the virtual environment. However, a larger discrepancy between proprioceptive and visual sensations causes discomfort and reduces the effectiveness of the pseudo-haptic effect. Moreover, modifying the displacement of a single body part cannot maintain consistency between the whole bodies of the user and the avatar in an immersive virtual environment. To avoid these problems, we propose a pseudo-haptic approach that modifies a joint angle of the avatar. Furthermore, to reduce the perceived deformation of the avatar, we introduce a novel approach that simultaneously modifies multiple joint angles. Our experiments showed that modifying multiple joint angles of the avatar's arm reduces discomfort while still presenting a certain intensity of resistive force, compared to changing only a single joint angle.
Head-mounted displays (HMDs) allow people to enjoy immersive VR experiences, and a virtual avatar can represent the user in the virtual environment. However, the facial expression of an HMD user is difficult to convey to the avatar: a major problem of wearing an HMD is that a large portion of the face is occluded, making facial expression recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the user's facial expression. Our system can also reproduce facial expression changes in real time on an existing avatar by using regression.
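As a deliberately simplified stand-in for the neural network described above, the classification step can be sketched as a nearest-centroid classifier over the per-sensor distance vectors. The sensor count, data, and classifier choice are illustrative assumptions; the actual system uses a trained neural network plus regression.

```python
import math

EXPRESSIONS = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]

def train_centroids(samples):
    """samples: {expression: list of sensor-distance vectors}.
    Returns the mean sensor vector (centroid) per expression."""
    centroids = {}
    for expr, vecs in samples.items():
        dim = len(vecs[0])
        centroids[expr] = [sum(v[i] for v in vecs) / len(vecs)
                           for i in range(dim)]
    return centroids

def classify(centroids, reading):
    """Return the expression whose centroid is closest (Euclidean)
    to the current sensor reading."""
    return min(centroids, key=lambda e: math.dist(centroids[e], reading))
```

A real deployment would replace `classify` with the trained network's forward pass and use its continuous outputs to drive the avatar's blendshapes.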
We examined whether visually induced self-motion perception (vection) is influenced by body posture. Participants observed optic flow through a head-mounted display and judged the subjective strength of vection on a 101-point rating scale. Experiment 1 showed that vection was weaker when the face was turned upward than when it faced forward. However, there was no effect of torso position (seated, standing, or supine) and no interaction between torso and head positions. Experiment 2 showed that vection was weaker in the inverted torso-and-head position than in the normal torso-and-head position. We suggest that vection is weaker in body postures that are less experienced in daily life than in habitually experienced ones.
In Japan, pedestrians account for the majority of overall traffic fatalities, and about 73% of these fatalities occur while pedestrians are crossing the road, many of them at crosswalks. One of the major reasons pedestrians are involved in traffic accidents is that they do not follow safe crossing practices. We therefore developed a crosswalk simulator that educates pedestrians about crossing the road using augmented reality (AR). The simulator presents a virtual vehicle on a real road, making it possible to produce a pseudo-traffic environment at low cost. Subjects can learn about dangerous road crossings through experience, facilitating safe road-crossing behavior.
This study proposes a method for estimating a listener's interaural time difference (ITD) based on the anthropometry of the listener's head. Ten anthropometric parameters of the head and the ITDs on the horizontal plane were measured for 33 subjects. A multiple regression analysis of the ITD for each direction on the anthropometric parameters was then carried out. The results show that the average correlation coefficient was 0.59 and the mean residual error was 16.5 µs. The accuracy of the multiple regression was verified using four naive subjects; the mean estimation error was 19.2 µs (2.3 degrees). Finally, the estimation accuracy of the proposed method was compared to that of a previous method, whose average estimation error (51.5 µs, 7.3 degrees) was larger than that of the proposed method.
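The per-direction multiple regression of ITD on anthropometric parameters can be sketched with an ordinary least-squares fit. The data below are synthetic and the function names are assumptions; the sketch only illustrates the form of the model (one linear regression per direction, with an intercept term).

```python
import numpy as np

def fit_itd_regression(anthropometry, itd):
    """Fit the ITD (in microseconds) for one direction as a linear
    function of head anthropometric parameters by ordinary least squares.
    anthropometry: (n_subjects, n_params) array; itd: (n_subjects,)."""
    X = np.hstack([anthropometry,
                   np.ones((anthropometry.shape[0], 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(X, itd, rcond=None)
    return coef  # n_params weights followed by the intercept

def predict_itd(anthropometry, coef):
    """Predict ITD for new subjects from their anthropometric parameters."""
    X = np.hstack([anthropometry, np.ones((anthropometry.shape[0], 1))])
    return X @ coef
```

In the study, one such regression would be fitted per horizontal-plane direction, and the residual between predicted and measured ITD gives the reported estimation error.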
We propose a new retro-transmissive system called SkyAnchor, which consists only of optical devices: two mirrors and an aerial-imaging plate. The system reflects light from a light source beneath an object and forms an image around the object. Because the solution is purely optical, it introduces no latency in principle, which benefits the realism of mixed-reality applications. We have implemented an interactive application in which images floating apart from the object change in accordance with the object's position.
Fluorescence can be a beneficial property not only for scene analysis but also for spatial augmented reality (SAR). In the field of SAR, projector-camera systems have been well studied, and their radiometric model can be described by a color mixing matrix. Measuring the color mixing matrix requires no special equipment other than a projector and a camera. Although the fluorescent components are not separated by the color mixing matrix, the matrix is already capable of handling the light emission of fluorescence. In this paper, we propose the RKS model, which expresses fluorescent components as well as reflectance components through an approximation of the Donaldson matrix. In addition, we propose a method for decomposing the color mixing matrix. Experimental results show the decomposition of color mixing matrices obtained from colored construction papers and from non-fluorescent Kent papers with several fluorescent paints. We then demonstrate the separation of the reflectance and fluorescent components of a scene, as well as the effectiveness of radiometric compensation that takes fluorescent components into account.
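The projector-camera radiometric model mentioned above can be sketched as a per-pixel linear color mixing matrix mapping projector RGB input to camera RGB response. The numbers and function names are illustrative; the RKS decomposition into reflectance and fluorescent components is not reproduced here.

```python
import numpy as np

def apply_mixing(M, projector_rgb, ambient):
    """Camera response for one pixel: c = M p + a, where M is the 3x3
    color mixing matrix, p the projector RGB input, and a the
    ambient/offset term."""
    return M @ projector_rgb + ambient

def measure_mixing(responses, inputs, ambient):
    """Recover M from camera responses to known projector inputs
    (e.g. full red, green, and blue) by solving M P = C - a column-wise.
    Only a projector and a camera are needed for this measurement."""
    C = np.column_stack([r - ambient for r in responses])
    P = np.column_stack(inputs)
    return C @ np.linalg.inv(P)
```

Projecting the three primaries in turn makes `P` the identity, so each measured response column (minus ambient) directly gives one column of `M`.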
Vection has been studied with various methods: experimental psychological methods have a long history, and brain science has also provided knowledge of vection-related brain areas. In this study, we newly introduce a phenomenological approach to vection research and aim to reveal its usefulness and importance. Details and examples demonstrating the power of the phenomenological approach are presented in the body of this article, and we hope readers will enjoy these phenomenological analyses.
We analyzed changes in human gait (way of walking) corresponding to changes in gaze direction. For this purpose, we constructed an immersive walking environment in which we measured participants' gait in various controlled gazing situations via a motion capture system and an eye tracker. The environment consisted of a treadmill and a 180-degree multi-screen display for presenting the gazing target. As a preliminary analysis of gaze-gait relations, in this paper we focused on arm and leg swing amplitudes as measures of gait and analyzed the relationship between gaze and arm/leg swings. Unlike previous studies, we sought to analyze behavior that occurs when humans intentionally gaze at a specific target. Our experimental results indicate that arm swing is affected by gaze direction: swing amplitude tended to decrease in the arm farther from the gaze direction and to increase in the arm closer to it. In contrast, we did not observe any evidence of leg motion being modulated by gaze direction. Our results suggest that it may be possible to estimate gaze from human gait.