We have previously proposed EARS, an auscultation training system that uses augmented reality to reproduce the vital sounds of various cases on a simulated patient. In this study, we developed a version of EARS that detects the auscultation position using deep learning. The chest piece and the simulated patient are detected by deep learning from images captured by a camera-integrated stethoscope device, and a biological sound corresponding to the auscultation position computed from the detection results is played back. Stable detection of the auscultation position was confirmed, and when the system was introduced into an actual medical education setting, it received high evaluations in a questionnaire.
Due to the shrinking labor force in the service industry, many face-to-face customer service training simulators using virtual reality (VR) have been developed. Such training involves dialogue with VR avatars, and the dialogue must convey a sense of reality for the training to be effective. In this study, we focused on the impression made by an avatar as an element that conveys this sense of reality, and attempted to manipulate that impression by changing the avatar's alternation latency. We conducted an experiment in which participants rated phrases describing the impression of avatars under conditions with different alternation latencies, and significant differences were found for some items. This result suggests that an avatar's impression can be changed by varying its alternation latency, thereby improving the reality of the dialogue.
This paper describes the evaluation of VR content that presents the biometric data of a fencing athlete during a match as audio-visual stimuli. We conducted two experiments, examining 1) the effect of marking the athlete's gaze point and 2) the effect of combining the forearm movements of the athlete and the participant. The results of the first experiment showed that marking the gaze point significantly guided participants' eyes to the athlete's gaze point. The results of the second experiment showed that this approach did not affect participants' sense of ownership or agency; however, some participants' forearm movements were induced toward the athlete's movements.
It is known that retrieval performance improves when the contexts at encoding and retrieval match, and declines when they do not. This effect is called the reinstatement effect. In this study, we manipulated the environmental context at encoding and retrieval in virtual reality using two spherical images. The number of words recalled in a free recall test was compared in a two-factor within-subjects design, following Godden & Baddeley (1975), and no reinstatement effect was found. In addition, there was a positive correlation between presence, as measured by the IPQ, and the number of words recalled. The results are discussed in terms of the mental context of virtual reality experiences, which is distinguished from that of real experiences.
Self-distancing is a method of adjusting the psychological distance from one's own experience. Keeping psychological distance from an issue affects how we address it; for example, people often devise more creative ideas for others' problems than for their own. In this study, we employed virtual reality to support self-distancing. We developed a system that moves the user's viewpoint out of the body, enabling the user to operate his/her avatar from a third-person perspective. We conducted an experiment in which participants were asked to solve problems requiring insight and creative thinking from either the first-person perspective or the third-person perspective (3PP). The results indicate that the 3PP increases users' psychological distance from their experiences and brings greater insight. However, there was no significant difference in the number of ideas that users devised. Based on these findings, we discuss the interface design required to incorporate self-distancing regardless of users' abilities and circumstances.
Previous research has shown that human walking speed is influenced by visual stimuli in the environment; for example, the speed of optical flow in one's peripheral vision correlates negatively with walking speed. However, the effect that seeing other walkers in the periphery has on one's walking speed is unclear. In this study, we investigate how seeing avatars walking alongside a person affects that person's walking speed. In our experiments, participants wore a head-mounted display and saw three types of virtual avatars walking alongside them: whole-body silhouettes of walking persons, point-light biological motions, and spatially scrambled point-light motions. The results show that the walking whole-body silhouettes and the point-light biological motions significantly increased participants' walking speed, and that a significant interaction between the avatars' global translational speed and local motion speed was found to change participants' walking speed. These results suggest that a similar technique could be applied to control the walking speed of multiple persons in real environments.