Conversational agents are becoming popular in many scenes of daily life, and many more users are expected to use them in the future. However, because conversation with an agent is impersonal, many users do not feel familiar with such agents. To address this issue, we focus on the relationship between humor and familiarity. Based on this idea, we propose an agent that creates humor by deliberately mishearing a word the user has said, replacing it with a word that has a similar sound but a dissimilar meaning. This allows the agent to bring humor into the conversation, so that the user can feel familiarity with the agent.
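The core operation of the proposed agent is to substitute a heard word with a similar-sounding but semantically distant one. The abstract does not specify the phonetic model, so the sketch below uses a character-level similarity ratio as a stand-in, with a toy lexicon of hypothetical candidate words:

```python
from difflib import SequenceMatcher

def phonetic_similarity(a: str, b: str) -> float:
    # Approximate sound similarity with a character-level ratio.
    # (The paper's actual phonetic model is unspecified; this is a stand-in.)
    return SequenceMatcher(None, a, b).ratio()

def mishear(word: str, lexicon: list) -> str:
    """Pick the closest-sounding different word from a toy lexicon."""
    candidates = [w for w in lexicon if w != word]
    return max(candidates, key=lambda w: phonetic_similarity(word, w))

# Hypothetical example lexicon, not from the paper.
lexicon = ["bear", "beer", "pear", "table", "cat"]
print(mishear("pear", lexicon))  # → "bear"
```

A real system would compare phoneme sequences (and filter out candidates that are too close in meaning), but the selection step is the same maximization over sound similarity.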
In this research, we propose a method that estimates the softness of an object from motion images of the hand and forearm as the target object is pushed with a stick. The softness is estimated from those images using deep learning. For motion recognition, we capture a series of RGB-D images with a depth camera. A subject pushes objects of different softness with a stick to collect motion images for training. The captured images are then learned by a convolutional neural network, and their characteristics are parameterized appropriately to build the softness estimation system. The estimation results show that the root mean square error of the estimated values for non-learned softness scores is within 5 points of durometer hardness. This indicates that human pushing motions contain tactile information from which the target object's softness can be estimated, and that our system recognizes it accurately. We also confirmed that using all three image types (RGB image, depth image, and Canny edge image) as input yields the highest accuracy for both personalized and generalized networks.
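Two concrete pieces of the pipeline can be sketched: stacking the three image types into one multi-channel network input, and the RMSE metric used to report accuracy in durometer points. The shapes and score values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Toy stand-ins for one captured frame (shapes are illustrative only).
h, w = 64, 64
rgb   = np.zeros((h, w, 3), dtype=np.float32)   # RGB image
depth = np.zeros((h, w, 1), dtype=np.float32)   # depth image
edge  = np.zeros((h, w, 1), dtype=np.float32)   # Canny edge image

# Stack all three image types into a single multi-channel CNN input.
x = np.concatenate([rgb, depth, edge], axis=-1)
print(x.shape)  # → (64, 64, 5)

def rmse(predicted, actual):
    """Root mean square error, here in durometer-hardness points."""
    p, a = np.asarray(predicted, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((p - a) ** 2)))

# Made-up durometer scores, only to show the metric.
print(rmse([32.0, 45.5, 61.0], [30.0, 50.0, 58.0]))
```

The reported result corresponds to this metric evaluating below 5 on hardness values that were not in the training set.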
This paper presents a colonoscope tracking method for colonoscope navigation. Understanding the colonoscope position in the colon during colonoscopic examinations is difficult because of the unclear camera view, so computerized colonoscope navigation requires an accurate method for identifying the colonoscope position in the colon; this position identification is called tracking. We propose a colonoscope tracking method that uses a colon deformation estimation technique, since tracking is difficult when the colon deforms greatly during colonoscope insertion. We estimate colon deformation with a recurrent neural network-based estimation method. The colonoscope tip position is mapped into a CT image coordinate system to combine the position with the colon shape and polyp positions. We evaluated the tracking accuracy in a colon phantom study; in our experiments, tracking errors were small compared with a previous method. The proposed method has the potential to perform tracking in clinical settings.
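The mapping of the tracked colonoscope tip into the CT image coordinate system can be sketched, in its simplest form, as a rigid transform. The paper's full method also accounts for the estimated colon deformation; the rotation and translation below are hypothetical values for illustration:

```python
import numpy as np

def map_to_ct(tip_position_tracker, R, t):
    """Map a colonoscope tip position from the tracker coordinate system
    into the CT image coordinate system with a rigid transform (R, t).
    The paper additionally applies an estimated colon deformation;
    this sketch shows only the rigid part."""
    return R @ np.asarray(tip_position_tracker, dtype=float) + t

# Hypothetical registration: 90-degree rotation about z plus a translation.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([10., 0., 5.])
print(map_to_ct([1., 0., 0.], R, t))  # → [10.  1.  5.]
```

Once in CT coordinates, the tip position can be overlaid on the colon shape and polyp positions extracted from the CT image.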
We investigated whether sensory information that is not originally related to self-motion can be recruited as a new cue for self-motion. In the learning phase of an experimental trial, the stimulus color changed depending on the acceleration of body rotation about the yaw axis: the stimulus turned red when subjects rotated with clockwise acceleration and green with counterclockwise acceleration, or vice versa. In the measurement phases before and after the learning phase, subjects viewed the rotating stimulus with or without the new self-motion (color) cue and reported the occurrence and magnitude of vection. The results showed that color information paired with self-motion affected the latency of vection, suggesting that a newly learned color cue can contribute to generating vection.
This paper describes methods of presenting a sense of weight in VR over a wide range of masses. Two approaches are proposed: while lifting a virtual object, 1) the object follows the participant's hand with a delay, or 2) the distance between the hand and the object is increased. These approaches were evaluated in a user study using magnitude estimation and gripping-force measurements. The results suggest that participants could perceive multiple distinct weights with both approaches. Moreover, the ratio between the minimum and maximum magnitude estimations was 3.2 for the first approach and 4.4 for the second.
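Both approaches amount to a per-frame update of the virtual object's position relative to the tracked hand. The paper does not specify the exact filter or scaling constants, so the exponential smoothing and the constant `k` below are assumptions for illustration:

```python
def follow_with_delay(obj_pos, hand_pos, alpha):
    """Approach 1: the object lags behind the hand each frame.
    Smaller alpha means more lag, i.e. a heavier-feeling object.
    (Exponential smoothing is our assumption; the paper does not
    specify its filter.)"""
    return obj_pos + alpha * (hand_pos - obj_pos)

def offset_from_hand(hand_y, weight_factor, k=0.02):
    """Approach 2: hand-object distance grows with simulated weight.
    k is a hypothetical scaling constant (meters per weight unit)."""
    return hand_y - k * weight_factor

# Approach 1: a laggy object converging toward a stationary hand.
obj, hand = 0.0, 1.0
for _ in range(10):
    obj = follow_with_delay(obj, hand, alpha=0.3)
print(round(obj, 3))  # close to, but still short of, the hand

# Approach 2: a heavier object hangs farther below the hand.
print(offset_from_hand(1.0, weight_factor=5.0))  # → 0.9
```

In both cases, the magnitude of the visual discrepancy is the knob that maps to perceived weight.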
In recent years, research on presenting perceptual information through illusions caused by cross-modal phenomena has been actively conducted. In particular, such illusions of tactile and haptic sensation are called pseudo-haptics and are considered a method of haptic presentation. In previous studies, only a visual stimulus is presented to induce the illusion, but problems remain, such as the weakness of the resulting sensation in practical use. In this study, we investigated whether such sensations can be obtained not only from visual stimulation but also from auditory stimulation. Specifically, we conducted experiments on obtaining a pseudo force sense from interaction with an elastic virtual object. The experiments showed that a pseudo force sense was obtained from auditory stimulation alone, and that presenting visual and auditory stimuli in combination produced a stronger pseudo force sense. We also applied the auditory pseudo force sense to an electronic musical instrument and produced a prototype; an evaluation experiment showed that users enjoyed the prototype.