We have previously created two interactive artworks, inspired by the wiggling tentacles of the sea anemone and by grass blowing in the wind. Our first project is a plant-shaped robot called "Himawari." Our second project, titled "plant," is an interactive artwork that makes artificial foliage tremble. Both works use shape-memory alloy (SMA) actuators that we developed. In this paper, we introduce our latest work, an interactive artwork called "Tentacles," inspired by the sea anemone. It has 55 of our SMA actuators serving as tentacles. The slow movement of the tentacles evokes a sea anemone swaying on the waves. "Tentacles" provides users with a comfortable interactive experience by synchronizing light, sound, and the movement of the tentacles, creating an otherworldly space. We discuss the expression of wiggling tentacles in interactive art through the design, implementation, and exhibition of "Tentacles."
Augmented reality (AR) technology is based on environment-recognition systems that estimate the 3D environment in real time. In this paper, we introduce a new environment-recognition system that simultaneously estimates, with a single camera, not only a 3D environment consisting of static feature points but also the poses of dynamic objects with known shapes and patterns. Our system is based on the Extended Kalman Filter (EKF). The EKF's state variables comprise the camera pose, the static feature-point positions, and the dynamic-object poses. As the EKF updates the state variables with camera measurements, our system can estimate the dynamic environment. The EKF updates the state variables with two types of measurement model: the first for the feature points and the second for the dynamic objects.
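The EKF cycle described above, with a joint state vector covering the camera pose, feature points, and object poses, can be sketched generically. This is a minimal illustration, not the authors' implementation; all function names and the toy 1-D example are assumptions.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One generic EKF predict/update cycle over a joint state vector x
    (e.g. camera pose + feature-point positions + dynamic-object poses)."""
    # Predict: propagate the state and its covariance through the motion model.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the camera measurement z.
    y = z - h(x_pred)                      # innovation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: a single 1-D static feature point observed directly.
I = np.eye(1)
x, P = np.zeros(1), I.copy()
x, P = ekf_step(x, P, np.array([1.0]),
                f=lambda s: s, F_jac=lambda s: I,   # static motion model
                h=lambda s: s, H_jac=lambda s: I,   # direct observation
                Q=0.01 * I, R=0.1 * I)
```

In the paper's setting, `h` and `H_jac` would be switched between the two measurement models depending on whether the measurement comes from a static feature point or a known dynamic object.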
This paper proposes a visual support system for planning stage-lighting control using a Virtual Diorama and Mixed Reality (MR). The Virtual Diorama visualizes the effects of stage lighting so that all users can easily share the same image of the designed lighting; as a result, the efficiency of planning stage-lighting control can be improved. The MR technique addresses problems in rehearsal such as limited practice time and limited availability of lighting instruments. We implemented a pilot system of the proposed methods and conducted an experimental evaluation to show their effectiveness.
This paper clarifies the effects of head-and-neck somatic and balance-sense information on horizontal sound localization. In the head-still condition, a listener localized a static binaural signal while keeping the head still. In the head-movement condition, a listener localized a dynamic binaural signal recorded with a steerable dummy head controlled by the listener's head rotation. In the hand-movement condition, a listener controlled the steerable dummy head with a steering wheel while keeping the head still. The mean correct sound-localization rate of four listeners in the head-still condition was significantly lower than in the other conditions. There was no significant difference between the head- and hand-movement conditions. This result suggests that a listener can localize sound correctly when dynamic auditory information yielded by the rotation of a dummy head controlled by the listener's spontaneous movements is available, even in the hand-movement condition, where no head-and-neck somatic or balance-sense information is available.
This paper proposes a method for reducing fatigue while handling medium-weight objects and for augmenting our endurance by affecting weight perception with augmented reality technology. Psychological studies have revealed that our perception of an object's weight while handling it is affected by its visual properties. We hypothesized that this illusory effect on weight perception can be applied to reduce fatigue while handling medium-weight objects, without mechatronics-based physical assistance, which requires complex structures and incurs various costs. We propose an augmented reality system that changes the brightness of an object in order to reduce weariness while handling it. We conducted two fundamental experiments to investigate the effectiveness of the proposed system. Subjective and quantitative evaluations suggest that our system reduces fatigue during the handling task and improves energy efficiency by cutting away excess energy.
In this paper, we propose a high-definition wire-driven haptic display system. A known problem is that when a very stiff virtual wall is displayed, the wall may become active. To solve this, we focus on the energy generated by the virtual coupling and make use of information about the human operator. When using a haptic display, the dynamics of our fingers and arm constantly change and adapt to the virtual object. In this study, we therefore define bio-feedback, which uses human dynamics to adapt to the virtual world, and improve the stability and fidelity of the haptic display with it.
Because traditional folk dances are valuable intangible assets, we propose a Realtime Dance Learning Assistant System (R-DLAS) to assist in passing these dances down to the next generations. In the R-DLAS, we capture a student's motion using a motion-capture system (MoCap) and generate a CG animation, which is displayed in real time overlaid on the teacher's animation made from recorded teacher MoCap data. While practicing, a student cannot perform the dance at the same speed as the teacher, and it is also difficult to control the system with an input device. Hence, the R-DLAS is equipped with two functions: 1) synchronizing the student's CG character with the teacher's, and 2) generating CG camerawork automatically. The experimental results show that users could effectively learn a traditional folk dance using this system.
We propose a 3D selection method with multiple multi-touch touchpads. We employ two touchpads, which allows users to rest their arms during manipulation. Our method uses an asymmetric bimanual technique to define a 3D volume, which in the best case requires only a single action. The technique also supports other typical manipulations, such as object and camera translation and rotation. We compared the proposed method with a double-mice implementation; subjects interacting with the proposed method were less accurate but quicker at selecting objects.
We have proposed a 3D shape-model retrieval system using physical blocks as a query. In this paper, we use not only the shape constructed by a user but also its motion, input by the user, as queries. The motions are measured with a gyroscopic sensor embedded in the base block and translated into features such as average, variance, energy, and frequency-domain entropy. We conducted experiments to evaluate the accuracy of model recognition and motion similarity among users and models using a Support Vector Machine classifier. We also verified the effectiveness of adding motion to shape retrieval through an analysis of the retrieval results.
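The feature set named above (average, variance, energy, frequency-domain entropy) can be computed from one axis of a gyro signal as in this minimal sketch; the function name and dictionary keys are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def motion_features(signal, eps=1e-12):
    """Compute the four motion features named in the abstract for one
    gyro-sensor axis: mean, variance, energy, frequency-domain entropy."""
    sig = np.asarray(signal, dtype=float)
    mean = sig.mean()
    var = sig.var()
    spectrum = np.abs(np.fft.rfft(sig)) ** 2       # power spectrum
    energy = spectrum.sum() / len(sig)             # Parseval-style energy
    p = spectrum / (spectrum.sum() + eps)          # normalized spectrum
    entropy = -np.sum(p * np.log2(p + eps))        # frequency-domain entropy
    return {"mean": mean, "variance": var, "energy": energy, "entropy": entropy}
```

The resulting feature vector is what would be fed to the Support Vector Machine classifier mentioned in the abstract, one such vector per motion sample.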
This paper describes an evaluation of a guide function for a museum exhibition using immersive mixed-reality technology. In this system, the user observes the skeleton of a dinosaur fossil 1.6 meters in length through a binocular hand-held display (HHD). The exhibition presents a fundamental explanation of dinosaur bone structure. In this study, a guide function was developed to lead the user to the appropriate position for viewing the content. We compared HHD position data recorded with the guide function to data recorded without it. The results show that users' behaviors differed significantly, confirming the effect of the guide function.
We propose an air-pressure facial tactile display using vortex air cannons, applied to a VR theater system. First, a small vortex air cannon system was developed, and multiple such systems were combined so as to be attached to a large screen of the VR theater system. Through subjective evaluation, we confirmed that different cannon strengths generate various tactile expressions. To evaluate the overall system's effect on the sense of reality and entertainment, we played a horror movie in the VR theater system with the air-pressure facial tactile display and administered a questionnaire to the participants. The results show a positive effect of the air pressure when it is correlated with the content of the movie.
In recent years, MR (Mixed Reality) systems have been used for the virtual reconstruction of cultural heritage assets. In outdoor environments, it is difficult to achieve geometric consistency because of illumination changes and numerous obstacles (occlusions). If portions of the buildings still remain, it is advantageous to use 3D models obtained from the remaining objects to solve these issues. We therefore propose a robust method that achieves geometric consistency between real and virtual objects under various illumination conditions by using 3D geometric models. Experimental results, in which the proposed method is applied to the reconstruction of a Roman cultural heritage site, show the effectiveness of our method.
We propose a 360-degree interactive fog display that provides different images according to observers' positions. The proposed display utilizes the forward light scattering of fog. Due to this strong directional scattering, multiple images projected from different directions onto a cylindrical fog screen can be transmitted to different observers. The fog display provides motion parallax, allowing observers to recognize the 3D structure of the presented objects. Another advantage of the fog display is that observers can insert their hands into the screen. This property gives observers a consistent feeling of distance arising from both visual and somatic sensations.
The Laser-Plasma Scanning 3D Display (LPSD) is one of the most advanced volume-scanning display devices; it plots a unicursal series of illuminants freely in the air. Although the LPSD device has been repeatedly upgraded, even the latest version is capable of plotting an object at a rate of only 5,000 illuminants per second, purely due to hardware limitations. If we plotted illuminants at regular intervals with constant brightness, the geometry and/or surface texture of the displayed object could be lost. This paper therefore proposes a software-based approach to ameliorate this problem, which controls the brightness and density of plotted illuminants in accordance with existing surface descriptors so as to convey salient features of 3D shapes with even a limited number of illuminants.
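One way to realize the density control described above is to distribute the fixed per-second illuminant budget across surface samples in proportion to a saliency score derived from a surface descriptor. The following is a simplified sketch under that assumption; the function name and the rounding scheme (largest remainder, which keeps the total exactly at the budget) are not from the paper.

```python
def allocate_illuminants(saliency, budget=5000):
    """Split a fixed illuminant budget across surface samples in proportion
    to their saliency (e.g. curvature-based surface-descriptor values)."""
    total = float(sum(saliency))
    shares = [budget * s / total for s in saliency]
    counts = [int(sh) for sh in shares]                 # floor first
    # Largest-remainder rounding so the counts sum exactly to the budget.
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - counts[i], reverse=True)
    for i in by_remainder[:budget - sum(counts)]:
        counts[i] += 1
    return counts
```

Brightness could be modulated analogously, scaling each illuminant's intensity by its saliency rather than its plotting density.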
An approach to realizing skillful object manipulation in a virtual environment was investigated. An environment for virtual juggling was implemented: a visual display with a large vertical field of view was built using a two-screen projection system; models of both hands were updated using glove-type sensors and magnetic trackers; the simulation algorithm was tuned to support interaction with fast-moving objects; and auditory feedback of contact was introduced as an alternative to haptic sensation. Experiments to evaluate juggling performance were carried out. The results suggest that the environment allows the juggling task to an extent, and confirm that auditory feedback of contact is useful in the task.
For the purposes of information presentation and entertainment, several devices that can mechanically change their shapes have been proposed. However, their range of movement is fixed in proportion to the size of the unit. To address this challenge, we devised and developed an extension actuator named the "reel actuator." The current model can extend its rod up to approximately 27 times the unit's body height (a 4,000 mm stroke from a 150 mm storage space). We also implemented several 3D shape-forming systems using reel actuators.
This paper proposes a new Mixed-Reality presentation of the spatio-temporal behaviour of a moving object from an observer's viewpoint. The behaviour of the moving object is recorded in advance by an environmental camera as a sequence of pairs of spatial position and texture. A reviewer can watch simultaneous replays of the moving object, shortening the time needed to browse its behaviour. Since a large number of simultaneous replays increases the chance of overlapping textures, we propose a method to determine the best number of replays. We also propose an enriched visualization of the moving object's trajectory so that the reviewer can easily recognize its moving direction. We conducted subject tests on the proposed trajectory visualization and simultaneous replays and report the results.
In our research, we aim to construct a perception-based visuo-haptic shape-display system that can provide users with the sensation of touching virtual objects of varying shapes by using cross-modal effects. In this paper, we focus on modifying the perceived shape of a curved surface and the perceived angle of edges touched with a pointing finger, by displacing the visual representation of the user's hand. We built a video see-through system through which we can change the shapes of the objects the user sees, displacing the visual representation of the user's hand as if the user were touching the visual shape while actually touching another shape. Our experimental results show that, for curved surfaces, over 80% of the participants perceived the virtual object's shape as different from the actual shape they touched, and that the perceived angles of edges could be modified when the difference between the edge angles was in the range of -35° to 30°. These results demonstrate the possibility of constructing a novel visuo-haptic system.
Through two psychophysical studies, we investigate the effect of the shape of a projection surface on the readability of characters projected from a PC projector. The first study was designed as a paired-comparison test. We found that the depth amplitude correlated with the readability of projected characters, and that legibility decreased significantly when the frequency was 2 cycles per character width. In previous work, we found that the readability of a projected character improved when it was moving. In this paper, we investigate the effect of the speed and distance of that movement on readability. Through the second study, we found that readability improved when the speed was more than 2/3 of the character width and when the distance was more than one wavelength of the surface. These results should be useful for the automatic view management of projection-based annotations.
This paper proposes a method for rendering the inter-reflection between a cubic marker and a glossy surface in Augmented Reality. The proposed method consists of three processes. First, the reflectance ratio of the floor is estimated from the reflection on the floor and directly from the marker box. Then, the roughness of the floor is estimated from the sharpness of the reflected marker-box image on the floor. Lastly, the marker-box reflection is eliminated based on the colors surrounding it. Experimental results demonstrate that the inter-reflection can be rendered in real time using the proposed method.
We have proposed acoustic targeted drug/gene delivery by micro-bubble control. To apply the technique in vivo, the acoustic field must be focused on targets that cannot be observed directly. In this study, we proposed and evaluated a visualization system based on AR technology that integrates an echogram, an ultrasound probe, and an ultrasound transducer for micro-bubble control. The results confirmed that the system visualizes the relative position between the targets and the focus intuitively and quantitatively, demonstrating its potential for the active control of microbubbles in vivo.
In this study, a novel virtual walking system was developed to put into practice the concept of smart aging, which advocates a positive acceptance of the later stages of life. The essential core of the system lies in striking the right balance between the virtual and real worlds so that elderly people can use it pleasantly and safely. The system consists of a laptop computer, micro-projectors, and a completely new screen system mounted on a walking frame, called the "walking frame mounted display (WFMD)." Users can physically walk in the virtual environment with pleasure, thanks to virtual reality technology, and safely, thanks to the WFMD. Furthermore, the WFMD does not require any worn sensors and is sufficiently inexpensive for home use. A comparison of experimental results clearly demonstrates that, for elderly people, real walking in a virtual environment using the WFMD is easier than stamping their feet on a Wii Balance Board in the same environment.
The purpose of this study is to improve operations in whiteboard environments. To this end, we focus on using real objects as user-interface devices. With these devices, the contents and drawing data on a whiteboard can be moved, reproduced, and reused easily. The features of the proposed whiteboard system include SAR (Spatial Augmented Reality) and a high affinity with whiteboard working styles involving magnets, erasers, and pointing sticks. In the evaluation experiment, the subjects were able to use the enhanced functions easily, and we obtained opinions that the system is useful for computer-supported lessons.