This paper presents a locomotion interface using a torus-shaped treadmill. Traveling on foot is the most intuitive form of locomotion, and an infinite surface driven by actuators is an ideal device for creating the sensation of walking. The Torus Treadmill employs 14 sets of treadmills, connected side by side and driven in the perpendicular direction; the motion of the treadmills generates the effect of an infinite surface. The walker can move in any direction while his or her position remains localized in the real world. The device has a modular structure that makes it portable for exhibition; it was exhibited at Ars Electronica 2011 in Linz, Austria.
This paper presents a novel radiometric compensation approach that uses a six-spectral-band projector. Conventional radiometric compensation techniques that use normal three-spectral-band projectors suffer from the projectors' limited dynamic range. To improve compensation accuracy, we propose controlling projected images with a six-spectral-band projector whose spectral bands are narrower than those of normal three-band projectors. In particular, we apply the filters of channel-separation-based stereo projection to realize the six-spectral-band projector. We evaluated our approach in projection experiments and confirmed that the proposed method improves compensation accuracy compared with the conventional method. In particular, we confirmed that the color difference in CIELAB color space with the proposed method was 54.8% of that with the conventional method.
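The 54.8% figure compares CIELAB color differences between target and compensated colors under the two methods. As a minimal sketch, assuming the simple CIE76 Euclidean difference ΔE*ab and made-up example values (the paper's actual measurements are not reproduced here):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical values: a target color and its compensated reproductions.
target = (50.0, 10.0, -10.0)
six_band = (51.0, 11.0, -9.0)    # assumed result with the six-band projector
three_band = (53.0, 13.0, -7.0)  # assumed result with a three-band projector

# A ratio below 1 would indicate the six-band compensation is closer to the target.
ratio = delta_e_ab(target, six_band) / delta_e_ab(target, three_band)
```

A smaller ΔE*ab means a reproduction closer to the intended color; the paper reports the ratio of the two methods' ΔE values.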
We propose a method of recommending colors and fonts appropriate for a text based on the associations between words and colors. Colors and fonts were evaluated on semantic differential (SD) scales using affective words in psychological experiments. First, our system estimates colors suitable for the mental representations of the input text. Then, the colors are described by affective words. Next, we obtain the degree of similarity between the colors and the fonts, whose mental perceptions were evaluated on SD scales in the psychological experiments. Finally, our system recommends the colors and fonts that are most appropriate for the input text. We verified the validity of our method for selecting appropriate text fonts and colors.
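Because colors and fonts are rated on the same SD scales, the similarity step could, for illustration, be a cosine similarity over the rating vectors. The scales, items, and ratings below are hypothetical assumptions, not the paper's data:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two SD-scale rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 7-point SD ratings on scales such as soft-hard, warm-cool, light-heavy.
color_ratings = {"warm red": [6, 6, 3], "cool blue": [2, 1, 4]}
font_ratings = {"rounded sans": [6, 5, 2], "bold slab": [2, 2, 6]}

def best_font(color):
    """Recommend the font whose SD profile is closest to the color's."""
    return max(font_ratings,
               key=lambda f: cosine_similarity(color_ratings[color], font_ratings[f]))
```

Here a "warm red" with a soft, light SD profile would be matched to the rounded sans rather than the heavy slab.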
We designed concert scope headphones equipped with a projector, an inclination sensor on top of the headphones, and a distance sensor on the outside of the right earpiece. We previously developed sound scope headphones that enable users to change the sound mixing depending on their head direction; however, that system could not handle images. In contrast, our headphones let a user listening to and watching a music scene focus on a particular part that he or she wants to hear and see. For example, when listening to jazz, one might want to hear and see the guitar or sax more clearly. The user can bring the guitar or sax sound to a frontal position by moving his or her head to the left or right, and can adjust the distance sensor on the headphones to focus on a particular part simply by putting a hand behind the ear.
We propose a new method of enhancing a museum exhibit along the time axis using 3-D recording technology. There are two kinds of time-based enhancement: (1) enhancement of the exhibit's surroundings, including visitors, and (2) enhancement of the exhibit itself. We implemented a time-based enhancement system that recorded the exhibit and its surroundings as 3-D data using multiple depth cameras. We produced an exhibit, "Time-leaping Seat," that consisted of our system and a real exhibit, and displayed it at the Railway Museum for two weeks. A total of 1691 visitors experienced our exhibit, and 46 visitors and two curators answered our evaluation questionnaire. The responses implied that we achieved our goal. In future work, we will apply the proposal to conveying ways of experiencing an exhibit and to promoting effective understanding of an exhibit's background information.
We can share nonverbal emotional experiences, such as excitement and pleasure, by watching movies and sports events with others, such as friends and family. These shared experiences are thought to enhance excitement and pleasure compared to when these activities are done alone. Our research provides this shared experience on the Internet by sharing viewers' excitement while they watch videos on the web. In this paper, we present a user study on the relationship between users' excitement while watching web videos and their impressions of those videos. Along with this study, we introduce a video player called ExciTube that allows users to share their excitement and view other users' excitement as visual information alongside the video they are watching. A user's excitement is expressed and shared using an avatar. We carried out user-involved demonstrations of ExciTube at a Japanese symposium and confirmed that people enjoyed using the system and feeling other people's excitement.
People often communicate with others through social touch interactions such as hugging, rubbing, and punching. We propose a soft, touchable interface called "Emoballoon" that can recognize the types of social touch interactions. The proposed interface consists of a balloon and several sensors, including a barometric pressure sensor inside the balloon, and has a soft surface and the ability to detect the force of touch input. In this paper, we evaluate the physical features of a balloon and its appropriateness as an interface for inputting social touch interactions. We constructed a prototype of Emoballoon using a simple configuration based on the features of a balloon and evaluated the implemented prototype. The evaluation indicates that our implementation can distinguish seven types of touch interactions with 83.5% accuracy. Finally, we discuss the possibilities and future applications of the balloon-based interface.
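The abstract does not describe the classifier, but recognition over a barometric pressure signal can be sketched as feature extraction followed by classification. The features, thresholds, and class rules below are illustrative assumptions only; the actual system distinguishes seven touch types, presumably with a trained classifier:

```python
def extract_features(samples, baseline, threshold=5.0):
    """Peak amplitude and contact duration (in samples) of one touch event
    in a barometric pressure waveform, relative to the resting baseline."""
    deltas = [s - baseline for s in samples]
    peak = max(deltas, default=0.0)
    duration = sum(1 for d in deltas if d > threshold)
    return peak, duration

def classify(peak, duration):
    """Toy rule-based stand-in for the paper's classifier (hypothetical)."""
    if peak > 50:
        return "punch"  # short, strong pressure spike
    if duration > 30:
        return "rub" if peak > 10 else "touch"  # sustained contact
    return "tap"
```

In this sketch a brief, high-pressure spike reads as a punch, while long, moderate pressure reads as rubbing.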
sonodial is a sound installation that allows participants to intervene actively in the work. Visual and audio representations are generated from participants' movements. When participants are in the exhibit space, artificial shadows are projected underfoot, and the form, length, and rotation speed of the shadows change according to each participant's speed of motion and position. Participants can also interact with the sound-generation part: their movements affect the parameters of sounds generated by granular synthesis. This paper describes the concept, system design, and implementation of sonodial.
Transparent screens are in demand for various applications. Our proposition is to use a membrane of colloidal liquid. We developed an ultra-thin, flexible BRDF screen: a soap film exposed to ultrasound. The transparency of the film is controlled by the ultrasound so that it renders projected images realistic, distinctive, and vivid. A multi-layered 3D screen is created by stacking multiple films and switching their transparencies alternately and appropriately. The film can also be deformed by higher-amplitude ultrasound. The proposed screen contributes to the real world as the first prototype of a new concept of programmable screen.
In recent years, various systems have been proposed in the field of virtual reality to enhance virtual experiences by providing haptic sensations. In our research, we propose a handheld-sized interface that can represent both the shapes and the textures of virtual objects. Using the metaphor of a rolling pin, we propose a rolling-pin-based interface, named the Petanko Roller, which enables users to experience the sensation of rolling out virtual doughy objects. The interface represents the unevenness of virtual objects and the frictional forces acting on them. In this paper, we report the system design, implementation, an entertainment application, and user feedback from exhibitions.
Since ancient times, people have sought and discussed life-like features in materials. Nowadays, approaches intended to impart life-like movements or behaviors to artificial structures are pursued in the fields of media art and robotics. We aim to elicit human interaction and creature-like movements from clusters of sphere-shaped magnets without processing the original material in any way. In this research, we have expressed looper-like movements and deformation by using arrayed electromagnets that apply magnetic force stepwise to the clusters of sphere-shaped magnets. Furthermore, the loopers are interactive and can be controlled by users' gestures through hand recognition. Through several exhibitions, we have collected reactions and opinions from attendees. In this paper, we present the overview, concept, design, implementation, and outcomes from the exhibitions.
This research proposes "Material Syncretism," a vision that discovers new expression through the harmony of paper and computation while preserving the relationship between paper and humanity. As a base technology for implementing Material Syncretism, we developed a dynamic expression technique using conductive inks, conductive materials, and thermochromic inks. By applying this technique, we created a series of new paper expressions: Anabiosis, Constellation, Storytelling, and Transience. These works realize the dynamic expression of harmony between paper materiality and computation, and possess a high artistry that fascinates audiences.
To address complaints about irritating sounds on train station platforms beyond the limits of conventional noise reduction approaches, we applied the hypersonic effect, in which inaudible complex high-frequency components (HFCs) produce salutary physiological and psychological effects on humans through the activation of fundamental brain regions. We created a virtual platform acoustic environment in an experimental room, as well as in an actual platform space, by reproducing highly accurate broadband recordings of actual platform sounds. We developed hypersonic content from effective HFCs obtained from a rainforest environment, as well as hypersonic public announcements and hypersonic departure bells containing HFCs. We evaluated the psychological and physiological effects of the hypersonic content presented alongside the platform acoustic environment. Subjects showed significantly more positive impressions of the acoustic environment and significantly greater alpha-2 electroencephalography potentials, indicating the efficacy of the hypersonic effect in ameliorating the unpleasantness of a noisy environment.
Recently, robots have been increasingly utilized in the entertainment field. Stuffed-toy robots and pet-like robots are developed to entertain or heal people through intimate touch interactions, and the softness of the robot is important for comfort in such interactions. However, most entertainment robots with expressive body motions include hard structures to actuate the body, which cause a harsh feeling during touch or hugging interactions. In this paper, we propose a soft-to-the-bone robot that looks like a stuffed toy, driven by a novel mechanism made of a fabric material and pulling strings. The robot has a large movable range and moves quickly enough for various body-motion expressions. Moreover, the proposed robot was evaluated as giving a more familiar and comfortable impression than conventional hard stuffed-toy robots.
In this paper, we propose a computational model that generates life-like motion for a light-emitting, moving object (a virtual firefly). Using a two-stage stochastic process, the model generates life-like motion comprising various patterns from simple operation elements. Incorporating these virtual fireflies, we built an animation system in which the fireflies move and emit light automatically. Through questionnaire experiments, we verify that the virtual fireflies look like living things and that the different animations give different impressions.
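One common reading of a two-stage stochastic motion model is that the first stage picks a behavior mode and the second stage samples the actual displacement and emission within that mode. The modes, transition probabilities, and distributions below are illustrative assumptions, not the paper's parameters:

```python
import random

# Stage 1 (assumed): a small Markov chain over behavior modes.
TRANSITIONS = {
    "hover": {"hover": 0.8, "drift": 0.2},
    "drift": {"hover": 0.3, "drift": 0.7},
}

def next_mode(mode, rng):
    """Stage 1: stochastically pick the next behavior mode."""
    r = rng.random()
    acc = 0.0
    for m, p in TRANSITIONS[mode].items():
        acc += p
        if r < acc:
            return m
    return mode

def step(pos, mode, rng):
    """Stage 2: sample a small displacement and glow intensity within the mode."""
    spread = 0.1 if mode == "hover" else 1.0
    x, y = pos
    pos = (x + rng.gauss(0.0, spread), y + rng.gauss(0.0, spread))
    glow = rng.random()  # emission intensity in [0, 1)
    return pos, glow

# Animate one virtual firefly for 100 frames.
rng = random.Random(0)
mode, pos = "hover", (0.0, 0.0)
for _ in range(100):
    mode = next_mode(mode, rng)
    pos, glow = step(pos, mode, rng)
```

The appeal of the two-stage split is that simple operation elements (a handful of modes and noise parameters) still produce varied, non-repetitive trajectories.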
This paper proposes a method for sharing impressions, sentiments, and opinions among people listening to an auditory program together, using voice sound effects. We call the system "Radi-Hey." In contrast to the conventional laugh tracks played by program staff in TV and radio programs, Radi-Hey reflects input from the audience itself. Audience members input their opinions by pushing buttons for several short words (e.g., "oh!", "why?") and can listen to other audience members' opinions as voice sound effects. Recently, text-based systems (e.g., Twitter) have been used for this purpose, but they require the audience to concentrate on inputting their messages and viewing others' messages. The aim of this paper is to realize a level of simplicity that allows much more prompt and easy sharing of others' opinions through auditory feedback. We conducted two experimental demonstrations: radio broadcasting programs, and presentations at an academic conference. This paper describes the results, which show the potential applicability of the system, and discusses the pros and cons for future development.
This study proposes a method of recommending colors that best match users' intuitive, sensitive, and ambiguous design sensations. We focus on onomatopoeia (i.e., imitative or mimetic words such as "kira-kira," which expresses a sparkling sensation). We propose a method for quantifying the perceptions and sensations expressed by onomatopoeia and then estimating the colors that are most representative of those perceptions and sensations. Specifically, when users input graphic design material keywords and onomatopoeic words associated with their mental perceptions, our system recommends new graphic design candidates with colors that best represent the users' perceptions. Our system is expected to contribute to creative activities as an intuitive design-support system.
In the field of cognitive psychology, it is known that certain affects are evoked when we recognize changes in specific body responses. Based on this knowledge, some studies have sought to evoke an affect using artificial stimuli that make people feel as if their own body reactions have changed. We, on the other hand, hypothesized that specific affects can be evoked by the conscious control of body actions. We focused on the respiratory condition related to the feeling of tension, since respiration can be controlled not only unconsciously but also consciously. We then made an artwork named "Interactonia Balloon," which evokes the feeling of tension by letting participants change their respiration and making them self-conscious about their respiratory condition. The inflation of the balloon visualizes how tense they feel and amplifies the feeling. The work also provides the paradoxical experience that the balloon inflates when the participant intentionally holds their breath and deflates when they intentionally exhale. We exhibited Interactonia Balloon and collected feedback about the work.
PukaPuCam is an application service that uses a camera attached to balloons to capture photos of users continuously from a third-person view. Later, users can browse their photos using the PukaPuCam Viewer. PukaPuCam records interactions between users and the surrounding objects or people they meet. One of its features is that, as the balloon experiences air resistance, its inclination changes according to the user's speed. This lets users recollect moments that would not normally be recorded. Unlike other similar devices, PukaPuCam uses a design people are already familiar with, a balloon, making it an interesting application at tourist spots. Since balloons are cute, we aim to give users more enjoyable, delightful experiences.
Many tactile devices have been proposed that enable us to sense and display tactile sensations precisely. However, most of these devices are large-scale and require complex calculations. For this reason, tactile systems cannot be handled without technical skills, and tactile technologies have not spread. In order to bring tactile technologies to the general public, we need an environment in which even non-experts can create and share tactile content. Therefore, in this study we propose methods for the creation and sharing of "user-generated tactile content." The methods aim to construct a tactile system that enables users without engineering knowledge to create and share tactile content. Moreover, based on the proposed techniques, we implement an online platform for creating and sharing tactile content.
Focusing on the drawing sound as auditory feedback in the act of writing with an ordinary pen and paper, we have studied the effect of an emphasized drawing sound. In this paper, we examine the usefulness of emphasized auditory feedback of the drawing sound in a professional animation studio. Specifically, we introduced our proposed system into the animation production process and performed a six-week user study to confirm its usefulness. The results showed that animators used our proposed system on 93.0% of their working days, for an average of five hours a day. Moreover, we obtained positive feedback in interviews, such as that listening to the drawing sound helped them draw dark, uniformly thick, high-quality lines.
Although much research has been performed on analyzing and classifying tactile sensations for object textures, little attention has been paid over the years to those of interpersonal touch. In this paper, as an important case study, we analyze the specialized skills of a massage called "Face Therapie™," described with a novel notation method, the "Tactile Score," using technical primitive images of an expert beauty therapist.
We have developed an MR system that merges the real and virtual worlds both audibly and visually. To achieve the audio MR, we developed the "Acoustic Planetarium," which is composed of multiple parametric loudspeakers. In this paper, we propose a method for positioning moving sounds with this system. First, to select the appropriate positioning method, we compared an energy-sum-constant method and an amplitude-sum-constant method, which interpolate the sound position between multiple sounds located by the parametric loudspeakers. We also used the full MR system, which uses both audio and visual information, to verify the accuracy of perception of the moving sound. To do so, we displayed a CG image at the origin of the sound with the help of a head-mounted display. The results showed that the energy-sum-constant method was appropriate for the parametric loudspeakers and that the accuracy of the perceived sound was adequate for the full MR system.
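The two interpolation laws being compared can be sketched as crossfade gain curves between a pair of loudspeakers. This is the generic formulation of the two laws, not the paper's exact implementation:

```python
import math

def amplitude_sum_constant(t):
    """Linear crossfade for t in [0, 1]: the gains always sum to 1 (g1 + g2 = 1)."""
    return 1.0 - t, t

def energy_sum_constant(t):
    """Constant-power crossfade for t in [0, 1]: the squared gains always
    sum to 1 (g1**2 + g2**2 = 1)."""
    theta = t * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

# At the midpoint, the constant-energy law keeps the summed energy at 1,
# while the linear law lets it dip to 0.5 (a perceived loudness dip).
g1, g2 = energy_sum_constant(0.5)
a1, a2 = amplitude_sum_constant(0.5)
```

The paper's finding that the energy-sum-constant method suits parametric loudspeakers corresponds to preferring the constant-power curve, which avoids a loudness dip between speakers.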
This paper proposes a new force display for leg movements that applies pressure to the distal lower leg. Nine subjects evaluated the weight sensation when pressure was applied to their distal lower leg. Muscle electrical activity was measured during a leg-lift motion and compared with that measured when the lower leg was pulled by a weight. The results show that the weight sensation and the activity of the biceps femoris muscle were proportional to the pressure on the distal lower leg, and that applying pressure to the distal lower leg and pulling the lower leg produced similar changes in muscle activity.
This paper introduces a method for merging the real and virtual worlds in a mirror, and experiments evaluating depth perception from motion parallax in the mirror world. Recently, the functions of real mirrors have been enhanced by Mixed-Reality techniques. To merge the real and virtual worlds, conventional systems use a video monitor and a video camera as a metaphor for a mirror; it is therefore difficult for them to reproduce the motion parallax of a real mirror in the Mixed-Reality world. Our method can reproduce this motion parallax, so a user can understand the 3D position of mirror images. Motion parallax is well known as one of the important cues for understanding the depth of an observed 3D object. However, depth perception from motion parallax in a front-back-reversed space, such as a real mirror world, has not been clarified. The results of our subjective evaluations reveal that motion parallax improves depth perception even in a Mixed-Reality mirror world.
Self-assembly is a process in which components autonomously organize into a structure without external direction. To develop an artificial self-assembly system, an understanding of the interaction of shape and pattern is important. In this research, we first designed a self-assembly system with components that have magnetic and concave-convex patterns. Second, we conducted experiments on the interaction of shape and pattern in that system, applying three different shapes or patterns of component and environment. The results show a relatively strong correlation between assembly time and the interaction degree we defined. Furthermore, the results suggest that the more circular the shapes and patterns of the component and the environment are, the shorter the assembly time becomes.
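The reported relationship is a correlation between measured assembly times and the authors' interaction degree. As a sketch only, a Pearson correlation over hypothetical measurements (the data below are invented for illustration, not taken from the paper) would look like:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical trials: a higher interaction degree giving a shorter assembly time
# would show up as a strong negative correlation.
interaction_degree = [0.2, 0.4, 0.6, 0.8]
assembly_time_s = [120.0, 90.0, 70.0, 40.0]
r = pearson(interaction_degree, assembly_time_s)
```

A value of r near -1 or +1 would correspond to the "relatively strong correlation" the experiments report.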