Insects are capable of extremely rapid collision avoidance behaviors with a minimum of processing overhead. These features make them attractive models for robotics. To implement insect strategies in mobile robots or cars, a thorough evaluation of these systems is necessary. We developed a closed-loop experimental system for analyzing insect collision avoidance behavior in virtual environments. In the current implementation, a tethered female cricket walks on a floating sphere, and the rotations of the sphere are translated into movements of a “virtual cricket” in a computer-generated virtual space. Visual information, including obstacle and background patterns in the virtual space, is then fed back to the tethered cricket as visual stimuli projected onto a screen in front of it. To induce reliably reproducible straight-line walking, we presented the male calling song, which induces positive phonotaxis in female crickets. We demonstrated that the tethered female cricket displayed collision avoidance behavior in response to visual stimuli during positive phonotaxis. Using this system, we investigated the key stimulus that triggers collision avoidance in two behavioral contexts: a cricket approaching a static object, and an object moving towards a quiescent cricket. The results indicate that crickets used a certain threshold of image size (visual angle) of the projected object as the key stimulus. Furthermore, we found that the threshold depended on the behavioral context: quiescent crickets started avoidance farther from the approaching object than crickets walking towards a static object did. We conclude that behavioral context is an important factor in decision making. With our closed-loop system for behavioral analysis, we can systematically extract the conditions under which optimal behavioral performance is obtained. This will be an important step in the design of sensory processors for robots.
We developed a two-wheeled insect-sized robot to evaluate odor source localization algorithms of the male silkmoth (Bombyx mori). For fast and stable acquisition of sensory information, we used atmospheric ions and ion sensors to simulate odorant capture by insect olfactory sensors. The robot has two ion sensors and a microcomputer for processing the sensor output, controlling the motors, and communicating with external devices. To minimize its size, we used two modified servomotors as actuators, which enable the robot to move in the same manner as a walking silkmoth. We used wheel encoders for feedback control of wheel position; the encoder output can also serve as a simulation of optic flow, which is necessary for speed and turn control in insect locomotion. Using the robot, we evaluated the performance of two odor source localization models: the first is based on the fixed programmed behavior of the silkmoth, and the second additionally includes course control dependent on the amplitude difference between the ion sensor outputs on opposite sides of the robot. The results indicate that the robot running the latter model was superior in ion plume tracking. Further evaluation of algorithms using our insect-sized robot will be an important tool for understanding the behavioral mechanisms of orientation and applying them to robotics.
A poly(vinyl chloride) (PVC) gel actuator shows great potential for use as an artificial muscle because of positive characteristics such as operation in air, large deformation, and light weight. A bending-type actuator using PVC gel was studied previously. To construct an artificial muscle, however, it is necessary to build an actuator that exhibits contraction-type deformation. In this study, we propose an electrode arrangement for the PVC gel actuator, investigate the characteristics of the proposed actuator experimentally, and show its effectiveness as an artificial muscle. The proposed actuator has a layered structure using mesh-type electrodes. The experimental results showed that the contraction rate of the actuator was about 13%, the response rate was 7[Hz], and the output stress was about 3[kPa]. These characteristics show the PVC gel actuator to be very effective for constructing an artificial muscle.
We fabricated a transparent arteriole membrane model by grayscale lithography. The fabrication method uses a wax and polyvinyl alcohol mixture as a novel molding material for the sacrificial model. Our objective was to complement existing surgical simulators for the practice and rehearsal of medical treatments. Since block vessel models cannot recreate the moderate compliance generally seen in real blood vessels, we propose a method for fabricating a transparent membranous arteriole model with a circular cross-section less than 500[μm] in diameter. Here, we describe the fabrication of our model and the evaluation results for our new molding material.
A low-invasive method to record neural activity is required for safe and practical brain-machine interfaces (BMI). BMIs are expected to be used to reintegrate motor functions of physically disabled persons; however, conventional invasive methods require electrodes inside the dura mater. In this study, the authors used epidural electrodes, which are located between the skull and the dura mater, to record rat neural activity for low-invasive BMI. The signals were analyzed using the short-time Fourier transform, and the power spectra were classified into rat behavioral conditions by an artificial neural network and a support vector machine. The accuracy was approximately 60∼85% in classifying two behavioral conditions, depending on the tested electrode locations and frequency bands. The results indicate the feasibility of low-invasive BMI using epidural electrodes.
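The classification pipeline above can be sketched as follows. This is a minimal stand-in, not the authors' implementation: the synthetic signals, sampling rate, frequency bands, and the nearest-centroid classifier (used here in place of the paper's neural network and support vector machine) are all assumptions for illustration.

```python
import numpy as np

def band_powers(signal, fs, window=256, hop=128, bands=((4, 8), (8, 13), (13, 30))):
    """Short-time Fourier transform, then mean power per frequency band."""
    win = np.hanning(window)
    frames = [signal[i:i + window] * win
              for i in range(0, len(signal) - window + 1, hop)]
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(window, 1.0 / fs)
    return np.array([spectra[:, (freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

# Synthetic stand-ins for two behavioral conditions: different dominant rhythms.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
resting = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)   # theta-dominant
moving = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(t.size)   # beta-dominant

# Nearest-centroid classification as a lightweight stand-in for the SVM/ANN.
centroids = {"rest": band_powers(resting, fs), "move": band_powers(moving, fs)}

def classify(signal):
    f = band_powers(signal, fs)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))
```

Nearest-centroid on band powers is far weaker than a trained SVM, but the feature extraction (windowed FFT, band-power averaging) has the same shape as the pipeline described in the abstract.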
Imaging optics is essential for robot vision, since it projects the image of a remote object onto the imager. Dynamic control of imaging optics is particularly desirable because it enables vision to measure new information, such as scene depth, that a single camera with static optics cannot obtain. Conventional imaging optics, however, takes around one second to change its optical properties, so the measurement frequency of sensing methods using dynamic control of optics has been limited by this slow response. In this paper, a high-speed focusing vision system, composed of high-speed vision and high-speed imaging optics based on a Dynamorph Lens (DML), is proposed. The DML is a liquid variable-focus lens that changes focus by morphing a liquid-liquid interface on a millisecond timescale. Since both the high-speed vision and the high-speed imaging optics have millisecond response times, the system can change its focus and zoom ratio almost every frame. Using a developed prototype, focus switching within 2[ms] was demonstrated. High-speed autofocusing in 15.8[ms], achieved by capturing and processing 14 images at different focal positions, and continuous focusing on a quickly moving object by switching the focal position every 5[ms], were also successfully demonstrated, validating the high-speed focusing vision.
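The autofocusing step, capturing one frame per focal position and selecting the sharpest, can be sketched as below. The Laplacian-variance focus measure, the simulated blur model, and the chosen best-focus index are assumptions for illustration, not details of the DML prototype.

```python
import numpy as np

def sharpness(image):
    """Variance of the discrete Laplacian: high for in-focus (sharp) images."""
    lap = (-4 * image
           + np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1))
    return lap.var()

def autofocus(capture, n_positions=14):
    """Capture one frame per focal position and return the sharpest index."""
    scores = [sharpness(capture(i)) for i in range(n_positions)]
    return int(np.argmax(scores))

# Simulated camera: a checkerboard blurred more the farther the lens is
# from an arbitrarily chosen best focal position (index 9).
rng = np.random.default_rng(0)
scene = (np.indices((64, 64)).sum(axis=0) // 8 % 2).astype(float)

def capture(pos, best=9):
    blur = abs(pos - best) + 1                      # defocus grows with distance
    k = np.ones(blur) / blur
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, scene)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img + 0.01 * rng.standard_normal(img.shape)
```

With a 15.8[ms] budget for 14 frames, each capture-plus-sharpness evaluation must complete in roughly a millisecond, which is why both the lens and the vision chip need millisecond response times.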
We present new features, “Color Cubic Higher-order Auto-Correlation (Color CHLAC)” features, for recognizing objects in the real world versatilely and robustly. The new features satisfy the requirements for object exploration in a three-dimensional map: to search for and retrieve objects in such a map, features should capture the co-occurrence of textures and shapes, be robust to partial observations and noise, adapt to widespread environments, scale well, and be invariant to many transformations. We conducted object recognition experiments in both simulation and a real environment, with objects of many kinds of shapes and textures, and showed that the proposed features achieve high recognition accuracy in both settings.
Most vision-based SLAM systems utilize corner-like features and may be unstable in non-textured environments, where only a few corner-like features can be extracted. To cope with this problem, we employ edge points to perform SLAM with a stereo camera. Edge-point-based SLAM is applicable to non-textured environments, since plenty of edge points can be obtained even from a small number of lines. The proposed method robustly estimates camera poses and builds detailed 3D maps by aligning edge points between frames using the ICP algorithm. In indoor and outdoor experiments, the method successfully built detailed 3D maps.
We propose a view-based indoor/outdoor navigation method as an extension of view-sequence navigation. The original view-sequence navigation method uses block matching with normalized correlation for localization, which is sensitive to local illumination changes, and has therefore only been applied to indoor environments. In this paper, we adopt accumulated block matching to improve robustness against locally changing illumination: a template is divided into a set of small patches, and each patch is matched individually. We also propose localization criteria that help a robot decide its motion. Our experimental results demonstrate that the proposed methods can be applied to both indoor and outdoor environments.
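Accumulated block matching can be sketched as follows. The patch size, the synthetic scene, and the simulated illumination change (a gain and offset on half of the image) are assumptions for illustration; per-patch zero-mean normalized correlation is what makes the accumulated score tolerant of such local changes.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def accumulated_match(image, template, patch=8):
    """Slide the template over the image; score = sum of per-patch NCCs."""
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = image[y:y + h, x:x + w]
            score = sum(ncc(window[i:i + patch, j:j + patch],
                            template[i:i + patch, j:j + patch])
                        for i in range(0, h, patch)
                        for j in range(0, w, patch))
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

rng = np.random.default_rng(3)
scene = rng.random((32, 32))
template = scene[5:21, 8:24].copy()      # 16x16 view memorized earlier
lit = scene.copy()
lit[:, 16:] = 0.5 * lit[:, 16:] + 0.4    # local illumination change on one side
```

Each 8x8 patch sees either the original or a uniformly re-lit region, and zero-mean NCC is invariant to a per-patch gain and offset, so the accumulated score still peaks at the true template location even though half the view changed brightness.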
This paper describes a novel sound source separation method for a robot that needs to cope with dynamically changing noises in the real world. Geometric Source Separation (GSS) is a promising method because it offers high separation performance without requiring a high computational cost. However, GSS has several issues when applied to real-world applications, such as robot audition systems used in dynamically changing environments. To improve performance in such environments, we propose two effective techniques. One is Adaptive Step-size control (AS), which adaptively sets the step size to the optimum value. The other is Optima-Controlled Recursive Average, which improves the precision of the estimated separation matrix and thus achieves high separation performance. We evaluated GSS with and without the proposed methods using an 8-channel microphone array embedded in Honda ASIMO. Experimental results showed that the proposed methods improved GSS performance in dynamically changing environments.
This paper presents a new method based on independent component analysis (ICA) for enhancing a target source and suppressing other interfering sound sources, assuming that the latter are known. In a reverberant environment, the method provides a barge-in-capable robot audition system; that is, the user can talk to the robot at any time, even while the robot is speaking. Our method separates and dereverberates the user's speech and the robot's speech using Multiple Input ICA. The critical issue for real-time processing is to reduce the computational complexity of Multiple Input ICA to linear order in the reverberation time, which had not been achieved before. We attain this by exploiting the independence between late observed signals and late speech signals. Experimental results show that 1) the computational complexity of our method is lower than that of the naïve Multiple Input ICA method, and 2) our method improves the word correctness of automatic speech recognition under barge-in and reverberant conditions, by at most 40 points for a reverberation time of 240[ms] and 30 points for 670[ms].
Human-robot interaction through music in real environments is essential for robots, because music makes interaction with people more enjoyable. To deal with real music signals using the robot's own ears, we propose a beat-tracking algorithm for a robot based on semi-blind independent component analysis (SB-ICA) and spectro-temporal pattern matching (STPM). SB-ICA suppresses self-generated sounds such as singing or scatting, which heavily affect beat tracking due to their periodicity. STPM provides quick adaptation to beat changes because it can use a shorter matching window than conventional beat-tracking methods based on autocorrelation functions. Based on the proposed beat-tracking method, we developed a music robot that steps, sings, and scats in time with musical beats. The experimental results showed highly noise-robust beat tracking even while the robot was singing or scatting, and quick adaptation to beat changes, such as human clapping whose tempo changes continuously.
This paper proposes a continuous vowel imitation system that explains the process of phoneme acquisition by infants from a dynamical systems perspective. Almost all existing models of this process have dealt with discrete phoneme sequences. Human infants, however, have no innate knowledge of phonemes; they perceive speech sounds as continuous acoustic signals. The imitation target of this study is therefore continuous acoustic signals containing unknown numbers and kinds of phonemes. The key ideas of the model are (1) the use of a physical vocal tract model, the Maeda model, to embody the motor theory of speech perception, (2) the use of a dynamical system, the Recurrent Neural Network with Parametric Bias (RNNPB), trained with both the dynamics of the acoustic signals and the articulatory movements of the Maeda model, and (3) a method for segmenting a temporal sequence using the prediction error of the RNNPB model. Our experiments demonstrated the following results: (a) self-organization of the vowel structure into attractors of the RNNPB model, (b) improvement of vowel imitation using movements of the Maeda model, and (c) generation of clear vowels based on a babbling process trained with a few random utterances. These results suggest that our model reflects the process of phoneme acquisition.
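Idea (3), segmenting a continuous stream wherever the learned model's prediction fails, can be sketched with a much simpler predictor. The one-step linear extrapolator below is a stand-in for the RNNPB (which predicts from learned dynamics), and the piecewise-linear trajectory and threshold are invented for illustration.

```python
import numpy as np

def segment_by_prediction_error(x, threshold):
    """Return indices where one-step prediction fails, i.e. segment boundaries.

    pred[t] = 2*x[t-1] - x[t-2] (linear extrapolation) predicts perfectly
    inside a linear regime, so large errors mark transitions between regimes.
    """
    pred = 2 * x[1:-1] - x[:-2]          # prediction for x[2:]
    err = np.abs(x[2:] - pred)
    return [int(i) + 2 for i in np.flatnonzero(err > threshold)]

# A piecewise-linear "articulatory" trajectory: ramp up, plateau, ramp down.
x = np.concatenate([np.linspace(0, 1, 50),
                    np.full(30, 1.0),
                    np.linspace(1, 0, 40)])
```

`segment_by_prediction_error(x, 0.01)` flags the first sample that deviates from each running regime (here near indices 50 and 80), splitting the stream into three segments without any prior knowledge of the segment count, the analogue of discovering phoneme boundaries from RNNPB prediction error.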
In this paper, we propose a method for the simultaneous learning of multiple modules for joint attention: gaze-driven attention and word-driven attention. Inspired by child language acquisition, the mutual exclusivity bias is utilized for mutually facilitative learning in both an intra- and inter-module manner by extending a modified Hebbian learning rule. Through experiments on human-robot interaction and computer simulations, we show that the proposed method enables mutually facilitative learning of a gaze-following mapping and a label-to-object mapping, by which the learner performs multimodal joint attention with its caregiver. Finally, through a computer simulation resembling mother-infant interaction, we argue for the proposed learning mechanism as a constructivist model of infant cognitive development.