This paper proposes a method for designing the layers of the Subsumption Architecture as objects. In this architecture, layers communicate with each other through wires. In general, it is very difficult to design a connection of wires that enables a robot to behave as intended. The architecture assumes that bugs in the lower layers have been fixed, so that upper layers can be piled on top of them. However, the resulting complicated connection of wires, a so-called “spaghetti structure,” makes bug fixing difficult. If, on the other hand, we design the layers independently without wires, bug fixing becomes easier, but it is obviously difficult to make the robot behave as intended. We therefore propose a method for designing object-oriented modules and layered structures for the Subsumption Architecture. With this method, not only does bug fixing become easier, but the robot can also execute planned tasks. In this paper, we apply the method to navigation tasks and show simulation and experimental results.
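As a rough illustration of how wire-free layers could be realised as objects, the sketch below arbitrates among behaviour layers by priority instead of explicit wires, so that a higher-priority layer subsumes the ones below it. The layer classes, sensor keys, and the default behaviour are illustrative assumptions, not the paper's actual design.

```python
class Seek:
    """Higher-priority goal-directed layer (hypothetical example)."""
    def active(self, s):
        return s["goal_visible"]
    def command(self, s):
        return "approach"

class Avoid:
    """Lower-priority obstacle reflex layer (hypothetical example)."""
    def active(self, s):
        return s["obstacle"]
    def command(self, s):
        return "turn"

def arbitrate(layers, sensors):
    """Priority arbitration: the first (highest-priority) active layer
    subsumes the layers below it; if none is active, fall back to a
    default low-level behaviour."""
    for layer in layers:          # ordered highest priority first
        if layer.active(sensors):
            return layer.command(sensors)
    return "wander"
```

Because each layer is a self-contained object and the arbitration rule replaces the wires, a faulty layer can be debugged or replaced without touching the others.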
Dexterous dynamic actions will be very important for future humanoid robots supporting human tasks in unstructured environments. Such skills are difficult to achieve with the current standard control strategy for humanoid robots, which is based on asymptotic convergence to successive desired states. An alternative approach is to exploit the natural interaction dynamics between the body and the environment, and to navigate through multiple dynamics with the least control needed to robustly reach the goal state. As a first example of such a strategy, we propose and investigate a “Roll-and-Rise” motion. It is a fully dynamic whole-body task that includes underactuated motion, whose state trajectory cannot be solved for, and unpredictable perturbations due to complex contacts with the ground. First, the human strategy is analyzed through motion-capture experiments. The analysis suggests a non-uniform control strategy that focuses on sparse critical points in the global phase space while allowing deviations and trade-offs elsewhere. The dynamics of the Roll-and-Rise motion are then analyzed using simplified models and simulations to confirm these findings. Finally, experiments with a real adult-size humanoid robot are carried out successfully: the robot rises from a flat-lying posture to a crouching posture within 2 seconds. Analysis of the collected data shows that uniform measures such as overall trajectory difference do not separate successes from failures, whereas a localized measure, the energy input timing, does.
This paper discusses a fully decentralized algorithm for controlling the morphology of a modular robot, consisting of many identical modules, according to the environment encountered. A significant feature of our approach is that we explicitly exploit an “emergent phenomenon” stemming from the interplay between the control and mechanical systems in order to control the morphology in real time. To this end, we focus in particular on a “functional material” and on “mutual entrainment” among nonlinear oscillators; the former is used as the connection mechanism between the modules, and the latter serves as the core of the control mechanism for generating locomotion. Preliminary simulation results indicate that the proposed algorithm can induce “protoplasmic streaming,” which allows us to control the morphology of the modular robot in real time according to the situation without losing the coherence of the entire system.
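Mutual entrainment among coupled nonlinear oscillators can be illustrated with the classic Kuramoto phase model, in which oscillators with different natural frequencies synchronise through sinusoidal coupling. This is only a generic sketch of the entrainment phenomenon, not the oscillator model or coupling topology used in the paper.

```python
import numpy as np

def kuramoto(theta0, omega, K, dt, steps):
    """Mutual entrainment of coupled phase oscillators (Kuramoto model):
    d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    integrated with forward Euler."""
    theta = np.array(theta0, float)
    n = len(theta)
    for _ in range(steps):
        # coupling[i] = sum_j sin(theta_j - theta_i)
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + K * coupling / n)
    return theta

def order_parameter(theta):
    """|mean of e^{i*theta}|: near 0 for scattered phases,
    1 when fully synchronised."""
    return abs(np.exp(1j * theta).mean())
```

With coupling strength well above the spread of natural frequencies, initially scattered phases lock together, which is the kind of coherence that entrainment-based control relies on.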
This paper considers a method for estimating the contact point between a robot hand and an object using a force sensor with measurement noise. The estimation problem is formulated as a nonlinear constrained optimization problem, in which the cost function represents the estimation error with respect to the measurement noise and the constraints require that the contact point lie on the fingertip surface and that the fingertip force point inward. Although nonlinear constrained optimization problems can in general only be solved numerically, it is shown that the optimization problem for this estimation can be solved analytically. Experimental results demonstrate the effectiveness of the proposed method.
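For intuition, contact-point estimation from force/torque measurements admits a closed form in the noise-free case: for a frictional point contact, the measured moment satisfies tau = r x f, whose solutions form a line; intersecting that line with the fingertip surface and keeping the candidate where the force points inward picks out the contact point. The sketch below assumes a spherical fingertip centred at the sensor origin and noise-free measurements, so it is an illustrative simplification rather than the paper's noise-aware formulation.

```python
import numpy as np

def contact_point_sphere(f, tau, radius):
    """Estimate the contact point on a spherical fingertip of the given
    radius (centred at the force-sensor origin) from a measured force f
    and moment tau, assuming a point contact with tau = r x f.
    The general solution of tau = r x f is
        r = (f x tau)/|f|^2 + lam * f,
    and lam is fixed by |r| = radius; of the two roots, the physical one
    has the force pointing into the surface (f . r < 0)."""
    f = np.asarray(f, float)
    tau = np.asarray(tau, float)
    f2 = f.dot(f)
    p = np.cross(f, tau) / f2            # particular solution, perpendicular to f
    disc = (radius**2 - p.dot(p)) / f2   # from |p + lam*f|^2 = radius^2
    if disc < 0:
        raise ValueError("measurements inconsistent with the sphere")
    lam = np.sqrt(disc)
    for cand in (p + lam * f, p - lam * f):
        if f.dot(cand) < 0:              # inward-force constraint
            return cand
    return p                             # degenerate tangential force
```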
This paper presents a novel learning method for nonlinear mappings between spaces of arbitrary dimension. Unlike artificial neural networks, GMDH, and other methods, our method does not require complicated control parameters. Given a feasible error threshold and training samples, it automatically divides the objective mapping into partially linear mappings. Since the decomposed mappings are maintained in a binary tree, the linear mapping corresponding to an input is selected quickly. We call this method the Partially Linear Mapping tree (PaLM-tree). To estimate the most reliable linear mappings satisfying the feasible error criterion, we employ a split-and-merge strategy for the decomposition. Through experiments on function estimation, image segmentation, and camera calibration problems, we confirmed the advantages of PaLM-tree.
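The split phase of such a decomposition can be sketched as follows: fit a local affine map by least squares, and if the worst residual exceeds the error threshold, split the samples at the median of the widest input dimension, storing the pieces in a binary tree so lookup is a fast descent. This is a hypothetical minimal sketch; the merge phase and the paper's exact splitting rule are omitted.

```python
import numpy as np

def build_palm_tree(X, Y, eps, min_samples=4):
    """Recursively decompose a mapping from X (n,d) to Y (n,m) into
    locally affine pieces whose training residual is at most eps."""
    A = np.hstack([X, np.ones((len(X), 1))])    # affine design matrix
    W, *_ = np.linalg.lstsq(A, Y, rcond=None)   # local linear map
    err = np.abs(A @ W - Y).max()
    if err <= eps or len(X) <= min_samples:
        return {"leaf": True, "W": W}
    dim = np.ptp(X, axis=0).argmax()            # widest input dimension
    thr = np.median(X[:, dim])
    left = X[:, dim] <= thr
    if left.all() or not left.any():            # degenerate split
        return {"leaf": True, "W": W}
    return {"leaf": False, "dim": dim, "thr": thr,
            "lo": build_palm_tree(X[left], Y[left], eps, min_samples),
            "hi": build_palm_tree(X[~left], Y[~left], eps, min_samples)}

def palm_predict(tree, x):
    """Descend the binary tree to the local linear map and apply it."""
    while not tree["leaf"]:
        tree = tree["lo"] if x[tree["dim"]] <= tree["thr"] else tree["hi"]
    return np.append(x, 1.0) @ tree["W"]
```

The only control parameter exposed to the user is the error threshold itself, which mirrors the selling point of the method.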
Robot audition is a critical technology for creating intelligent robots operating in daily environments. To realize such a robot audition system, we have designed a missing-feature-theory-based interface between sound source separation and automatic speech recognition (ASR). In this interface, features distorted by speech separation are detected in the input speech as missing features. The detected missing features are masked during recognition to avoid severe deterioration of recognition performance. Using this interface, we developed a robot audition system that recognizes multiple simultaneous speakers. We also assessed its general applicability by implementing it on three different humanoids: Honda ASIMO, SIG2, and Replie of Kyoto University. Its general applicability was confirmed using three simultaneous utterances as a benchmark. With triphone acoustic models and a 200-word vocabulary, the average word-correct rates for three simultaneous utterances are 79.7%, 78.7%, and 82.7% for ASIMO, SIG2, and Replie, respectively.
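Masking during recognition can be illustrated by the marginalisation approach of missing-feature theory: the acoustic score is computed only over the feature dimensions judged reliable, so dimensions distorted by the separation stage cannot corrupt it. The toy diagonal-Gaussian scorer below illustrates the idea only; it is not the actual interface or the acoustic models used in the paper.

```python
import numpy as np

def masked_loglik(x, mask, mean, var):
    """Marginalisation-style missing-feature scoring: evaluate a
    diagonal-covariance Gaussian log-likelihood only on the feature
    dimensions marked reliable (mask == 1), ignoring the rest."""
    m = np.asarray(mask).astype(bool)
    d = x[m] - mean[m]
    return -0.5 * np.sum(np.log(2.0 * np.pi * var[m]) + d**2 / var[m])
```

Because a corrupted dimension is simply dropped from the sum, a gross distortion in it leaves the score of the reliable dimensions untouched.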
This paper studies real-time gait planning for a humanoid robot. By simultaneously planning the trajectories of the COG (Center of Gravity) and the ZMP (Zero Moment Point), fast and smooth changes of gait can be realized. A change of gait is realized by connecting newly calculated trajectories to the current ones. We propose two methods for connecting two trajectories, a real-time method and a quasi-real-time one, and show that a stable change of gait can be realized using the quasi-real-time method even when the change in step position is significant. The effectiveness of the proposed methods is confirmed by simulation and experiment.
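The COG–ZMP relation commonly used in such planning is the cart-table (linear inverted pendulum) model, p = x - (z_c/g) * x_ddot, with a constant COG height z_c. The sketch below computes the ZMP implied by a sampled COG trajectory along one horizontal axis; it illustrates this standard model only and is not the paper's planning algorithm.

```python
import numpy as np

def zmp_from_cog(x, dt, z_c, g=9.81):
    """Cart-table model: ZMP p = x - (z_c/g) * x_ddot, where x is a
    uniformly sampled COG trajectory (one horizontal axis) and the COG
    height z_c is assumed constant."""
    xdd = np.gradient(np.gradient(x, dt), dt)   # finite-difference accel.
    return x - (z_c / g) * xdd
```

Planning COG and ZMP simultaneously amounts to choosing x so that this p stays inside the support polygon while the trajectories connect smoothly.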
This paper presents a cooperative work system in which a human instructs a manipulator through hand gestures. The goal of our system is for the manipulator to work with humans in the same workspace according to hand-gesture instructions. The system is composed of the manipulator, a PC, and trinocular stereovision hardware. The trinocular stereovision hardware is used because the system has to recognize the position and posture of the hand in the three-dimensional workspace. The three-dimensional positions of the hand and the object are obtained from range images. A gesture is recognized based on the length and width of the hand, and we propose a method that divides the hand area into two blocks in order to recognize the gesture rapidly. Experimental results show the effectiveness of our system.
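A length-and-width gesture rule can be as simple as thresholding the hand's aspect ratio, since an extended hand is long relative to its width while a fist is roughly as wide as it is long. The gesture names, units, and threshold below are illustrative assumptions, not the classifier actually used in the paper.

```python
def classify_hand(length_mm, width_mm, open_ratio=1.6):
    """Toy length/width gesture rule (threshold is illustrative):
    classify the segmented hand area as an open hand or a fist by
    its length-to-width aspect ratio."""
    return "open" if length_mm / width_mm >= open_ratio else "fist"
```

Such a rule needs only two scalar measurements per frame, which is why shape-based gesture cues can be evaluated quickly enough for on-line instruction.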