In this paper, we propose a fast motion planning method for a 6-DOF manipulator based on a maze-searching algorithm. The proposed method utilizes the CPFS (Closest Point First Search) algorithm, which is similar to the Bug2 algorithm proposed by Lumelsky. We have modified the CPFS algorithm with a FAP (Footprint Avoidance Procedure) in order to guarantee that the planning does not fall into deadlock. At the beginning of the planning, the path moves straight from the initial configuration of the 6-DOF robot toward the goal configuration. When the path meets a colliding configuration, it is generated along the boundary between the free and colliding configurations. Using computer simulation, we evaluate the proposed method in terms of computation time. As a result, the proposed method can generate a path roughly ten times faster than the RRT method.
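The straight-line-then-boundary-following idea behind CPFS can be sketched on a 2D grid. The following is a minimal, illustrative stand-in only: the actual method runs in the 6-DOF configuration space and relies on the FAP for deadlock avoidance, neither of which is reproduced here, and all names are hypothetical.

```python
def bug2_grid(start, goal, blocked):
    """Bug2-style sketch: step straight toward the goal; when the next
    cell collides, follow the obstacle boundary via free, unvisited
    neighbours. `blocked` is a set of (x, y) obstacle cells."""
    def step_toward(p, q):
        # one grid step (8-connected) in the direction of q
        return (p[0] + (q[0] > p[0]) - (q[0] < p[0]),
                p[1] + (q[1] > p[1]) - (q[1] < p[1]))

    path, pos = [start], start
    while pos != goal:
        nxt = step_toward(pos, goal)
        if nxt not in blocked:
            pos = nxt
        else:
            # crude boundary following: pick a free 4-neighbour not yet visited
            for d in ((1, 0), (0, 1), (-1, 0), (0, -1)):
                cand = (pos[0] + d[0], pos[1] + d[1])
                if cand not in blocked and cand not in path:
                    pos = cand
                    break
            else:
                return None  # stuck (the paper's FAP is designed to prevent this)
        path.append(pos)
    return path
```

A run on a small grid with one obstacle cell detours around it and reaches the goal.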
This paper presents a novel approach to manipulability for grasping systems. We present a new manipulability measure that evaluates how easily the robot can manipulate the grasped object while simultaneously maintaining a stable grasp. For this purpose, we use the operation range of the actuator attached to each joint of the robot, which gives the generable joint torque and velocity and their relation (between the generated torque/velocity and the actually usable velocity/torque). While we introduced a manipulability measure using the operation range in our previous paper, it applied only to a limited class of systems because of its large computational cost, and it could evaluate neither the whole space of object velocities nor the whole space of external wrenches. In this paper, we propose new manipulability measures that can be derived simply and that evaluate the whole space of object velocities while taking the effect of the whole space of external wrenches into consideration.
Recently, various autonomous mobile robots have been developed for practical use. For robots and humans to coexist in real environments, safety considerations are very important. In particular, regions in which the maximum velocity of a mobile robot is limited should be taken into account for safety. In this paper, we propose path planning and trajectory generation methods for a mobile robot that moves in an environment with predetermined velocity constraints. In order to demonstrate the validity of the proposed methods, numerical simulations and experiments have been carried out.
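One common way to generate a trajectory under region-dependent velocity limits is a forward-backward pass over the path. The sketch below is an illustrative simplification, not the paper's method: segment lengths `ds`, per-region caps `v_cap`, and the acceleration bound `a` are hypothetical parameters.

```python
import math

def velocity_profile(ds, v_cap, a):
    """Waypoint velocities along a 1D path split into segments of length
    ds[i], each with its own speed cap v_cap[i] (e.g. a safety region),
    under a uniform acceleration bound a. Starts and stops at rest."""
    n = len(ds)
    v = [0.0] * (n + 1)
    # forward pass: never accelerate faster than a, never exceed the region cap
    for i in range(n):
        v[i + 1] = min(v_cap[i], math.sqrt(v[i] ** 2 + 2 * a * ds[i]))
    # backward pass: decelerate in time to stop at the end
    v[n] = 0.0
    for i in range(n - 1, -1, -1):
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * a * ds[i]))
    return v
```

For example, with a slow middle region (`v_cap = [2.0, 0.5, 2.0]`) the profile dips to 0.5 at the region boundary and returns to zero at the goal.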
Mapping is the activity of making a useful description of an environment. Not only geometric information such as free space but also object placements are important if the map is used for human-robot communication. We call such map making environment information summarization, because how to summarize may change depending on the goal of the mapping and the context. Environment information summarization usually includes searching for specified objects in the environment and may sometimes have to be finished within a limited time. It is, therefore, crucial to make a good observation plan for efficient summarization. We develop an observation planning method under a time constraint. Experimental results using a vision-based humanoid robot show the effectiveness of the proposed planning method.
In a previous paper, we proposed feature value amplitude control for “visual walking” in a biped walking robot. This method realized walking that tracks a rotating target under visual servoing. The biped robot was able to track rotational and translational targets; however, convergence to the reference value took a long time. In this paper, we propose variable gain control to improve the responsiveness of the rotation control. We simulated visual walking toward a rotational and translational target with the proposed control applied. As a result, the control improved the responsiveness of the feature value amplitude while maintaining accuracy and stability. It also improved the responsiveness of the translation and rotation of the robot body.
The final goal of this work is to develop functional rubber sheets with micro rubber structures, such as friction-free, adhesive, and impact-absorbing rubbers. We report a new micro rubber 4-legged structure that realizes miniaturization and integration to achieve very-low-friction rubber. First, flexible passive walking by the 4-legged structure is designed and analyzed. Then, we apply this mechanism to a rubber structural sheet with 120 passive legs. The 120-legged sheet is analyzed by nonlinear FEM, fabricated by a micro rubber molding process, and tested. In various experiments, multi-legged passive walking by the micro rubber structural sheet is realized successfully.
A passive dynamic walking robot can walk down a gentle slope naturally, driven only by potential energy, without any actuator or controller. It is therefore thought that a principle of walking exists in this phenomenon. Previous research has confirmed passive dynamic walking for two and four legs. However, it seems important to study passive dynamic walking with more than six legs as well, because many creatures have more than six legs. In this research, a passive dynamic walking robot with more than six legs was analyzed in a simulator. As a result, it was confirmed that passive dynamic walking could be achieved even with 20 legs. Furthermore, the gait could be changed by changing the body structure. In addition, differences in stability depending on the number of legs were observed, and these differences disappeared when the position of the pitch rotation parts was changed. Moreover, the stability was very high, so the robot could walk on contoured ground.
Noise-robust Automatic Speech Recognition (ASR) is essential for robots that are expected to communicate with humans in daily environments. In such environments, Voice Activity Detection (VAD) performance becomes poor, and ASR performance deteriorates due to noise and VAD failures. To cope with these problems, humans are said to improve speech recognition performance by using visual information such as lip reading. We therefore propose a two-layered audio-visual integration framework for VAD and ASR. The framework includes three crucial methods. The first is Audio-Visual Voice Activity Detection (AV-VAD) based on a Bayesian network. The second is a new lip-related visual feature that is robust against visual noise. The last is microphone array processing to improve the Signal-to-Noise Ratio (SNR) of the input signal. We implemented a prototype audio-visual speech recognition system based on the proposed framework using HARK, our robot audition system. Through voice activity detection and speech recognition experiments, we showed the effectiveness of audio-visual integration, microphone array processing, and their combination for VAD and ASR. Preliminary results show that our system improves ASR performance by 20 and 9.7 points with and without microphone array processing, respectively, and also improves robustness against several auditory/visual noise conditions.
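The benefit of Bayesian audio-visual fusion for VAD can be illustrated with a naive-Bayes stand-in for the paper's Bayesian network: an ambiguous audio likelihood can be disambiguated by a lip-movement likelihood. All likelihood values below are illustrative, not taken from the paper or from HARK.

```python
def av_vad_posterior(p_speech, p_a_s, p_a_n, p_v_s, p_v_n):
    """P(speech | audio, visual) under a naive-Bayes assumption:
    p_a_s/p_a_n are audio-feature likelihoods given speech/non-speech,
    p_v_s/p_v_n the same for the visual (lip) feature."""
    num = p_speech * p_a_s * p_v_s
    den = num + (1.0 - p_speech) * p_a_n * p_v_n
    return num / den

# noisy audio alone is nearly ambiguous (uninformative visual likelihoods)...
audio_only = av_vad_posterior(0.5, 0.6, 0.4, 1.0, 1.0)
# ...but clear lip movement tips the decision toward "speech"
audio_visual = av_vad_posterior(0.5, 0.6, 0.4, 0.9, 0.2)
```

Here `audio_only` is 0.6, while adding the visual evidence raises the posterior well above it.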
This paper proposes a method that generates motions and utterances in an object manipulation dialogue task. The proposed method integrates belief modules for speech, vision, and motions into a probabilistic framework so that a user's utterances can be understood based on multimodal information. Responses to the utterances are optimized based on an integrated confidence measure function for the integrated belief modules. Bayesian logistic regression is used for the learning of the confidence measure function. The experimental results revealed that the proposed method reduced the failure rate from 12% down to 2.6% while the rejection rate was less than 24%.
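The confidence-measure learning can be sketched with a plain (non-Bayesian) logistic regression as a simplified stand-in for the paper's Bayesian logistic regression: a scalar integrated belief score is mapped to a probability that the interpretation is correct, and responses below a threshold are rejected. The data and threshold below are toy values.

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit w, b of sigmoid(w*x + b) to binary labels by stochastic
    gradient ascent on the log-likelihood (a stand-in for the paper's
    Bayesian logistic regression)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def confidence(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# toy data: higher integrated belief scores tend to be correct interpretations
scores = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
correct = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(scores, correct)
# a response is executed when confidence exceeds a threshold, else rejected
```

After training, high scores map to high confidence and low scores to low confidence, which is the behavior the accept/reject decision relies on.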
This paper discusses human skills that enable rapid adaptation to a changing environment, as seen when a table tennis player hits an incoming ball, and presents how to transfer them to a robot. We consider that human skills can be classified into motor skills and cognitive skills. As for motor skills, a function involving precise movement of the limbs with the intent to perform a specific act (hitting a ball), we extract motor primitives from human performance data. As for cognitive skills, a function involving a meaningful response to external stimuli, we find the correlation of the stimulus (the ball's state) with the timing of the response (starting to swing the racket). We also present table tennis rallies with a human opponent performed by a robot to which these skills were transferred.
Humans maintain high correlation among their joint motions during locomotion, and this correlation is considered to play an important role in the control of the joints and muscles, which constitute a redundant system. Analysis of the correlation of recorded joint movements extracts a few simultaneously activated segmental coordination patterns, and the structure of this intersegmental coordination is attracting attention for its expected relationship with control strategy. This research proposes a method to quantitatively evaluate the intersegmental coordination as a combination of the Jacobians of a few characteristic points. By applying the proposed method to the motion of 7 joints, including both limbs and the trunk, this research showed that the translation of the COM and the rotation of the ankles can represent the whole-body movement.
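The standard first step in such analyses, extracting a few dominant coordination patterns from joint-angle trajectories by principal component analysis, can be sketched as follows. This is an illustrative toy on synthetic data (a single shared gait-phase signal driving 7 joints); the paper's specific contribution, evaluating the coordination through Jacobians of characteristic points, is not reproduced.

```python
import numpy as np

# synthetic "locomotion" data: 7 joint angles driven by one shared phase
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
phase = np.sin(t)                                   # shared gait phase
weights = np.array([1.0, 0.8, -0.6, 0.4, -0.9, 0.5, 0.3])
angles = np.outer(phase, weights) \
    + 0.01 * rng.standard_normal((200, 7))          # small measurement noise

# PCA via SVD of the centered trajectories
X = angles - angles.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)                 # variance per component
# a single coordination pattern dominates, reflecting the shared phase
```

Because all joints follow one underlying signal, the first principal component explains almost all the variance, which is exactly the kind of low-dimensional structure the intersegmental-coordination analysis looks for.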
One of the important features in understanding the dynamics of humanoid robots is the absence of a link fixed in the inertial frame. When the robot receives no external reaction force through interaction with the environment, the generalized force actuating the 6 DOFs of the base link of the robot is always zero. This constraint is related to the conservation of angular momentum, which is a nonholonomic constraint; thus the motion generated under this constraint contains properties of the dynamics of the system. In this paper, we propose an identification method based on the floating-base dynamics for the case in which the system receives no external force. Using this method, inertial parameters can be identified without force measurement, from motion data alone. This paper also shows the identifiability of the proposed method for a floating open kinematic chain that can actuate all its joints and change the initial values of its linear and angular momentum. The method has been tested on two examples: a simple chain consisting of two links, and human body dynamics.
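The core idea, that with zero external force the floating-base regressor annihilates the inertial parameter vector, so the parameters can be recovered up to scale from the regressor's null space, can be shown on a toy system. Below, two point masses exchanging only internal forces stand in for the floating-base chain; this is an illustrative simplification of the paper's method, with hypothetical values.

```python
import numpy as np

# ground-truth masses, to be recovered up to scale without force sensing
m1, m2 = 2.0, 3.0

# each sample gives one row of the "regressor": velocity changes (dv1, dv2)
# satisfying momentum conservation m1*dv1 + m2*dv2 = 0 (internal forces only)
rng = np.random.default_rng(1)
rows = []
for _ in range(20):
    dv1 = rng.standard_normal()
    dv2 = -m1 * dv1 / m2
    rows.append([dv1, dv2])
A = np.array(rows)

# the parameter vector (m1, m2) lies in the null space of A: A @ (m1, m2) = 0,
# so the smallest right singular vector recovers it up to scale
_, _, Vt = np.linalg.svd(A)
params = Vt[-1]
ratio = params[0] / params[1]   # mass ratio m1/m2, identifiable from motion alone
```

Only motion data (the velocity changes) enter the estimate; the mass ratio comes out as 2/3, matching the ground truth, which mirrors how the paper identifies inertial parameters without force measurement.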
Recent progress in ultrasound imaging technology has enabled observations of fetal intrauterine behavior and supported the perspective of intrauterine learning. Understanding fetal behavior in the uterus is important for the medical care of prenatal infants, because interventions like “nesting” in the NICU (Neonatal Intensive Care Unit) are based on the perspective of intrauterine learning. However, fetal behavior is not sufficiently explained by this perspective. In this study, we have proposed a hypothesis in which two fetal behaviors, isolated leg/arm movements and hand/face contact, emerge from exploration by neural oscillators in the spinal cord and self-organization of the nervous system through tactile information in the uterus. To examine the hypothesis, we conducted computer experiments with a simple musculoskeletal model and a whole-body fetal musculoskeletal model with tactile sensation in the uterus. The experiments with the simple model confirmed that tactile information induces reflexive behaviors, and the fetal model with a human-like tactile distribution exhibited the two behaviors, similar to the corresponding real fetal behaviors. Our experiments indicate that intrauterine learning is possibly a core concept for fetal motor development.
We propose a human-in-the-loop learning architecture that addresses the question of how learning can be achieved for tightly coupled physical interactions between a learning agent and a human partner. In recent years, the application domains of humanoid robots have continued to expand, moving deeper into the realm of everyday life. Recent robotic developments are thus increasingly targeted at domestic environments and assistive tasks, in which human-robot interaction is indispensable. In order for humans and robots to engage in direct physical interaction, we employ a flexible-joint humanoid robot driven by pneumatic actuators. This paper presents an example of such human-in-the-loop learning scenarios and proposes a computationally efficient learning algorithm for this purpose. The efficiency of this method is evaluated in an experiment in which human caregivers help an android robot to stand up.
A human partner returns a specific response after a robot performs a specific social cue. We define such pairs as interaction rules. The partner and robot continuously search for and co-create interaction rules, inspired by social games played between an infant and a caregiver. We propose a scheme composed of “making a response prediction,” “confirming the response prediction,” and “habituation/dishabituation to the response prediction,” and developed a robot model composed of response predictability and response habituation. The robot generates actions, observes the partner's responses, and learns to predict them. It identifies relationships between its actions and the responses, and generates actions to confirm specific responses from the partner. As a result, the interaction is reciprocated. After the robot is habituated to the responses, it inhibits the confirmation and generates other actions, which creates a chance for other rules to emerge. Based on this model, we conducted human-robot interaction experiments using a ball to investigate whether response habituation is needed, given that response prediction is required by the definition of interaction rules. As a result, analysis with a causality measure showed that appropriate response habituation supports interaction reciprocation. Various patterns of interaction emerged, such as passing the ball back and forth, rolling and catching, feint passing, and role-reversal feint passing. Response predictability increases when an interaction is reciprocated; response habituation then increases, and the dyad quits the reciprocated interaction and searches for another rule, indicating that the scheme and the model work.
Imitation is a very complicated function that requires a body mapping (a mapping from observed body motions to motor commands) that can discriminate between self-motions and those of others. The developmental mechanism of this sophisticated capability, and the order in which the required abilities arise, is poorly understood. In this article, we present a mechanism for the development of imitation through a simulation of infant-caregiver interaction. A model was created to acquire a body mapping, which is necessary for successful mutual imitation in infant-caregiver interaction, while discriminating self-motion from the other's motion. The ability to predict motions, and the time delay between performing a motion and observing any correlated motion, provide clues that assist the development of the body mapping. The simulation results show that the development of imitation capabilities depends on a predictability preference (a function of how an agent decides among the options of ‘what to imitate,’ given its ability to predict motions). In addition, the simulated infants in our system are able to develop the components of a healthy body mapping in order, that is, relating self-motion first, followed by an understanding of others' motions. This order of development emerges spontaneously without the need for any explicit mechanism or any partitioning of the interaction. These results suggest that this predictability preference is an important factor in infant development.