Mobility aids are important for people with severe disabilities, especially children with severe cerebral palsy. In this article, an interactive human-robot learning system is applied to an electric powered wheelchair for a person with severe cerebral palsy. The user (the human) learns how to control the wheelchair, whose input signal is acceleration data from a sensor attached to the right foot, by feeling the wheelchair's movement. The robot (the wheelchair) interprets the user's imprecise foot movements using a self-organizing map. The effectiveness of the proposed interactive learning system was evaluated in experiments with 10 healthy subjects and a child with severe cerebral palsy.
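As a rough illustration of how a self-organizing map can cluster acceleration signals into commands, here is a minimal sketch. The node count, feature layout, gesture data, and neighborhood schedule are all assumptions of ours, not the authors' design:

```python
import random

random.seed(0)

class SOM1D:
    """Minimal one-dimensional self-organizing map (illustrative sketch)."""

    def __init__(self, n_nodes, dim, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_nodes)]

    def _dist2(self, a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def bmu(self, x):
        # best matching unit: node whose weight vector is closest to x
        return min(range(len(self.w)), key=lambda i: self._dist2(self.w[i], x))

    def train(self, samples, epochs=50, lr0=0.5, radius0=2.0):
        for e in range(epochs):
            lr = lr0 * (1 - e / epochs)                   # decaying learning rate
            radius = max(radius0 * (1 - e / epochs), 0.5) # shrinking neighborhood
            for x in samples:
                b = self.bmu(x)
                for i, wi in enumerate(self.w):
                    if abs(i - b) <= radius:              # hard neighborhood
                        for d in range(len(wi)):
                            wi[d] += lr * (x[d] - wi[d])

# Made-up 3-axis acceleration patterns for two foot gestures
forward = [[1.0 + 0.1 * random.random(), 0.0, 0.0] for _ in range(20)]
left = [[0.0, 1.0 + 0.1 * random.random(), 0.0] for _ in range(20)]
samples = forward + left
random.shuffle(samples)

som = SOM1D(n_nodes=4, dim=3)
som.train(samples)
# After training, the two gesture types should map to different nodes,
# which a controller could then associate with wheelchair commands.
```

A real system would of course train on the user's own recorded foot movements rather than synthetic clusters.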
In this research, we analyze formation control of a multi-robot system while considering the limited communication range of each robot. With limited communication range, it is important to control the communication network topology of a multi-robot system in order to maintain the network's connectivity. We formulate the multi-robot system as a hybrid dynamical system, with the communication constraints expressed as inequalities. We then study an algorithm for solving the mixed integer programming problem that is equivalent to the optimal formation control problem of the system. A computer simulation of the formation control of three robots is performed, in which both optimal control and receding horizon control are carried out to demonstrate the validity of our approach.
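To make the receding-horizon idea concrete, the following toy sketch (not the authors' formulation) places three robots on a line with integer control inputs, a chain-connectivity range constraint, and a one-step receding-horizon controller solved by exhaustive search over the integer inputs, a brute-force stand-in for a mixed integer program. The range, target formation, and cost are invented:

```python
from itertools import product

RANGE = 3           # assumed communication range
TARGET = [0, 2, 4]  # assumed goal formation (1-D positions)

def connected(xs, rng=RANGE):
    # chain connectivity: consecutive robots must stay within range
    return all(abs(a - b) <= rng for a, b in zip(xs, xs[1:]))

def cost(xs):
    # quadratic distance to the target formation
    return sum((x - t) ** 2 for x, t in zip(xs, TARGET))

def rh_step(xs):
    # one receding-horizon step: pick the feasible integer input
    # (-1, 0, or +1 per robot) minimizing the one-step cost
    best = None
    for u in product((-1, 0, 1), repeat=len(xs)):
        nxt = [x + du for x, du in zip(xs, u)]
        if connected(nxt) and (best is None or cost(nxt) < cost(best)):
            best = nxt
    return best

state = [0, 0, 0]
for _ in range(6):          # receding-horizon loop
    state = rh_step(state)
print(state)  # the robots reach the target formation without breaking connectivity
```

A real solver would optimize over a multi-step horizon with a branch-and-bound MIP routine instead of enumeration, but the closed-loop structure is the same.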
We have developed a kitchen in which a robot arm with a hand is incorporated. The robot was made to assist with kitchen work. Manipulating a variety of dishes is the main challenge for this purpose. To realize such manipulation, we developed a one-DOF hand equipped with multiple sensors. In this paper, we present a robot system that estimates the approximate position and shape of dishes using these sensors, based on sensor-based tracing. We also present a method of manipulating dishes without grasping, to deal with dishes in difficult configurations. In the experiments, we demonstrate that the robot clears away three basic types of dishes by picking them off a tray and placing them into a dishwasher.
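One plausible ingredient of position-and-shape estimation by tracing, sketched here under our own assumptions rather than the paper's pipeline: if a fingertip records three contact points on the rim of a round dish, the dish's center and radius follow from the circumscribed-circle formula. The contact coordinates below are made up:

```python
def circle_from_3pts(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), r

# Hypothetical rim contacts traced on a plate (coordinates in meters)
center, radius = circle_from_3pts((0.39, 0.10), (0.30, 0.19), (0.21, 0.10))
print(center, radius)  # recovered dish center and radius
```

With noisy contacts, a least-squares circle fit over many traced points would replace the exact three-point construction.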
Generating natural body gestures is one of the essential functions in human-robot interaction. Various approaches, such as motion-database methods and on-line planning, have been developed. However, these approaches cannot adapt to interruptions by the user, which occur frequently. We therefore propose a novel method for generating motions flexibly and immediately. The proposed method integrates a Motion Graph with the Probabilistic Roadmap Method: the former supports generation of human-like motion, while the latter enables rapid reaction to user-initiated interruptions. We conducted experiments to verify that the proposed method maintains the naturalness of body gestures, which we define based on key poses and velocity profiles of the original motion. In addition, we implemented the proposed method on the android robot Actroid-SIT as a human-robot interaction system. Through subject experiments, we confirmed that the proposed method makes human-robot interaction smoother and more durable.
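The motion-graph side of this idea can be sketched as follows (our own toy illustration, not the authors' implementation): motion clips are nodes, allowed transitions are edges, and when the user interrupts, the system searches for the shortest transition sequence from the current clip to a reaction clip. The clip names are invented:

```python
from collections import deque

# Hypothetical motion graph: clip name -> clips it may transition into
MOTION_GRAPH = {
    "idle": ["wave", "nod"],
    "wave": ["idle", "point"],
    "nod": ["idle"],
    "point": ["idle", "look_at_user"],
    "look_at_user": ["idle"],
}

def transition_path(graph, start, goal):
    # breadth-first search for the fewest-clip transition; a PRM would add
    # sampled intermediate poses as extra nodes, shortening reaction paths
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# User interrupts mid-nod; the robot must reach "look_at_user" naturally
path = transition_path(MOTION_GRAPH, "nod", "look_at_user")
print(path)
```

In the paper's setting the graph nodes carry captured human poses, so every edge on the returned path corresponds to a natural-looking transition.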
In this paper, we present an autonomous navigation system that traverses a complex urban area without prior environmental information. If very large-scale information about the route (e.g., scanned three-dimensional point clouds) were known in advance, autonomous navigation would be easy. However, if robots must travel long distances through areas beyond their control, very little information can be given beforehand. In the ''Tsukuba Challenge'', capturing any kind of information about the route beforehand and using it on the challenge day is permitted. Since an autonomous navigation system should be valid in realistic situations, we deliberately avoid using such prior information. We therefore propose an autonomous navigation system that uses only sensory data available in a real situation, e.g., positions from GPS, attitude from an IMU, visual images, and LRF data. In particular, visual images and LRF data are used only for on-line processing and are treated as general information. Road boundary estimation and speed control were successfully executed in the presence of many pedestrians, other robots, and variations in road shape. The effectiveness of the proposed system is demonstrated by the final result of the ''Tsukuba Challenge 2010''.
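As a hedged sketch of one ingredient of such a system, the speed-control part can be as simple as ramping the commanded velocity down as the nearest obstacle in a laser range finder scan gets closer. The thresholds and speeds below are invented for illustration and are not the paper's parameters:

```python
V_MAX = 1.0   # m/s, assumed cruise speed
D_STOP = 0.5  # m, stop if anything is closer than this
D_FREE = 2.0  # m, full speed if everything is farther than this

def speed_command(ranges):
    """Commanded forward speed given a list of LRF range readings (m)."""
    d = min(ranges)
    if d <= D_STOP:
        return 0.0
    if d >= D_FREE:
        return V_MAX
    # linear ramp between the stop and free-space thresholds
    return V_MAX * (d - D_STOP) / (D_FREE - D_STOP)

print(speed_command([3.0, 2.5, 4.0]))  # open road: cruise
print(speed_command([1.25, 3.0]))      # pedestrian ahead: slow down
print(speed_command([0.3, 3.0]))       # too close: stop
```

Road boundary estimation would feed a similar pipeline, constraining the steering rather than the speed.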
Recently, the development of support robots for daily life has become necessary because of the falling birthrate and the aging population. We have developed a meal assistance robot for disabled or elderly people who cannot eat by themselves. In this paper, a prototype system of a chopsticks-equipped meal assistance robot is introduced. Chopsticks allow a variety of food-handling movements and are familiar to people in Japan and other Asian countries. Grasp-force control of the chopsticks for a variety of foods, using elasticity and maximum-force information, is also described.
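The stated idea of choosing grasp force from elasticity and maximum-force information might look like the following sketch, under assumptions of ours: the controller targets a small deformation scaled by the food's estimated stiffness, capped by a per-food maximum so soft items are not crushed. The food names and parameters are invented:

```python
def grasp_force(stiffness, deform_target, f_max):
    """Force (N) to achieve a small target deformation, capped at f_max."""
    return min(stiffness * deform_target, f_max)

# Hypothetical per-food parameters: (stiffness N/m, allowed max force N)
FOODS = {
    "tofu": (50.0, 0.4),      # very soft and fragile
    "carrot": (2000.0, 5.0),  # stiff, tolerates a firm pinch
}

for name, (k, f_max) in FOODS.items():
    f = grasp_force(k, deform_target=0.002, f_max=f_max)
    print(name, f)
```

In a real controller the stiffness would be estimated on-line from the force-deformation response at first contact rather than looked up in a table.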
Humans can perform fast and skillful manipulations using various parts of the body by effectively exploiting the dynamics of target objects. Vision is the most important human sense for such manipulations. It is clearly desirable for a robot to perform various tasks using the same kind of end effector as a human hand and using visual information. Juggling is one such example of skillful, dynamic manipulation for which visual information is essential. There have been several previous studies on robotic juggling; however, none of them considered the case in which a human-like multifingered hand-arm is used. The purpose of this study is to achieve two-ball juggling using our robotic hand-arm, which has three general-purpose fingers, and stereo vision. The trajectory of the robotic hand-arm is generated based on the ball's estimated dropping position and time, and the robot catches the ball. The juggling motion is achieved by repeating this cycle.
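A minimal sketch of dropping-position prediction, under our own assumptions rather than the paper's estimator: given a ball's position and velocity from stereo vision, a drag-free ballistic model predicts when and where the ball descends to the catch height, which the hand-arm trajectory generator can then target. The toss parameters are made up:

```python
G = 9.81  # m/s^2, gravitational acceleration

def predict_drop(p0, v0, z_catch):
    """Return (t, x, y): time and horizontal position where the ball
    descends to height z_catch, given position p0 and velocity v0."""
    x0, y0, z0 = p0
    vx, vy, vz = v0
    # solve z0 + vz*t - 0.5*G*t^2 = z_catch for the later (descending) root
    a, b, c = -0.5 * G, vz, z0 - z_catch
    disc = b * b - 4 * a * c
    t = (-b - disc ** 0.5) / (2 * a)
    return t, x0 + vx * t, y0 + vy * t

# Ball tossed upward at 3 m/s from 1.0 m, to be caught again at 1.0 m,
# drifting horizontally at 0.1 m/s
t, x, y = predict_drop((0.0, 0.0, 1.0), (0.1, 0.0, 3.0), 1.0)
print(t, x, y)
```

In practice the position and velocity would be refit from several stereo observations per flight, so the prediction improves as the ball rises and falls.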