In this paper, we incorporate the tau-margin, a quantity proposed in ecological psychology that specifies the time remaining before collision without requiring information about speed or distance, into the design of a mobile robot. By using the tau-margin, we aim to reduce the state space and make the robot robust to changes in speed. We carried out various experiments to evaluate the implemented robot. As a result, we confirmed that the robot could achieve an obstacle avoidance task regardless of changes in its speed, and that relatively complicated tasks using several robots could be learned in a short time. Finally, we confirmed that the acquired policy is robust to the speed of the robots.
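The tau-margin can be computed purely from optical information, which is what makes it independent of measured speed and distance. A minimal sketch of this idea (the function name and scalar optical-size model are illustrative assumptions, not the paper's implementation):

```python
def tau_margin(optical_size: float, expansion_rate: float) -> float:
    """Estimate time-to-contact from optical expansion alone.

    tau = theta / (d theta / dt). For an object of width w at distance d,
    approached at speed v, theta ~ w/d and d theta/dt ~ w*v/d^2, so
    tau = d/v even though neither d nor v is measured directly.
    """
    if expansion_rate <= 0:
        return float("inf")  # not approaching: no impending collision
    return optical_size / expansion_rate
```

For example, an object subtending 0.2 rad whose image grows at 0.1 rad/s yields a tau-margin of 2 s, regardless of how fast the robot is actually moving.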
In a robotic cell production system, an assembly robot must robustly grasp various parts into a planned pose relative to its end effector, even when their initial poses contain some uncertainty. For this purpose, it is necessary to design a robust grasping strategy. In this paper, we propose a method to simulate multi-fingered hand grasping as a tool for designing such a strategy. Noting that pushing is the most primitive operation when a hand grasps a part on a flat floor, we analyze pushing operations under the quasi-static condition. Based on this analysis, it is possible to simulate multi-fingered hand grasping and find a permissible error region of the object pose from which a planned grasp is guaranteed. Some examples of the simulation and the resulting permissible initial error regions are shown. It is also shown that experimentally obtained permissible initial error regions are almost identical to the theoretical ones.
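Finding a permissible error region amounts to sampling initial pose errors and checking from which of them the simulated push-grasp converges to the planned pose. A hedged sketch of that outer loop, where `simulate` is a stand-in for the paper's quasi-static grasping simulation (the interface and grid sampling are assumptions):

```python
import numpy as np

def permissible_error_region(simulate, planned_pose, tol, pose_errors):
    """Keep the initial pose errors (dx, dy, dtheta) from which the
    simulated grasp ends within `tol` of the planned pose.
    `simulate` stands in for the quasi-static pushing simulation."""
    region = []
    for err in pose_errors:
        final = np.asarray(simulate(err), dtype=float)
        if np.linalg.norm(final - np.asarray(planned_pose)) < tol:
            region.append(err)
    return region
```

The boundary of the returned set approximates the permissible initial error region; in the paper this set is the theoretical prediction compared against experiments.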
It is important for robots that act in human-centered environments to build image processing in a bottom-up manner. This paper proposes a method to autonomously acquire image feature extraction suitable for motion generation while the robot moves in an unknown environment. The proposed method extracts low-level features without specifying image processing for the robot body and obstacles. The position of the body in the image is acquired by clustering SIFT features with motion information, and a state transition model is generated. Collision-relevant features are then detected by a learning model that adaptively adds state transition models: features that emerge when the robot cannot move are acquired as collision-relevant features. The proposed framework is evaluated with real images of a manipulator and an obstacle in an obstacle avoidance task.
This paper experimentally discusses how to control the velocity of a mobile robot so that it travels as fast as possible without letting the objects it carries slip, while also guiding the robot along an arbitrarily designated traveling course. These two control objectives appear to conflict with each other. We have confirmed that our approach to this intertwined problem can keep the traveling speed within the velocity range in which the carried objects remain stationary on the mobile robot. The proposed fastest-guidance control method has been examined intensively through sinusoidal-course and rectangular-course traveling experiments.
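The slip constraint can be illustrated with a Coulomb-friction bound: a carried object stays put as long as the acceleration demanded of it does not exceed mu * g. A minimal sketch of the resulting speed limit on a curved segment (the function and its parameters are illustrative assumptions, not the paper's controller):

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def max_speed_no_slip(mu: float, curvature: float) -> float:
    """Upper speed bound so a carried object does not slip sideways:
    centripetal acceleration v^2 * kappa must stay below mu * g
    (illustrative Coulomb-friction bound)."""
    if curvature <= 0.0:
        return float("inf")  # straight segment: no lateral limit
    return math.sqrt(mu * G / curvature)
```

On sharply curved parts of the course the bound tightens, which is why the fastest slip-free speed varies along a sinusoidal or rectangular course.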
This paper presents preliminary results and analysis on generating turning motion of a humanoid robot by slipping its feet on the ground. Humans exploit the fact that the feet slip on the ground, so exploiting slip is also necessary for humanoids to realize sophisticated human-like motions. To generate slipping motion, we need to predict the amount of slip. We propose the hypothesis that the turning motion results from minimizing the power dissipated by floor friction, and describe a model of rotation by friction force based on this hypothesis. The proposed model suggests that only the trajectory and the shape of the robot's feet determine the amount of rotation by slip, and that the friction coefficient between the floor and the robot's soles and the feet velocity do not affect the resultant angle. The case in which the robot applies the same force on both feet is discussed first, and the discussion is then extended to different force distributions. Verification is conducted through experiments with the humanoid robot HRP-2.
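The minimum-friction-power hypothesis also explains why the friction coefficient drops out: dissipated power is proportional to mu * N * sum(|v_i|) over the sole contact points, so mu * N only scales the objective and does not change where its minimum lies. A sketch of selecting the slip rotation rate by grid search over candidate rates (the discretized search and planar single-foot model are illustrative assumptions):

```python
import numpy as np

def slip_rotation_rate(contact_pts, foot_velocity, candidate_omegas):
    """Pick the rotation rate that minimizes friction dissipation power,
    following the minimum-friction-power hypothesis. Power is taken
    proportional to the summed slip speeds of the contact points;
    the constant factor mu * N cancels out of the argmin."""
    pts = np.asarray(contact_pts, dtype=float)
    v = np.asarray(foot_velocity, dtype=float)

    def power(omega):
        # planar velocity of each contact point: v + omega x r
        vel = v + omega * np.stack([-pts[:, 1], pts[:, 0]], axis=1)
        return np.linalg.norm(vel, axis=1).sum()

    return min(candidate_omegas, key=power)
```

Only the contact geometry and the foot trajectory enter `power`, consistent with the model's claim that mu and the feet velocity magnitude do not affect the resultant angle.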
This report presents a formation control method for personal vehicles that have two independent drive wheels and a base for a standing human driver. Drivers on the base control forward and backward vehicle movement by shifting their COG. The proposed method assists drivers in steering their vehicles so as to sustain formations. Our approach builds formations first by making pairs of vehicles running side by side (C-mode) and then by connecting those pairs sequentially in the traveling direction (F-mode). During C-mode, the drivers of a pair of vehicles can enjoy conversation. A pair consists of a master and a slave: the slave obtains both its relative position and relative direction from the master by applying the ICP algorithm to LRF data, since every vehicle is equipped with an LRF and can exchange LRF data taken in the shared environment. A slave can therefore track its master even in environments other than roads. In our formulation, a formation is generated by individual vehicles under distributed control, while the whole group is guided by a single leader under centralized control. Our experimental results show that ICP performance does not degrade even when ICP is applied in a congested formation where vehicles are separated by small inter-vehicle distances. In a demonstration, two slave vehicles with drivers successfully steered themselves in the direction of a leader vehicle. These are the first results on applying formation control to a group of personal vehicles.
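The relative position and direction recovered by the slave correspond to the rigid transform that ICP estimates between two LRF scans of the shared environment. A minimal point-to-point 2-D ICP sketch (brute-force correspondences and an SVD alignment step; a simplified stand-in for the paper's LRF pipeline, not its implementation):

```python
import numpy as np

def icp_2d(source, target, iterations=20):
    """Estimate the rigid transform (R, t) aligning `source` scan
    points onto `target` by iterating nearest-neighbour matching and
    closed-form (Kabsch/SVD) rigid alignment."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        # brute-force nearest-neighbour correspondences
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=2)
        matched = tgt[d2.argmin(axis=1)]
        # closed-form rigid alignment of the matched pairs
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The rotation part of the result gives the slave's relative direction and the translation part its relative position with respect to the master's scan frame.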
This paper proposes a reactive leg motion generation method that integrates physical constraints into its generation process. In order to react to given instructions instantaneously or to keep balance against external disturbances, feasible steps must be generated automatically in real time for safety. In many cases this feasibility has been ensured by using predefined steps or admissible stepping regions; however, such predefinitions are valid only in limited situations. Instead, the proposed method enforces physical constraints during generation itself. It consists of a swing-leg trajectory generator and a constraint solver. The former generates a swing-leg trajectory with rough self-collision avoidance between the legs; the latter computes joint velocities that respect joint angle/velocity limits and achieve precise self-collision avoidance. Moreover, in order to increase the likelihood that feasible patterns are generated, a stiffness-varying constraint and a landing position modification function are introduced. The proposed method is validated by experiments using the humanoid robot HRP-2.
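The joint angle/velocity limits handled by the constraint solver can be illustrated as per-step clamping: velocity limits bound the command directly, and angle limits bound how far the joint may move within one control step. A minimal sketch of that idea (a simplified stand-in for the paper's constraint solver, which also handles self-collision):

```python
import numpy as np

def limit_joint_velocity(q, dq_desired, q_min, q_max, dq_max, dt):
    """Clamp desired joint velocities so that both the velocity limits
    and the joint-angle limits reachable within one control step `dt`
    are respected."""
    dq = np.clip(dq_desired, -dq_max, dq_max)  # velocity limits
    lo = (q_min - q) / dt                      # angle limits over one step
    hi = (q_max - q) / dt
    return np.clip(dq, lo, hi)
```

In the full method this projection runs inside the real-time loop, so every generated swing-leg trajectory is feasible by construction rather than checked after the fact.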