This paper addresses mutual collision avoidance among multiple mobile robots based on a layered strategy. In this strategy, static motion generation and several levels of dynamic motion generation are provided, and an appropriate level of dynamic motion generation is selected for mutual collision avoidance according to the complexity of the situation. We have implemented two typical methods in the layered strategy: rule-based local collision avoidance and negotiation-based global collision avoidance using communication. In each method, a robot detects potential collisions and applies as local a method as possible. Experimental results show that two actual mobile robots can achieve mutual collision avoidance based on the layered strategy.
Human tasks such as handling objects, carrying loads, assembling parts, and cutting parts are accomplished by combining leg locomotion and hand manipulation in various settings such as construction sites, agricultural fields, factories, and homes. The authors propose a new concept for the integration of locomotion and manipulation, “the Integrated Limb Mechanism.” The concept covers the technical fields of both control integration and mechanism integration necessary for a practical working robot. Here, a six-bar linkage mechanism with four degrees of freedom is introduced as an example of an integrated limb mechanism combining the advantages of a leg and an arm, with consideration for mechanical design. Computer simulation with a dynamic model is performed to verify the transformability of the mechanism from leg posture to arm posture. A kinematic analysis using the manipulability ellipsoid is carried out to evaluate the different motion rates of the leg posture and the arm posture. Several useful tasks suited to the mechanism are also considered.
This paper proposes a distributed intelligent controller for multi-legged walking robots. Since living creatures have nerve centers that control basic motor actions, walking robots should likewise have distributed low-level controllers that govern fundamental local motions. The proposed controller imitates the architecture of such nerve centers; its control structure consists of multiple controllers, where each controller corresponds to one leg and determines the motion of that leg from common information and its own local information. The central controller need not consider the details of each leg's motion; it only performs global path planning and adjusts the overall synchronization. The distributed intelligent control structure resembles an intelligence cluster and consists of an Intelligent Clock Generator (ICG), Distributed Intelligent Controllers (DICs), and a Common Data Field (CDF). The structure generates several gaits naturally, and it is easy to add further functions to the system. The system does not depend on the number or structure of the robot's legs. This paper describes the details of each controller function and explains that the whole system works well simply by joining these functions. Several numerical simulations verify the validity of the proposed control structure.
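As a minimal sketch of such a structure, the toy model below reuses the ICG/DIC/CDF names from the abstract; the four-legged layout, duty factor, and phase offsets are illustrative assumptions, not the paper's parameters:

```python
# Toy sketch of the distributed control structure: the ICG advances a
# common clock, and each DIC decides its own leg's swing/stance state
# from that clock plus a local phase offset held in the CDF.
# The crawl-gait offsets and duty factor below are assumptions.

class CommonDataField:
    """Shared blackboard: common clock value and per-leg phase offsets."""
    def __init__(self, offsets):
        self.clock = 0.0
        self.offsets = offsets        # one offset per leg, in [0, 1)

class IntelligentClockGenerator:
    def __init__(self, cdf, dt=0.05):
        self.cdf, self.dt = cdf, dt
    def tick(self):
        self.cdf.clock = (self.cdf.clock + self.dt) % 1.0

class DistributedIntelligentController:
    """One controller per leg; uses only common + local information."""
    def __init__(self, leg_id, cdf, duty=0.75):
        self.leg_id, self.cdf, self.duty = leg_id, cdf, duty
    def phase(self):
        return (self.cdf.clock + self.cdf.offsets[self.leg_id]) % 1.0
    def state(self):
        # stance during the first `duty` fraction of the gait cycle
        return "stance" if self.phase() < self.duty else "swing"

# Crawl-gait phase offsets for four legs (illustrative assignment).
cdf = CommonDataField(offsets=[0.0, 0.25, 0.5, 0.75])
icg = IntelligentClockGenerator(cdf)
legs = [DistributedIntelligentController(i, cdf) for i in range(4)]

for _ in range(20):
    icg.tick()
    states = [leg.state() for leg in legs]
    # With duty factor 0.75 and evenly spread offsets, at most one
    # leg is in swing at any moment, so the robot stays supported.
    assert states.count("swing") <= 1
```

No controller ever inspects another leg directly; the gait emerges from the shared clock and the local offsets, which is the sense in which the structure is distributed.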
This paper presents a new environmental model for real-time robot navigation, called the T-Net (Target Network). The T-Net, which consists of straight paths, represents rigid environmental structure and maintains the relations between local areas. In this model, the robot memorizes each path with a pair of feature points and can move precisely by tracking these feature points. The most important feature of T-Net acquisition is that the robot can approximate a local area uniquely with a straight path. We have applied this idea to our mobile robot, which has Multiple Vision Agents, and verified it through simulations and experiments in the real world.
This paper deals with the real-time recognition of the six basic facial expressions. In order to obtain the center positions of both pupils, we acquire the brightness, using a CCD camera, along a vertical line crossing the pupil and eyebrow as base data, and calculate the cross-correlation between the base data and the corresponding data in a given image. We extract the positions of the right and left pupils separately. Using a transputer, about 40 [ms] is needed to obtain the right and left pupil positions. As facial information for recognizing facial expressions, we use brightness data along 13 vertical lines, determined empirically and covering the areas of the eyes, eyebrows, and mouth. We then acquire this facial information for the 6 basic facial expressions of 30 subjects whose face images have already been obtained. Since we use a layer-type neural network for recognizing facial expressions, the facial information of some of the 30 subjects is used for training the neural network, and recognition tests are done using the facial information not used for training. We find that, when 15 subjects are used for network training, the correct recognition ratio reaches 85%, and the total time for detecting the right and left pupil positions plus recognizing the facial expression is about 55 [ms] per recognition cycle.
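The pupil-locating step can be sketched as a one-dimensional template match: the brightness profile along the vertical line serves as base data, and its position in a new image is the offset maximizing the cross-correlation. The synthetic profiles below are illustrative, not real camera data:

```python
# Sketch of pupil localization by cross-correlation of brightness
# profiles along a vertical line (synthetic data, for illustration).

def cross_correlation(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def locate(base, profile):
    """Slide `base` along `profile`; return the best-matching offset."""
    w = len(base)
    scores = [cross_correlation(base, profile[i:i + w])
              for i in range(len(profile) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# Base data: dark dips for the eyebrow and pupil on a bright background.
base = [200, 80, 200, 200, 40, 200]
# New image line: the same pattern shifted down 3 pixels, with noise.
profile = [210, 210, 210, 205, 85, 198, 202, 45, 197, 208, 208, 208]

assert locate(base, profile) == 3
```

Because only one line of pixels per eye is correlated, the search is cheap enough for the real-time budget the abstract reports.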
This paper addresses the problem of mutual collision avoidance among mobile robots working in an uncertain indoor environment. A local-information-based approach to collision-free motion planning is proposed for the development of an autonomous and flexible multi-robot system. In order to reduce the amount of communication, an action-observation function is introduced into each robot. A robot is able to infer the spatial structure of its blind-spot area by observing the actions of other robots. It then plans its motions based on the results of space understanding and action observation. The effectiveness of this approach to collision avoidance is verified with simulation experiments.
We aim at handling an unknown object through the cooperation of multiple mobile robots with various moving costs and load capacities. We focus on the problem of determining an appropriate grasping arrangement prior to handling operations. The proposed method avoids bad situations during handling, such as the object falling down or some robots being overloaded. The algorithm we propose in this paper includes two optimization problems: one is the “determination of the initial arrangement,” and the other is the “determination of the final arrangement.” The difference between these optimizations is whether the position of the object's center of mass is known. In the first optimization, the robots are placed where the possibility that they can lift the object without trouble is high. In the second optimization, a penalty index is defined to minimize the energy the system consumes and to maximize an index of stability. We have confirmed in computer simulations that the robots moved to the optimal arrangement.
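The second optimization can be sketched as a search over candidate grasp positions scored by a penalty that trades energy against stability. The one-dimensional object, candidate points, costs, weight, and stability measure below are all stand-ins, not the paper's formulation:

```python
# Toy sketch of the final-arrangement optimization: among candidate
# grasping points on a (one-dimensional) object, choose the pair that
# minimizes energy while keeping the known mass center well inside
# the grasp span. All numbers and the penalty form are illustrative.

from itertools import combinations

mass_center = 0.6                          # known after lifting [m]
candidates = [0.0, 0.3, 0.5, 0.8, 1.0]     # graspable points on the object
costs = {0.0: 1.0, 0.3: 2.0, 0.5: 1.5, 0.8: 1.0, 1.0: 2.5}  # travel costs

def penalty(points):
    span_lo, span_hi = min(points), max(points)
    if not span_lo <= mass_center <= span_hi:
        return float("inf")                # object would fall down
    # stability stand-in: margin of the mass center inside the span
    stability = min(mass_center - span_lo, span_hi - mass_center)
    energy = sum(costs[p] for p in points)
    return energy - 2.0 * stability        # weight 2.0 is an assumption

best = min(combinations(candidates, 2), key=penalty)
assert best == (0.0, 0.8)
```

Arrangements that cannot support the mass center are rejected outright via an infinite penalty, mirroring the abstract's "object falling down" failure mode.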
In this paper, we propose a new design concept for manipulators called the “Torque-Unit Manipulator (TUM).” The Torque-Unit Manipulator has an open-loop kinematic chain, as typical conventional manipulators do; however, each of its joints is a one-degree-of-freedom free joint. An actuator called a “torque unit” is mounted at an arbitrary position on a link. A merit of the TUM is easy maintenance. In this paper, the dynamic model and the position controllability of the general N-degree-of-freedom TUM are presented in detail. A control system is designed for the TUM, and its effectiveness is shown by a simple simulation.
This paper describes a feature-based stereo matching method that enables matching of occlusion boundaries using normalized light intensity. In many feature-based stereo methods, a property constraint is used to reduce the number of matching candidates. This constraint, however, cannot be used at occlusion boundaries when the background intensity of an occlusion boundary in one image differs from that in the other image, because the difference yields different property values. A foreground object, on the other hand, appears in both images. Thus, an occlusion boundary can be matched as the boundary of the foreground object if the intensity of the region on one side of the boundary in one image is similar to that in the other image. However, an object can be observed with different intensities in the two images owing to lighting effects and different camera characteristics. To solve this problem, light intensity is first normalized so that the intensities derived from matching edges, which are obtained from texture edges, become identical between the images. Next, edges at occlusion boundaries are matched using the normalized intensity.
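The normalization step can be sketched as fitting a photometric correction from already-matched texture edges. The linear gain-and-offset model and the sample intensities below are assumptions for illustration; the paper's actual normalization may differ:

```python
# Sketch of intensity normalization between a stereo pair: intensities
# sampled at matched texture edges give corresponding values from the
# two images, and a least-squares linear map (gain a, offset b)
# compensates for lighting and camera differences before occlusion
# boundaries are compared. Sample values are illustrative.

def fit_gain_offset(left, right):
    """Least-squares fit of right ≈ a*left + b from matched samples."""
    n = len(left)
    mx, my = sum(left) / n, sum(right) / n
    sxx = sum((x - mx) ** 2 for x in left)
    sxy = sum((x - mx) * (y - my) for x, y in zip(left, right))
    a = sxy / sxx
    return a, my - a * mx

# Intensities at matched texture edges (left image vs. right image);
# here the right camera simply reads 10 levels brighter.
left_samples = [50, 100, 150, 200]
right_samples = [60, 110, 160, 210]

a, b = fit_gain_offset(left_samples, right_samples)
normalize = lambda i: a * i + b

# A foreground-side intensity at an occlusion boundary now compares
# directly against the other image after normalization.
assert abs(normalize(120) - 130) < 1e-9
```

After normalization, the foreground side of an occlusion boundary should read similarly in both images, which is what makes the boundary matchable despite differing backgrounds.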
This paper proposes a method of combining two simple manipulators so that they form a redundant chain for performing a task requiring accessibility from all directions. For tracing a welding seam on a small object, a tool such as a torch or nozzle needs to access the object from any direction within a limited area. However, it is difficult for a single manipulator to provide such accessibility; therefore, the combination of two manipulators is important. The control variables of the rotational and prismatic joints, expressed with the link parameters of the chained manipulators, are derived. As a concrete case, the task of tracing a curve on a bulb at a given angle is treated to simulate the welding of a golf-club head roughly welded from three curved pieces. The calculation of the control variables is shown to be valid by the results of the simulation and the observation of an experimental demonstration.
This paper studies the learning control problem of assembly/disassembly task sequences. The proposed learning control system has two subsystems. One is a supervisory control system that realizes implicit specifications, such as equipment-related or task-independent specifications. The other is a learning automata system that realizes explicit specifications, such as task-dependent specifications. This approach allows a flexible design that accommodates frequently changing manufacturing requirements. Simulation results verifying the effectiveness of the proposed system are shown.
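The learning-automata subsystem can be sketched with a classical linear reward-inaction scheme choosing among candidate task sequences. The environment model (a fixed yes/no check per sequence), learning rate, and candidate set below are illustrative assumptions:

```python
# Sketch of a linear reward-inaction (L_R-I) automaton selecting a
# task sequence: actions that satisfy the explicit, task-dependent
# specifications are rewarded and reinforced; failures leave the
# action probabilities unchanged. Parameters are illustrative.

import random

def learn(satisfies, steps=3000, a=0.05, seed=1):
    """Return action probabilities after L_R-I learning."""
    rng = random.Random(seed)
    n = len(satisfies)
    p = [1.0 / n] * n                     # uniform initial probabilities
    for _ in range(steps):
        i = rng.choices(range(n), weights=p)[0]
        if satisfies[i]:                  # reward: reinforce action i
            p = [(1 - a) * q for q in p]
            p[i] += a
        # penalty: no change (the "inaction" half of the scheme)
    return p

# Only the third candidate sequence meets the specifications,
# so the automaton concentrates its probability there.
p = learn([False, False, True])
assert p[2] > 0.99
```

Because frequently changing requirements only alter which sequences are rewarded, the same automaton adapts without redesign, which matches the flexibility claim in the abstract.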
The ability to achieve desired contact states between robots and objects, and to recognize their achievement on-line, is necessary for reliable task execution in the presence of uncertainty. It also enables reliable off-line teaching and puts it into practical use. Three basic motions to achieve basic contact states are defined, and a control algorithm and a reliable recognition algorithm using force/moment sensor data are proposed for each motion. Combinations of these basic motions realize various complex contact states; that is, general recognition and achievement of contact states are realized by combining the basic motions. The ability to achieve desired contact states enables the off-line teaching system to teach only the desired contact states, so that small errors between the actual parts' positions and those in the environment model do not influence task execution. An off-line teaching system equipped with commands to execute the basic motions, and an on-line control system able to control force for executing the basic motions and to recognize their achievement, are developed. The off-line teaching of small tasks and their execution by an actual manipulator are presented.
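The recognition idea can be sketched as thresholding force/moment sensor readings to decide which contact state a basic motion has reached. The three-state classification and thresholds below are illustrative assumptions, not the paper's algorithm:

```python
# Toy sketch of contact-state recognition from a force/moment sensor:
# the normal force tells contact from no contact, and the moment
# magnitude distinguishes centered (face) contact from off-center
# (vertex/edge) contact. Thresholds are illustrative assumptions.

def classify(fz, m, f_th=1.0, m_th=0.05):
    """Classify contact from normal force fz [N] and moment m [Nm]."""
    if abs(fz) < f_th:
        return "no contact"            # force below the noise threshold
    if abs(m) < m_th:
        return "face contact"          # force applied near the center
    return "vertex/edge contact"       # force applied off-center

assert classify(0.2, 0.0) == "no contact"
assert classify(5.0, 0.01) == "face contact"
assert classify(5.0, 0.4) == "vertex/edge contact"
```

Because each basic motion targets one such state, on-line recognition reduces to checking that the sensor pattern for that state has appeared, which tolerates the small position errors mentioned above.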
In this paper, real-time control of dynamic biped locomotion using sensor information is investigated. We adopted an ultrasonic range sensor mounted on the robot to measure the distance from the robot to the ground surface. During walking control, the sensor data are converted in real time into a simple representation of the ground profile. We also developed a control architecture based on the Linear Inverted Pendulum Mode, which we had previously proposed for dynamic walking control. Combining the sensory system and the control system enabled our biped robot, Meltran II, to walk successfully over ground of unknown profile.
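The Linear Inverted Pendulum Mode underlying the controller can be sketched directly: with the center of mass constrained to a horizontal plane at height z_c, its horizontal motion obeys x'' = (g/z_c)x, and the "orbital energy" E = v²/2 − (g/2z_c)x² is conserved within a support phase. The parameters and step size below are illustrative, not Meltran II's values:

```python
# Sketch of Linear Inverted Pendulum Mode dynamics: closed-form
# propagation of the CoM state and conservation of orbital energy
# within one support phase. Parameters are illustrative.

import math

g, z_c = 9.8, 0.30           # gravity [m/s^2], CoM height [m] (assumed)
w2 = g / z_c                 # squared natural frequency of the LIPM

def step(x, v, dt):
    """Advance (x, v) by dt using the exact LIPM solution."""
    w = math.sqrt(w2)
    c, s = math.cosh(w * dt), math.sinh(w * dt)
    return c * x + (s / w) * v, w * s * x + c * v

def orbital_energy(x, v):
    return 0.5 * v * v - 0.5 * w2 * x * x

x, v = -0.05, 0.40           # start behind the support foot, moving forward
E0 = orbital_energy(x, v)
for _ in range(100):
    x, v = step(x, v, 0.005)

# Orbital energy is an invariant of the support phase.
assert abs(orbital_energy(x, v) - E0) < 1e-6
```

Because the dynamics are linear, the foot placement needed for the next step can be computed in closed form from the measured ground profile, which is what makes the mode attractive for real-time control.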
In this paper, the characteristics of two types of model following control are considered. One is the ordinary control that follows the states of the model (model following control), and the other is the control that follows the behaviour of the model (model-parameter following control). To illustrate the differences, two sliding mode controllers are introduced. As a result, the model-parameter following sliding mode controller, the controller based on a disturbance observer, and the learning controller are shown to belong to the latter class. By this classification, it is shown that the controller based on a disturbance observer rejects the disturbance approximately, and rejects it completely if and only if the disturbance is constant. Moreover, a new gain to improve the performance of the controller based on a disturbance observer is introduced. Here the performance is a measure of the time it takes to reject the disturbance (to restrict the system onto a hyperplane). To clarify the comparison between the new and the original gains, an LQ evaluation function is offered. Some numerical simulation results are presented.
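The "complete rejection only for constant disturbances" claim can be checked numerically on a toy system. The first-order plant, observer gain, and proportional law below are illustrative assumptions, not the paper's setup:

```python
# Numerical sketch: for the plant v' = u + d, a first-order
# disturbance observer (gain L) estimates d and the control input
# subtracts the estimate. A constant d is rejected completely; a
# time-varying d only approximately. All parameters are illustrative.

import math

def simulate(d_fn, steps=4000, dt=0.001, L=50.0, kp=5.0):
    """Return the worst tracking error over the final second."""
    v, z, v_ref, worst = 0.0, 0.0, 1.0, 0.0
    for k in range(steps):
        d_hat = z + L * v                  # disturbance estimate
        u = kp * (v_ref - v) - d_hat       # compensated control input
        v += (u + d_fn(k * dt)) * dt       # plant: v' = u + d
        z += -L * (u + d_hat) * dt         # observer state update
        if k >= steps - 1000:
            worst = max(worst, abs(v_ref - v))
    return worst

# A constant disturbance is rejected completely (error vanishes) ...
assert simulate(lambda t: 2.0) < 1e-3
# ... while a time-varying one is rejected only approximately.
assert simulate(lambda t: 2.0 * math.sin(20 * t)) > 1e-3
```

The observer state obeys d̂' = L(d − d̂), so d̂ converges exactly to a constant d but lags a varying one, which is the distinction the classification above formalizes.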
In this paper, we consider a Free-Flying space Robot (FFR) on which no external forces or torques are exerted. We propose to control the orientation of the base of the FFR using only the motion of the robot joints. In this context, we define a new controllability concept called “densely controllable.” A sufficient condition is provided to identify whether a robot system is densely controllable. Based on these results, we propose an attitude maneuver algorithm for the FFR. Computer simulations demonstrate the applicability of the algorithm.
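The reason joint motion alone can reorient the base can be sketched with a toy nonholonomic model: under zero angular momentum the base rate is a configuration-dependent multiple of the joint rates, so a closed loop in joint space can leave a net base rotation. The coupling function r(q₂) below is an illustrative stand-in, not the paper's dynamics:

```python
# Toy sketch of base reorientation by cyclic joint motion: with zero
# angular momentum, d(theta) = -r(q2) dq1 for some configuration-
# dependent coupling r. Because r varies along the loop, a closed
# joint-space loop yields a nonzero net base rotation (a nonholonomic
# effect). The form of r below is an assumption for illustration.

import math

def base_rotation(loop, samples=20000):
    """Integrate d(theta) = -r(q2) dq1 around a joint-space loop."""
    theta = 0.0
    q1p, q2p = loop(0.0)
    for k in range(1, samples + 1):
        q1, q2 = loop(k / samples)
        r = 0.5 + 0.2 * math.sin(q2p)   # assumed inertia coupling
        theta -= r * (q1 - q1p)
        q1p, q2p = q1, q2
    return theta

# A circular loop in joint space returns the joints to their start
# configuration yet leaves the base rotated:
circle = lambda s: (math.cos(2 * math.pi * s), math.sin(2 * math.pi * s))
assert abs(base_rotation(circle)) > 0.1

# Moving only q1 back and forth (q2 fixed) produces no net rotation:
line = lambda s: (math.cos(2 * math.pi * s), 0.0)
assert abs(base_rotation(line)) < 1e-6
```

Arbitrary base orientations are reached by composing such loops, which is the intuition behind controlling attitude through joint motion alone.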