This paper presents a method for obtaining an environment model by analyzing a record of a robot's behaviors. The analysis uses a statistical approach: the robot builds the record while wandering in an environment using behaviors such as approaching a feature and leaving a feature, and then detects characteristic behaviors in the record. This method does not rely on a strategy that depends on environmental structure; therefore, it can be applied to complex environments in which the robot cannot move along walls.
The new motion control system described in this paper has an event-driven motion-module switching mechanism. This mechanism selects a previously prepared motion module for each event generated from sensor information and can modify the reference input in real time. This highly modular and extendable motion-compensating mechanism, especially effective in robot tasks with uncertainties and in robot motion that requires skill, should be useful for robot tasks such as machining and assembly. This paper describes the concept and implementation of the proposed system and presents some experimental results demonstrating its feasibility.
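The switching mechanism described above can be sketched as a dispatcher that maps sensor events to prepared motion modules, each of which reshapes the reference input every control cycle. This is a minimal illustrative sketch, not the authors' implementation; the module names and the numeric adjustment are hypothetical.

```python
# Minimal sketch of event-driven motion-module switching: each sensor event
# selects a prepared module, and the active module modifies the reference
# input in real time. All module names/values here are illustrative.

class MotionModule:
    """A previously prepared motion primitive that reshapes the reference."""
    def __init__(self, name, adjust):
        self.name = name
        self.adjust = adjust            # maps current reference -> new reference

    def step(self, reference):
        return self.adjust(reference)

class MotionController:
    def __init__(self, default):
        self.modules = {}               # event name -> MotionModule
        self.active = default

    def register(self, event, module):
        self.modules[event] = module

    def on_event(self, event):
        # Switch to the module prepared for this sensor event, if any.
        self.active = self.modules.get(event, self.active)

    def update(self, reference):
        # Called every control cycle: the active module reshapes the reference.
        return self.active.step(reference)

# Usage: a contact event switches from free motion to a small backing-off motion.
free = MotionModule("free", lambda r: r)
retreat = MotionModule("retreat", lambda r: r - 0.001)   # back off 1 mm
ctrl = MotionController(default=free)
ctrl.register("contact", retreat)

ref = 0.10
ref = ctrl.update(ref)        # free motion: reference unchanged
ctrl.on_event("contact")      # force sensor reports contact
ref = ctrl.update(ref)        # retreat module backs the reference off
```

The event-to-module table is the extendable part: adding a new compensating behavior only requires registering one more module.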
In order to develop an “Active Human Interface” that realizes “heart-to-heart” virtual communication between an intelligent machine and a human being, we have already reported the “Face Robot,” which has a human-like face and can display facial expressions similar to those of a human being by using flexible microactuators (FMAs). For real-time communication, we consider that the face robot needs to express facial expressions at almost the same speed and in the same manner as a human being; however, we found that this is impossible with FMAs. This paper therefore deals with the development of a new mini-actuator, “ACDIS,” for the real-time display of the face robot's facial expressions, and with its control method. We developed a double-action piston-type actuator. To measure the piston displacement in ACDIS, which is essential for controlling the facial expressions of the face robot, we equipped the inside of ACDIS with an LED and a phototransistor; by measuring the output voltage of the phototransistor, we can measure the displacement of the piston. The opening time of an electromagnetic valve is controlled for displacement control of ACDIS by comparing the present position and velocity with the target ones. We conducted a real-time facial expression display experiment and confirmed that human-like display of facial expressions on the face robot is successfully realized.
We show how an autonomous mobile robot can acquire optimal actions through interaction with the real world. We propose a new architecture using hierarchical fuzzy rules, a fuzzy evaluation system, and a learning automaton. Using the proposed method, the robot learns how to approach the goal while avoiding a moving obstacle, using steering and velocity control inputs simultaneously. We also show experimental results that confirm the feasibility of our method.
We have proposed a nonholonomic manipulator, which is a controllable n-joint manipulator with only two inputs, exploiting a special kind of velocity transmission called a nonholonomic gear. Since the nonholonomic manipulator was designed theoretically from the viewpoint of kinematic constraints and nonlinear control, mechanical implementation and prototyping are extremely important in practice. In this paper, the principles of the mechanical design of a nonholonomic manipulator are established, and experimental results obtained with a prototype nonholonomic manipulator are shown.
Kinematically redundant manipulators have a number of potential advantages over current manipulator designs. For this type of arm, several control methods based on the pseudoinverse and the nullspace of the Jacobian matrix have been suggested. A difficulty with these methods is real-time control, because deriving the pseudoinverse matrix by singular value decomposition requires a large amount of computation. In this paper, new formulation and control methods for resolving redundancy in manipulator arms by locally optimizing a kinematic and/or dynamic criterion are presented. The formulation is performed by means of a joint decomposition technique, resulting in a particularly efficient computational scheme that is feasible for real-time control. The control method based on this formulation has low computational cost compared with conventional control methods.
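The conventional baseline mentioned above, which the paper improves on, resolves redundancy with the Jacobian pseudoinverse plus a nullspace term that locally optimizes a secondary criterion. A minimal sketch of that baseline (not the paper's joint decomposition technique) follows; the Jacobian and criterion gradient values are illustrative.

```python
# Sketch of the conventional pseudoinverse/nullspace redundancy resolution:
#   dq = J+ dx + k (I - J+ J) grad_h
# i.e. exact task motion plus self-motion that locally optimizes a secondary
# criterion h(q). The SVD hidden inside pinv() is the costly step the paper
# targets. Numbers below are illustrative, not from the paper.
import numpy as np

def redundant_ik_step(J, dx, grad_h, k=0.1):
    J_pinv = np.linalg.pinv(J)                 # computed via SVD
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J         # projector onto nullspace of J
    return J_pinv @ dx + k * null_proj @ grad_h

# 2-D task, 3-joint planar arm: redundant by one degree of freedom.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.5]])
dx = np.array([0.01, 0.0])                     # desired end-effector displacement
grad_h = np.array([0.0, 0.0, 1.0])             # prefer motion of joint 3
dq = redundant_ik_step(J, dx, grad_h)
```

Because the nullspace term is annihilated by `J`, the task constraint `J @ dq == dx` still holds exactly while the self-motion pursues the secondary criterion.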
A total computer-aided design system for robot manipulators, “TOCARD,” has been developed. This system determines all design parameters of a robot mechanism: not only the fundamental mechanism (degrees of freedom, joint types, arm lengths, and offsets) but also the inner mechanism (motor allocations, transmission mechanisms, motors, reduction gears, arm cross-sectional dimensions, and machine elements). Analyzing the relationship between these parameters and the design evaluation functions made it clear that the fundamental mechanism is tightly connected with kinematic functions, and the inner mechanism with static/dynamic functions. Accordingly, the design procedure of “TOCARD” consists of three local optimization stages to make robot design efficient. 1) The fundamental mechanism is designed based on kinematic evaluation such as workspace, effective degrees of freedom, joint displacement, velocity, and acceleration, and workpiece velocity and acceleration. 2) The motor allocations and transmission mechanisms are determined, and the arm cross-sectional dimensions and machine elements are calculated roughly, based on simple evaluation of dynamics (total motor power, total weight, deflection, and weight capacity). 3) The arm cross-sectional dimensions and machine elements are modified based on precise evaluation of dynamics, including natural frequency. “TOCARD” also has a robot simulator and a database of machine elements.
We developed an air-conditioning equipment inspection robot with a vision sensor. Because recent high-rise buildings have many air diffusers for air conditioning, inspecting these diffusers is very hard work for human workers, and our robot is expected to substitute for them in this work. The robot has a CCD camera on a pan-tilt table and computer-controlled wheels, so it can find air diffusers with the CCD camera and move to a diffuser automatically. After the robot reaches the diffuser, it lifts an air volume sensor system up to the ceiling in order to measure the air temperature and the air volume of the diffuser. In this paper, we describe our proposed recognition method for diffusers, robot navigation based on ceiling landmarks, and the sensor system that measures the air volume flowing out of the diffusers. In experiments, we compare data measured by human workers with data measured by our robot, and the results show that the robot is capable of substituting for human workers.
We propose an accurate and efficient method for detecting potential collisions among multiple objects with arbitrary motion (translation and rotation) in three-dimensional (3-D) space. The algorithm can be used directly for both convex and concave objects. The method consists of two main stages. In the first, coarse stage, an approximate test is performed to identify interfering objects in the entire workspace using an octree representation of object shapes. In the second, fine stage, a polyhedral representation of object shapes is used to more accurately identify any object parts that might cause interference and collisions. For this purpose, specific pairs of faces belonging to the interfering objects found in the first stage are tested, thus performing detailed computation on a reduced amount of data. Experimental results, which demonstrate the efficiency of the proposed collision detection method with an adequate octree resolution, are given.
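The two-stage structure above can be sketched as a coarse bounding-volume test that prunes object pairs, followed by a fine stage that examines only face pairs of the survivors. This is a simplified illustration: the paper uses octrees for the coarse stage and exact polyhedral face tests for the fine stage, whereas here both stages use axis-aligned boxes for brevity.

```python
# Simplified two-stage collision-candidate search. Coarse stage: whole-object
# bounding boxes prune non-interfering pairs. Fine stage: face-pair tests run
# only on the reduced data. (The paper's stages use octrees and exact face
# geometry; axis-aligned boxes stand in for both here.)

def aabb(points):
    """Axis-aligned bounding box of a list of (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_overlap(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[i] <= bhi[i] and blo[i] <= ahi[i] for i in range(3))

def candidate_face_pairs(obj_a, obj_b):
    """Each object is a list of faces; each face is a list of 3-D vertices."""
    # Coarse stage: skip the pair entirely if whole-object boxes are disjoint.
    box_a = aabb([v for face in obj_a for v in face])
    box_b = aabb([v for face in obj_b for v in face])
    if not aabb_overlap(box_a, box_b):
        return []
    # Fine stage: test specific face pairs on the reduced amount of data.
    return [(i, j)
            for i, f in enumerate(obj_a) for j, g in enumerate(obj_b)
            if aabb_overlap(aabb(f), aabb(g))]
```

For far-apart objects the fine stage never runs, which is where the efficiency of the two-stage scheme comes from.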
We propose a simple visual servoing method based on a linear approximation of the inverse kinematics. When we use a hand-eye system whose structure is similar to that of a human being, we can approximate the transformation from the binocular visual space to the joint space of the manipulator as a linear function. This relationship makes it possible to produce the desired joint angles from the image data using a constant linear function instead of the variable nonlinear image Jacobian and robot Jacobian. The method is robust to calibration errors because it uses neither camera angles nor joint angles. We show some experimental results that demonstrate the effectiveness of this method.
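The core idea above, replacing the variable image and robot Jacobians with one constant linear map from image features to joint angles, can be sketched as a least-squares affine fit used at servo time. The function names and gain are illustrative assumptions, not the paper's implementation.

```python
# Sketch of visual servoing via a constant linear (affine) approximation of
# the inverse kinematics: fit q ≈ A f + b once from sample pairs of binocular
# image features f and joint angles q, then servo with that fixed map instead
# of recomputing nonlinear Jacobians. Names and gain are illustrative.
import numpy as np

def fit_linear_map(image_features, joint_angles):
    """Least-squares affine fit q ≈ A f + b, returned stacked as [A | b]."""
    F = np.hstack([image_features, np.ones((len(image_features), 1))])
    Ab, *_ = np.linalg.lstsq(F, joint_angles, rcond=None)
    return Ab.T                        # shape: (n_joints, n_features + 1)

def servo_step(Ab, f_current, f_desired, gain=0.5):
    """Joint correction toward the angles the map predicts for f_desired."""
    to_q = lambda f: Ab @ np.append(f, 1.0)
    return gain * (to_q(f_desired) - to_q(f_current))
```

Because the map is fitted directly from feature/angle pairs, neither camera angles nor joint-angle calibration enters the loop, which is the source of the robustness claimed in the abstract.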
Controllability of a manipulator with a passive joint, which has neither an actuator nor a holding brake, is investigated. The manipulator has three degrees of freedom in a horizontal plane, and the third joint is passive. It is shown that the dynamic constraint on the third link is a second-order nonholonomic constraint. Controllability is proved by constructing example input trajectories from arbitrary initial states to arbitrary desired states. The proof is intuitively understandable, and the construction of the input leads directly to trajectory planning. Simulations show that the manipulator can reach the desired position and velocity using the constructed input.