Most conventional visual feedback systems use CCD cameras and are therefore restricted to video rates, so they cannot adapt to a changing environment sufficiently quickly. To solve this problem we developed a 1 ms visual feedback system using a general-purpose, massively parallel vision system in which photo-detectors and processing elements are directly connected. High-speed visual feedback also requires fast image processing algorithms. In particular, because of the high frame rate, the difference between successive frames in our system is very small. Exploiting this feature, several image processing techniques can be realized with simpler algorithms. In this paper we propose a simple target tracking algorithm that exploits this property of high-speed vision, and we realize target tracking on the 1 ms visual feedback system.
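The core idea, that a roughly 1 kHz frame rate bounds inter-frame target motion to a pixel or two so tracking reduces to a tiny local search, can be sketched as follows. The SAD cost function and the search radius are illustrative assumptions, not the paper's exact algorithm:

```python
def sad(image, template, top, left):
    """Sum of absolute differences between the template and an image patch."""
    h, w = len(template), len(template[0])
    return sum(abs(image[top + i][left + j] - template[i][j])
               for i in range(h) for j in range(w))

def track(image, template, prev_top, prev_left, radius=2):
    """Search only a (2*radius+1)^2 neighborhood of the previous position,
    relying on the small inter-frame motion at a 1 ms frame interval."""
    h, w = len(template), len(template[0])
    best, best_pos = float("inf"), (prev_top, prev_left)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            top, left = prev_top + dy, prev_left + dx
            if 0 <= top <= len(image) - h and 0 <= left <= len(image[0]) - w:
                cost = sad(image, template, top, left)
                if cost < best:
                    best, best_pos = cost, (top, left)
    return best_pos
```

At video rates the search region would have to cover the target's full possible displacement; at 1 kHz a radius of one or two pixels suffices, which is what makes such a simple algorithm viable.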
The purpose of this study was to investigate human avoidance motion and its applicability to a mobile robot. The avoidance motion includes the characteristics of humans passing each other, which were obtained from the following experiments. First, many passing motions were recorded on videotape to analyze human avoidance motion in natural settings on the road. The analysis of the recordings revealed three types of human avoidance motion in passing. In the most frequently observed type, the subject returned to the original locus followed before starting the avoidance motion once the pass was finished. A passing experiment was then conducted in a laboratory using this motion type in order to construct an algorithm from it. The results showed that human avoidance motion in passing has two main characteristics: (1) the locus agrees well with a catenary, and (2) the walking speed is constant during passing. A basic human avoidance algorithm was constructed from these results. Finally, a mobile robot with an ultrasonic distance measuring system was developed. The system detects the relative distance and relative velocity of a person, and the robot realizes passing motion using the avoidance motion algorithm.
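The catenary-shaped avoidance locus traversed at constant speed can be sampled as in the sketch below. The parameterization (an inverted catenary with peak lateral deviation at the closest point of passing, clipped to the original straight locus) and the parameter names are assumptions for illustration, not the paper's exact formulation:

```python
import math

def catenary_offset(x, a, peak):
    """Lateral deviation from the original straight locus at longitudinal
    position x (x = 0 at the closest point of passing). 'a' sets the
    catenary's curvature and 'peak' the maximum deviation (both assumed)."""
    return max(0.0, peak - a * (math.cosh(x / a) - 1.0))

def avoidance_path(length, step, a, peak):
    """Sample the avoidance locus at a constant step, reflecting the
    observation that walking speed stays constant during passing."""
    n = int(length / step)
    return [(i * step - length / 2.0,
             catenary_offset(i * step - length / 2.0, a, peak))
            for i in range(n + 1)]
```

The clipping to zero encodes the observation that the subject returns to the original locus after the pass is complete.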
A practical and broadly applicable path-location method is proposed for robotic welding, deburring, and similar tasks that uses range data from the manipulated object's surfaces. The method locates the path after low-level processing to eliminate noisy parts of the range data. It is based on a newly developed algorithm for fitting a partial object shape to prototype model information; this algorithm treats the sensor data of the 3-D surface shape adjacent to the path in a manner suited to the sensor mechanism. The path can thus be located even if the object surface consists of several curved elements. The method is applicable to actual robotic environments where the range data are noisy, the objects exhibit individual differences, and the objects are displaced during manipulation. The derived shape and location of the surface adjacent to the path are useful for various kinds of manipulation, including welding and deburring.
This paper describes a stereo matching algorithm for occluding contours in a cluttered background. It is generally difficult to match occluding contours in stereo images because the edge points of occluding contours may not be extracted, or the contrasts of corresponding edge points are not always similar. The candidate region of an occluding contour is determined from the occluded region using a geometric constraint. In the first stage, edge points are extracted in the candidate region. For both the left small region (called a window) and the right window of an edge point, the possible regions in the other image are searched for a matching window. If both the left and right windows are well matched, the edge point is taken as an initial point on the occluding contour. In the second stage, the remaining part of the occluding contour is acquired. The disparities of the points in the remaining part are interpolated from those of the known occluding contour points. A candidate point is hypothesized to be on the occluding contour, and the degree of matching is then computed with the corresponding point in the other image, whose position is determined by the disparity. The occluding contour is acquired by connecting the candidate points that optimize a criterion function combining the degree of matching and the smoothness of the occluding contour. Experiments on artificial and real images verify the algorithm.
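The window-matching step can be illustrated with a simplified one-dimensional sketch: a window around an edge point in the left scanline is compared against candidate positions in the right scanline over a disparity range. This is a generic SSD matcher, not the paper's two-sided (left and right window) scheme, and all names are assumptions:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length intensity windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_window(left_row, right_row, x, half, d_min, d_max):
    """Find the disparity minimizing the SSD between the window centered at x
    in the left scanline and candidate windows in the right scanline."""
    win = left_row[x - half:x + half + 1]
    best_d, best_cost = None, float("inf")
    for d in range(d_min, d_max + 1):
        xr = x - d                      # corresponding column in the right image
        if xr - half < 0 or xr + half >= len(right_row):
            continue
        cost = ssd(win, right_row[xr - half:xr + half + 1])
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Near an occluding contour one of the two side windows covers the occluded background and matches poorly, which is why the paper requires both the left and the right window to match before accepting an initial contour point.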
The planar positioning of an object using a camera is an important technique for minute manufacturing, and detecting a feature in an image is essential to it, so research has been actively pursued in this area. Template matching is a useful method for detecting a feature in an image: it requires no complicated setup and is easier to use with commercially available equipment than other methods (e.g., centroid determination in binary image processing). However, template matching is poor at detecting the rotation of a feature, and its computational cost is large. To solve these problems, we propose a new method for detecting the translation and rotation of a feature using coarse optical flow. The coarse optical flow is computed from the intensity differences between an objective template and the observed template, together with the intensity gradient at each pixel of the template. The method is as simple as conventional template matching and, furthermore, provides sub-pixel accuracy. Processing the image from coarse to fine resolution then reduces the amount of calculation. We show experimental results of precise planar positioning.
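The gradient-based displacement estimate can be sketched for the translation-only case: solve, in the least-squares sense, the brightness-constancy equation Ix*dx + Iy*dy = -(observed - template) over the template pixels. This is a minimal sketch assuming central-difference gradients and omitting the rotation parameter and the coarse-to-fine pyramid described in the abstract:

```python
def estimate_shift(template, observed):
    """Least-squares sub-pixel translation (dx, dy) of 'observed' relative
    to 'template', from intensity differences and intensity gradients."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    h, w = len(template), len(template[0])
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            ix = (template[i][j + 1] - template[i][j - 1]) / 2.0  # x gradient
            iy = (template[i + 1][j] - template[i - 1][j]) / 2.0  # y gradient
            it = observed[i][j] - template[i][j]                  # difference
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 -= ix * it;  b2 -= iy * it
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:        # gradients too uniform to determine a shift
        return 0.0, 0.0
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det
```

Because the normal equations are solved in closed form, the cost per estimate is a single pass over the template, which is consistent with the claim that the method is as simple as conventional template matching while giving sub-pixel output.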
Impedance control is one of the most effective control methods for a manipulator in contact with its environment. The characteristics of force and motion control, however, are determined by the impedance parameters of the manipulator's end-effector, which must be designed according to the given task. In this paper, we propose a method that regulates the impedance parameters of the manipulator's end-effector while identifying the characteristics of the environment using neural networks through on-line learning. Four neural networks are used: three for position, velocity, and force control of the end-effector, and one for identification of the environment. First, the networks for position and velocity control are trained during free movements. Then, the networks for force control and environment identification are trained during contact movements. Computer simulations show that the method can regulate the stiffness, viscosity, and inertia parameters of the end-effector and identify the unknown properties of the environment through on-line learning.
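The target behavior being regulated is the standard impedance relation m*a + b*v + k*(x - x_d) = f_ext, where m, b, k are the inertia, viscosity, and stiffness parameters mentioned above. A minimal one-degree-of-freedom simulation step, with fixed parameters rather than the paper's neural-network adaptation, might look like this:

```python
def impedance_step(x, v, f_ext, x_d, m, b, k, dt):
    """One semi-implicit Euler step of the target impedance
    m*a + b*v + k*(x - x_d) = f_ext for a 1-DOF end-effector.
    (m, b, k) are fixed here; the paper tunes them on-line with
    neural networks according to the identified environment."""
    a = (f_ext - b * v - k * (x - x_d)) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new
```

Starting from a displaced position with no contact force, the end-effector settles to the desired position with dynamics entirely determined by (m, b, k), which is why adapting these parameters to the environment changes the contact behavior.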
In this paper we propose a new vision-guided control method that moves a local object along exactly the same trajectory as a remote object's movement, using their images. Our objective is as follows: given a transmitted or recorded motion image of a remote object's movement, we control another local object of the same shape so as to replay exactly the same movement by referring to the remote image. We realize this control by visual servoing based on the image of the local object taken by a second, local camera at an arbitrary position. Each camera has its own characteristics, so a direct comparison of the two images is meaningless. Instead, if both movements are identical, the remote and local cameras viewing the different but identically shaped objects can be regarded as forming a "pseudo stereo" system. In a stereo system, corresponding point pairs in the two images must satisfy the epipolar condition. Therefore, by controlling the local object's motion so that its image points satisfy this condition, we can make the local object track the trajectory of the remote object's movement. Experimental results suggest applications such as the virtual replay of operations performed at a remote location, or after a long interval, using their images.
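The epipolar condition driving the servoing can be written as p_local^T F p_remote = 0 for a fundamental matrix F of the pseudo-stereo pair; the controller's job is to drive this residual to zero for the tracked feature pairs. A sketch of the residual computation, with F assumed to be known in advance:

```python
def epipolar_residual(f, p_remote, p_local):
    """Algebraic epipolar residual p_local^T F p_remote for homogeneous
    image points (x, y, 1). F is the 3x3 fundamental matrix of the
    pseudo-stereo camera pair (assumed pre-computed here); the residual
    is zero exactly when the pair satisfies the epipolar condition."""
    fp = [sum(f[i][k] * p_remote[k] for k in range(3)) for i in range(3)]
    return sum(p_local[i] * fp[i] for i in range(3))
```

For a rectified pair with purely horizontal displacement, F reduces to a skew form for which the residual is just the difference of the two y-coordinates, making the geometric meaning of the condition easy to see.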
In this paper, we verify the effectiveness of the virtual joint model, a lumped-parameter model, for flexible manipulators. First, the virtual joint model approach is outlined, and a planar 2-DOF (degree-of-freedom) flexible manipulator is modeled using it. Second, the same manipulator is modeled using a distributed-parameter model. Experiments and simulations using both models are then performed, and the effectiveness of the virtual joint model is verified by comparing the dynamic behavior of each model with that of the real arm.
This paper presents an evaluation of joint configurations of a robotic finger based on kinematic analysis. The evaluation rests on the assumption that current control methods for robotic fingers require the contact state specified by the motion planner to be maintained during manipulation. Various finger joint configurations were evaluated for different contact motions. In the kinematic analysis, the surface of the manipulated object was represented by a B-spline surface, and the surface of the finger by cylinders and an ellipsoid. The solution of the inverse kinematics of manipulation, which gives the finger joint displacements for a given displacement of the object, is described briefly. Three types of contact motion are assumed: 1) pure rolling, 2) twist-rolling, and 3) slide-twist-rolling. The manipulation capability of the fingers is evaluated for each contact motion, and the finger joint configuration best suited for manipulation is determined by the size of the manipulation workspace. The evaluation shows that human-like fingers are suitable for maintaining twist-rolling and slide-twist-rolling, but not pure rolling. A finger with a roll joint at its fingertip link, unlike human fingers, proved better for pure rolling motion because it can accommodate sideways motion of the object. Several useful finger joint configurations suited to manipulating objects with the finger surface are proposed.
We have developed a balance compensation method for dynamic motions of a full-body humanoid standing on one leg. The method can compensate the tri-axial moments of any motion in real time using all the joints of the body. Constraint conditions for stable motion are calculated from a physical 3-D model of the robot, and the output motion is determined by solving an optimization problem so that the output is as close as possible to the input under those constraints. The method is highly general because it is independent of the robot's joint arrangement. This paper describes the algorithm and an experiment in which it was applied to a 16-DOF humanoid performing a kicking motion.
The purpose of this paper is to design nonlinear control systems for tendon-driven robotic mechanisms. Tendon-driven mechanisms are especially appropriate for force-controlled robotic arms and hands, because they allow the actuators to be located away from the joints, in the robot body, making the mechanisms small and lightweight. Dynamically, one of their most important features is joint stiffness adjustability, obtained from the redundancy of the actuators and the nonlinear elasticity of the tendons. A conventional control system for a tendon-driven robotic mechanism is a double-loop system consisting of an inner loop for tendon force control and an outer loop that generates the desired tendon forces for position and stiffness control. In such a system the inner loop must converge much faster than the outer one, so the inner-loop feedback gains become relatively large and the system tends to oscillate. In this paper, a nonlinear control system that can control the joint angles and the joint stiffness separately is proposed using an exact input-output linearization approach. A necessary and sufficient condition under which such a controller can be designed is given, and classes of tendon-driven robotic mechanisms that satisfy the condition are investigated. Finally, numerical results are given to show the method's efficiency.
A novel control method for the indirect positioning of soft deformable objects is proposed. In some operations, multiple points on a deformable object must be positioned at desired locations simultaneously; moreover, there are operations in which these positioned points cannot be manipulated directly, so the operation must be performed indirectly by controlling other points. We call these operations indirect simultaneous positioning. First, a simplified physical model of deformable textile fabrics is developed for positioning operations. Second, indirect positioning is formulated using the proposed model. Based on the linearized model of the deformable object, we propose a novel control method for indirect positioning with visual sensors. In this method, the positioned points are guided to the desired locations by controlling the positions of the operation points. Notably, the method works well even when the exact physical properties of the fabric are unknown. Experimental results show the validity of the proposed method.
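The linearized scheme can be sketched for two positioned points responding to two operation points through an assumed linear map delta_p = A * delta_q. Feeding back the visually measured position error through an estimate of A, with a gain below one, keeps the iteration convergent even when the estimate is inexact, mirroring the claim that exact fabric properties are not needed. All matrices and the gain here are illustrative assumptions:

```python
def solve2(a, b):
    """Solve the 2x2 linear system a * x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return ((a[1][1] * b[0] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det)

def indirect_step(a_est, p, p_des, gain=0.5):
    """One iteration of indirect positioning: map the measured error of the
    positioned points through the inverse of the estimated deformation
    model a_est to get a displacement of the operation points."""
    err = (p_des[0] - p[0], p_des[1] - p[1])
    dq = solve2(a_est, err)
    return (gain * dq[0], gain * dq[1])
```

Each iteration the true (unknown) deformation moves the positioned points by A_true * dq; as long as A_est is a reasonable estimate, the error contracts geometrically.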
The ultimate goal of this study is to build a CIM system for the painting process of large-scale steel products. To that end, the study addresses the problem of trajectory accuracy caused by singularities, especially on trajectories along which the tool orientation changes. We have developed two solutions. The first is a simple method for evaluating the accuracy of such trajectories, consisting of two steps: searching for the necessary condition of a singularity, and estimating the influence of the singularity under that condition. Trajectories found by this method to have no accuracy guarantee can then be modified. The second solution is trajectory control that avoids the wrist singularity by exploiting the change of tool orientation: the angle of the No. 4 axis is fixed, and the tool orientation is allowed to change toward the direction of the painting path. Experimental results show that this new trajectory achieves high-speed painting without loss of accuracy.
We propose a novel gait for quadruped walking robots that maximizes the static stability margin. Our former study showed that the normalized energy stability margin (the NE stability margin for short) is the most practical criterion for evaluating the stability of walking robots on rough terrain, and that the SNE contour, which connects all points on the ground possessing the same NE stability margin, helps estimate the most stable direction for the center of gravity to move. In the first part of this paper, we describe the characteristics of the new static gait, called the intermittent crawl gait, which enables quadruped walking robots to keep a larger NE stability margin than the crawl gait. Second, we derive the standard foot trajectory that maximizes both stability and walking velocity. Third, we show how to change the walking robot's direction while maintaining a sufficient NE stability margin. Finally, we demonstrate the validity of these theories through experiments with our actual model, TITAN VII.
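For context, the classical static stability margin that the NE margin refines is the minimum distance from the projection of the center of gravity to the edges of the support polygon. The sketch below computes that simpler margin for a convex, counter-clockwise support polygon; the NE stability margin additionally weighs the potential energy required to tip the body over each support edge, which this sketch does not model:

```python
import math

def stability_margin(cog, support):
    """Minimum signed distance from the COG's ground projection to the
    edges of a convex, counter-clockwise support polygon. Positive means
    the COG is inside the polygon (statically stable); larger is safer."""
    n = len(support)
    margin = float("inf")
    for i in range(n):
        (x1, y1), (x2, y2) = support[i], support[(i + 1) % n]
        length = math.hypot(x2 - x1, y2 - y1)
        # perpendicular distance of the COG from this edge, signed so that
        # the interior of a CCW polygon gives a positive value
        d = ((x2 - x1) * (cog[1] - y1) - (y2 - y1) * (cog[0] - x1)) / length
        margin = min(margin, d)
    return margin
```

The intermittent crawl gait described above is, in these terms, a foot-placement schedule chosen so that the (energy-weighted) version of this margin stays as large as possible throughout the stride.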