A new path planning method for mobile robots is proposed. When a mobile robot navigates, the error in its position estimated by dead reckoning increases with the traveling distance. Since the environment-recognition capabilities of a robot are generally limited, this positioning error creates a risk of collision with objects in the environment. To reduce the error, the robot should detect natural or artificial landmarks in the environment with its own sensors and adjust its estimated position at suitable intervals. However, real environments do not offer many landmarks usable for position adjustment, so planning a path that passes good landmarks is necessary for navigation to a goal. On the other hand, spending a large sensing cost to detect such landmarks is wasteful, so the trade-off between sensing cost and landmark quality is important. In this paper, a cost function for a mobile robot path that minimizes the risk of collision in relation to the sensing cost, and an algorithm to find the optimal path and sensing points, are proposed. The authors implemented the proposed algorithm on a real robot system and conducted several experiments to verify it.
From fingertip grasps to power grasps, a multifingered robot hand takes various configurations. Most previous statics theories of grasping and manipulation were established under the assumption that the contact points are on the fingertips. In this article, under a generalized contact assumption that allows multiple contacts and face contacts anywhere on the hand surface, we establish a statics theory for grasping and manipulation that is applicable to all grasping configurations, using the theory of polyhedral convex sets. The relation between the external force acting on the object and the joint torques is clarified, forming the foundation of the statics theory. Algorithms for computing the admissible external force space and the critical external force space are proposed. Finally, the validity of our approach is illustrated by a numerical example and experiments.
Manipulators that can move by themselves are called “Mobile Manipulators”. In this paper we show the effects of inverse dynamics compensation for a power-wheeled-steering (PWS) Mobile Manipulator. The Newton-Euler method is used for the compensation calculation. A Mobile Manipulator differs from a floor-fixed manipulator in that it moves under nonholonomic constraints. These constraints integrate the traveling velocity error and turning angular velocity error of the Mobile Manipulator on the floor, so the hand position and orientation of the mounted manipulator include the accumulated errors. The errors do not disappear even after the dynamic vibration of the Mobile Manipulator has settled. Therefore, to reduce the integrated errors, the traveling errors of the mobile base should be reduced. We show the effects of the proposed compensation, confirmed by experiments with our experimental Mobile Manipulator system.
We present a mobile robot that can actively localize a sound source behind an object, approach it while avoiding obstacles on its path, and discriminate whether the source is a human or not. The robot has a unit of three microphones for ears and a CCD camera for an eye. The source direction is estimated from the time differences among the signal onsets received by the microphones, which are located at the vertices of an equilateral triangle. In a complex scene where the source is not visible from the robot, the robot estimates a tentative goal direction from the sound diffracted at the edge of the object occluding the source. Repeating source localization at different positions, the robot finds the source and discriminates it by block-matching an image captured by the onboard camera against a template image memorized in advance. We have implemented these methods and obtained satisfactory results demonstrating their effectiveness. We also describe a method for estimating the distance to the sound source from observations at different positions.
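The onset-time-difference direction estimation above can be sketched as a far-field grid search over azimuth; the microphone spacing, sound speed, and search resolution below are illustrative assumptions, not values from the paper.

```python
import math

C = 343.0     # speed of sound [m/s] (assumed)
SIDE = 0.3    # side length of the mic triangle [m] (hypothetical)

# Microphone positions at the vertices of an equilateral triangle
R = SIDE / math.sqrt(3.0)  # circumradius
MICS = [(R * math.cos(a), R * math.sin(a))
        for a in (math.pi / 2, math.pi / 2 + 2 * math.pi / 3, math.pi / 2 + 4 * math.pi / 3)]

def predicted_delays(azimuth):
    """Pairwise onset-time differences for a plane wave arriving from `azimuth` [rad]."""
    ux, uy = math.cos(azimuth), math.sin(azimuth)
    # Arrival time at mic p, relative to the array center, is -(p . u)/C
    t = [-(px * ux + py * uy) / C for px, py in MICS]
    return (t[1] - t[0], t[2] - t[0], t[2] - t[1])

def estimate_azimuth(observed, step_deg=0.5):
    """Grid-search the azimuth minimizing squared error against observed delays."""
    best, best_err = 0.0, float("inf")
    deg = 0.0
    while deg < 360.0:
        az = math.radians(deg)
        err = sum((p - o) ** 2 for p, o in zip(predicted_delays(az), observed))
        if err < best_err:
            best, best_err = az, err
        deg += step_deg
    return best

# Synthetic check: delays generated for a 60-degree source are recovered
obs = predicted_delays(math.radians(60.0))
print(round(math.degrees(estimate_azimuth(obs)), 1))  # -> 60.0
```

With three non-collinear microphones, two independent baselines over-determine the 2-D arrival direction, which is why a single azimuth minimizes the delay error.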
We have already reported a fundamental mechanism for an in-pipe inspection robot that can move in small-radius (50 [mm]) pipelines. That robot could go through elbows and T-joints, but only in horizontal pipelines. In this report, we describe a new in-pipe inspection robot, extended to go through both horizontal and vertical pipelines under computer control. The robot has two steering units, one at each end, and each steering unit has 2 D.O.F. to change the direction of the robot. To be controlled automatically, the robot must know the shape of the pipeline, its own status, and the distance traveled, so it carries various sensors such as tactile sensors, an encoder, and potentiometers. Using the information from these sensors, the robot can move automatically not only along straight pipelines but also through elbows and T-joints.
Previous work on vision-based mobile robots has lacked a model of the route that can be used simultaneously for (1) position identification, (2) steering direction determination, and (3) obstacle detection. In this paper, we propose a new visual representation of the route, the “View-Sequenced Route Representation (VSRR)”. A VSRR is a non-metrical model of the route, consisting of a sequence of front-view images along the route memorized during a recording run. In autonomous navigation, the three types of recognition described above are achieved in real time by matching the current view image against the memorized view sequence using a correlation technique. We also developed a simple procedure for acquiring VSRRs and a fast control procedure using them. The VSRR is especially useful for representing routes along corridors. Experimental results of autonomous navigation using a two-wheeled robot in a real corridor are presented.
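The correlation matching at the heart of VSRR localization can be sketched as follows, under the assumption that views are compared as flattened intensity vectors with zero-mean normalized cross-correlation (the abstract does not specify the exact correlation measure):

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length image vectors."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def localize(current, view_sequence):
    """Position identification: index of the memorized view best matching the current view."""
    scores = [ncc(current, v) for v in view_sequence]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy 1-D "images": the current view is a noisy copy of memorized view 2
seq = [[0, 1, 2, 3], [3, 2, 1, 0], [1, 3, 1, 3], [2, 2, 0, 0]]
cur = [1.1, 2.9, 0.9, 3.2]
print(localize(cur, seq))  # -> 2
```

Normalized correlation is a natural choice here because it is invariant to uniform brightness and contrast changes between the recording and autonomous runs.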
In a previous paper, the present authors proposed constructing a Cartesian nonholonomic robot using two balls and two actuators. To determine whether the robot is physically realizable and whether good control results can be expected, a prototype was constructed and control experiments were conducted. A new sensor that detects the position and orientation of the robot was also developed, using two computer mice. Both simulations and control experiments were performed, yielding very similar and acceptable control results and showing the correctness of the kinematic model, the efficacy of the control algorithms, and the physical realizability of the proposed nonholonomic robots.
An on-line motion planning algorithm is proposed that allows an autonomous mobile robot to approach its goal while avoiding multiple moving obstacles such as people and other mobile robots. The future trajectories of the obstacles depend on the future motion of the robot, since each obstacle also avoids other obstacles, including the robot. In other words, each obstacle has its own ‘motion principle’ which determines its motion from the location of its goal and the motion of the other obstacles. In this study, the robot infers the principles of all obstacles in its sensing area from their trajectories observed in real time, and can therefore predict their future trajectories, which are influenced by its own future motion. For each candidate motion principle, the robot predicts the future trajectories of the obstacles and evaluates the total time required to reach its goal and the degree of danger involved in avoiding the obstacles. The robot performs this prediction and evaluation for several principles and then selects the optimum principle as its current one, repeating the selection at regular intervals. The difference between the real and inferred principles of an obstacle appears as error in the predicted trajectory.
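A minimal sketch of the predict-evaluate-select loop described above, with a scalar obstacle-repulsion gain standing in for a ‘motion principle’ and a simple time-plus-danger cost; the dynamics, candidate set, and cost weighting are all illustrative assumptions, not the paper's formulation.

```python
import math

def simulate(weight, robot, goal, obstacles, steps=200, dt=0.1):
    """Forward-simulate a unit-speed robot under a repulsion gain `weight`.
    Returns (time to reach the goal, accumulated danger)."""
    rx, ry = robot
    danger = 0.0
    obs = [list(o) for o in obstacles]           # each: [x, y, vx, vy]
    for k in range(steps):
        gx, gy = goal[0] - rx, goal[1] - ry
        dist_goal = math.hypot(gx, gy)
        if dist_goal < 0.2:
            return k * dt, danger
        ax, ay = gx / dist_goal, gy / dist_goal  # unit attraction toward the goal
        for ox, oy, _, _ in obs:
            dx, dy = rx - ox, ry - oy
            d = math.hypot(dx, dy)
            danger += dt / max(d, 0.1)           # closer -> more dangerous
            ax += weight * dx / max(d ** 3, 1e-3)
            ay += weight * dy / max(d ** 3, 1e-3)
        n = math.hypot(ax, ay) or 1e-9
        rx += dt * ax / n                        # unit-speed step
        ry += dt * ay / n
        for o in obs:                            # predicted obstacle motion (straight lines here)
            o[0] += dt * o[2]
            o[1] += dt * o[3]
    return steps * dt, danger

def select_principle(robot, goal, obstacles, candidates=(0.0, 0.5, 1.0, 2.0)):
    """Evaluate each candidate principle by predicted time + danger; keep the best."""
    def cost(w):
        t, danger = simulate(w, robot, goal, obstacles)
        return t + danger
    return min(candidates, key=cost)

best = select_principle((0.0, 0.0), (5.0, 0.0), [(2.5, 0.1, 0.0, 0.0)])
print(best)
```

In the real algorithm the obstacle trajectories are themselves re-predicted under each candidate, since the obstacles react to the robot; the straight-line prediction above is the simplest stand-in.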
A video-rate stereo machine has been developed at Carnegie Mellon University with the capability of generating a dense range map, aligned with an intensity image, at video rate. The target performance of the CMU video-rate stereo machine is: 1) multi-image input from 6 cameras; 2) high throughput of 30 million point×disparity measurements per second; 3) a high frame rate of 30 [frame/sec]; 4) a dense depth map of 240×256; 5) a disparity search range of up to 60 [pixel]; 6) high precision of up to 8 [bits] (with interpolation); 7) an uncertainty estimate available for each pixel. The stereo machine is based on a multi-baseline stereo method. The basic theory requires some extensions to allow a parallel, low-cost, high-speed machine implementation. The three major ones are: 1) the use of small integers for image data representation; 2) the use of absolute values instead of squares in the SSD computation (SAD instead of SSD); and 3) a camera geometry compensation capability for precise camera alignment. The stereo machine successfully generates a 200×200 disparity map with a 23-step disparity search range at video rate.
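The SAD extension (item 2) can be illustrated with a toy scanline block matcher; the window size, search range, and data below are made up for illustration, and a two-camera case stands in for the machine's multi-baseline sum.

```python
def sad(a, b):
    """Sum of absolute differences: the integer-friendly replacement for SSD."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left, right, x, win=2, max_d=5):
    """Best disparity at column x of a scanline pair by SAD block matching."""
    ref = left[x - win:x + win + 1]
    best, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if x - d - win < 0:          # candidate window would fall off the image
            break
        cost = sad(ref, right[x - d - win:x - d + win + 1])
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# The right scanline shows the scene shifted by 3 pixels (disparity 3)
left  = [10, 20, 80, 90, 40, 30, 20, 70, 60, 50, 10, 20]
right = left[3:] + [0, 0, 0]
print(disparity(left, right, 8))  # -> 3
```

Replacing the squares of SSD with absolute differences keeps all arithmetic in small integers with no multipliers, which is what makes the parallel hardware implementation cheap.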
Active vision for constructing a three-dimensional model of the environment from images requires a robot to control its own motion. Since noise exists in images, the information they provide is not always complete. If the motion is too small, images can provide only 2-D information without any depth clues. As the motion increases, we obtain incomplete 3-D information, which we call 2.5-D information. After the motion becomes sufficiently large, we obtain complete 3-D information. We give a geometric interpretation to these transitions by viewing the problem as model fitting of a manifold in an abstract data space. We also derive a decision rule based on the geometric AIC. This rule can be used as a means of self-evaluation for testing whether the robot motion is sufficient for structure-from-motion analysis, without involving any empirically adjustable thresholds. To demonstrate this, we give examples using synthetic and real-image data.
A hand/eye system for handling flexible materials is under development. We consider that much more attention should be paid to cooperative sensing by touch and vision when flexible or limp objects such as fabrics are handled. This paper presents basic ideas about handling flexible materials and a strategy of sensor-based manipulation for unfolding a fabric piece, as a case of primitive fabric-handling movements. A result of robotic handling for unfolding a folded fabric is shown. A fabric easily changes its shape in 3-dimensional space; therefore, partial pattern matching of the fabric outline was used to estimate where the folded corner point was, and vision and tactile sensing were used for picking up the folded part of the fabric.
A prototype two-fingered micro hand is designed and applied to actual micro tasks. First, we discuss the concept of a two-fingered micro hand based on the human use of a pair of chopsticks. The two-fingered micro hand should be designed by considering the effective work space, that is, the common work space of the two fingers, as well as the simplicity of the cooperative control of each finger. Our prototype has two 6-DOF parallel mechanisms arranged in series in order to realize motion similar to chopstick usage. We applied the prototype to practical micro-manipulation tasks, and succeeded in picking up a glass ball 2 [μm] in size and placing it at an arbitrary position under a microscope by human teleoperation.
In this paper, a trajectory generator for a visual servoing system with two cameras is proposed that enables the system to accomplish obstacle avoidance tasks in unknown environments. Using the estimated epipolar constraint between the two cameras, the proposed scheme generates trajectories for the visual servoing system on the 2D image planes by a simple obstacle avoidance method, without reconstructing 3D geometry. The scheme is based on the idea that “as long as one of the projected trajectories does not intersect the projected obstacles, the trajectory in 3D space can avoid the obstacles.” An experimental result is shown to demonstrate the validity of combining the proposed trajectory generator with the visual servoing control scheme.
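The “one clear projection suffices” criterion can be sketched as a purely 2-D test in each image plane; approximating each projected obstacle as a disc around its projected center is an assumption for illustration, not the paper's representation.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab (all 2-D tuples, image coordinates)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def path_clear(path, obstacle, radius):
    """True if the projected polyline stays outside the projected obstacle disc."""
    return all(point_segment_dist(obstacle, path[i], path[i + 1]) > radius
               for i in range(len(path) - 1))

def trajectory_safe(views):
    """The criterion above: the 3-D trajectory avoids the obstacle if its
    projection is clear in at least ONE of the two image planes."""
    return any(path_clear(p, o, r) for p, o, r in views)

# Camera 1: the projected path crosses the obstacle; camera 2: it clearly avoids it
cam1 = ([(0, 0), (10, 0)], (5, 1), 2.0)   # closest distance 1 < radius 2 -> blocked
cam2 = ([(0, 0), (10, 0)], (5, 6), 2.0)   # closest distance 6 > radius 2 -> clear
print(trajectory_safe([cam1, cam2]))  # -> True
```

The test never leaves the image planes, which is exactly why no 3D reconstruction is needed: each camera's check is a cheap 2-D distance computation.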
This paper discusses a methodology that allows a robot to execute tasks autonomously with high confidence, and a method of systematizing it. Taking collision-free mobility of a robot in a dynamic environment as an example, a concept of feedback control for task execution is introduced. The key idea is to incorporate real-time recognition and motion planning functions into the task control loop. For real-time recognition of the environment, mark-based vision is systematized. In addition, an algorithm for global motion planning in an environment containing movable obstacles is implemented. By integrating the real-time recognition and motion planning, we form a control loop for the task. Experiments verify that a robot controlled by the proposed method moves without failure, except when a human handler intentionally disturbed its movement.
In this paper, we propose a nonlinear adaptive control scheme for a legged robot called Emu, which we are going to develop. We consider a control scheme that makes Emu stand upright. As a first step, we regard a 2-link inverted pendulum with unknown physical parameters as a simple model of Emu. We develop the control scheme as follows. First, we treat the parameters of the pendulum as known and design a nonlinear control scheme based on an exact linearization method. Next, we design a parameter estimation law based on the least squares method. Finally, we combine the nonlinear control scheme with the parameter estimation law to obtain the nonlinear adaptive control scheme. We show that this scheme is effective through simulations and an experiment.
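The least-squares estimation step can be sketched for a model that is linear in its parameters, y = φᵀθ, as robot dynamics typically are after suitable regrouping; the regressor data and the true parameter vector below are synthetic, not taken from the paper.

```python
def least_squares(phis, ys):
    """Solve the 2-parameter normal equations (sum phi phi^T) theta = sum phi y
    for measurements y = phi^T theta; phis is a list of 2-D regressor tuples."""
    a11 = sum(p[0] * p[0] for p in phis)
    a12 = sum(p[0] * p[1] for p in phis)
    a22 = sum(p[1] * p[1] for p in phis)
    b1 = sum(p[0] * y for p, y in zip(phis, ys))
    b2 = sum(p[1] * y for p, y in zip(phis, ys))
    det = a11 * a22 - a12 * a12           # nonzero iff the regressors are exciting enough
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Synthetic data from a hypothetical true parameter vector theta = (2.0, 0.5)
phis = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -1.0), (0.5, 2.0)]
ys = [2.0 * p0 + 0.5 * p1 for p0, p1 in phis]
print(least_squares(phis, ys))  # -> (2.0, 0.5)
```

In the adaptive scheme these estimates are then substituted into the exact-linearization controller in place of the unknown physical parameters; a recursive form of the same computation would update θ on-line as new measurements arrive.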