This paper describes a method of tracking a moving object in 3-D space against a cluttered background by integrating optical flow and edges. The flow field is segmented into regions so that the motion of each region can be approximated by a plane motion. An object region is determined as a set of plane-motion regions with similar 3-D motions (translation and rotation). The object is tracked by merging and separating the plane-motion regions at each frame. To estimate the plane motion correctly, the contour of the plane-motion region must be obtained precisely. Because the motion boundary of the region is not precise, edges near the motion boundary are used to represent that part of the contour. To represent the whole contour even against a cluttered background, the edges are accumulated over frames. If the motion boundary at the overlapping part of the predicted regions cannot be extracted, the contour is determined by the edges on the predicted contour. Moving objects are tracked even when their appearances change drastically. Experimental results for synthetic and real images show the effectiveness of the method.
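The segmentation step above assumes that the flow inside each region is well approximated by a plane (affine) motion model. The abstract does not give the fitting procedure, so the following is only a minimal least-squares sketch of such a model, with hypothetical function and variable names:

```python
# Sketch: fit an affine ("plane") motion model u = a1 + a2*x + a3*y,
# v = a4 + a5*x + a6*y to sampled flow vectors by least squares.
# Names and structure are illustrative assumptions, not the paper's code.

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_affine_flow(points, flows):
    """Return ([a1,a2,a3], [a4,a5,a6]) fitted over (x, y) -> (u, v) samples."""
    def fit(component):
        # Normal equations G^T G a = G^T f with rows g_i = [1, x_i, y_i].
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for (x, y), f in zip(points, flows):
            g = [1.0, x, y]
            for i in range(3):
                b[i] += g[i] * f[component]
                for j in range(3):
                    A[i][j] += g[i] * g[j]
        return solve3(A, b)
    return fit(0), fit(1)
```

Regions whose fitted parameters (and hence implied 3-D motions) are similar would then be candidates for merging into one object region.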
In this paper, we propose a planning algorithm for sweeping tasks in environments with unforeseen obstacles. Sweeping is a motion in which a robot covers a two-dimensional area. When a mobile robot finds an unforeseen obstacle, it must not only avoid collision but also sweep the whole area, including the region where the obstacle is located. In the proposed algorithm, the robot determines, based on its limited sensing ability, a local area in which the sweeping path should be rearranged. The algorithm avoids heavy calculation cost by limiting the planning area. The robot minimizes the path length when connecting the paths inside and outside the local area. An autonomous mobile robot can thereby reduce its running time. We verify the efficiency of the algorithm and discuss the trade-off between running time and sensing/calculation time through simulations and experiments.
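The coverage requirement above, visiting every free cell while skipping obstacle cells, can be illustrated with a simple boustrophedon (lawnmower) ordering over a grid. This is only a schematic sketch of the sweeping motion, not the paper's local-replanning algorithm, and the grid representation and names are assumptions:

```python
# Sketch: boustrophedon coverage order over a grid with known obstacle cells.
# Rows are swept alternately left-to-right and right-to-left so that
# consecutive free cells stay close together.

def sweep_path(width, height, obstacles):
    """Return the visiting order of free cells; `obstacles` is a set of (x, y)."""
    path = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            if (x, y) not in obstacles:
                path.append((x, y))
    return path
```

In the spirit of the abstract, a newly sensed obstacle would only trigger re-ordering of the cells inside a local window around the robot, leaving the rest of the precomputed order untouched.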
We propose an experimental system for a space robot in the microgravity environment at the Japan Microgravity Center (JAMIC). At JAMIC, the free fall of a drop capsule carrying the experimental setup generates a microgravity environment, as accurate as 10^-5 G, for about 10 seconds. We developed an experimental setup for visual feedback control of a space robot capturing a target. To measure the position of the target and the motion of the robot, two CCD cameras are used and their images are processed by the tracking vision (TRV) system. For precise measurement, we make use not only of the dual images from the cameras but also of a priori knowledge of the geometry of marks on the robot. After introducing the experimental system, we establish two computational methods to identify the 3D position and orientation of the robot base. Experiments and their results follow to evaluate the proposed methods.
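One standard way to obtain a 3D position from two camera views, consistent with the dual-image measurement described above, is triangulation: find the point closest to both viewing rays. The sketch below uses the midpoint of the common perpendicular between the rays; it is an illustrative reconstruction, not the paper's specific method, and all names are assumed:

```python
# Sketch: midpoint triangulation of a 3D point from two viewing rays.
# Each ray is given by a camera center c and a direction d (not necessarily
# unit length). We minimize |c1 + s*d1 - (c2 + t*d2)|^2 over s, t.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(c1, d1, c2, d2):
    """Return the midpoint of the closest points on the two rays."""
    w = [c2[i] - c1[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    det = b * b - a * c          # nonzero unless the rays are parallel
    s = (b * dot(w, d2) - c * dot(w, d1)) / det
    t = (a * dot(w, d2) - b * dot(w, d1)) / det
    p1 = [c1[i] + s * d1[i] for i in range(3)]
    p2 = [c2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]
```

With several marks of known geometry triangulated this way, the orientation of the robot base could then be fitted to the recovered mark positions.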
An automatic garbage collection site that removes garbage from an AMR (autonomous mobile robot) and puts it into a garbage depot in good order has been developed. The system is composed of five subsystems: (1) a servo-controlled orthogonal XYZ crane; (2) a garbage-bag gripper with two rollers at the tip of its finger; (3) a sensor system that detects the distribution of garbage bags on the floor and on the AMR, in which the irregularities of a garbage bag are captured as a distance image by a scanning ultrasonic range sensor; (4) an infrared communication device for communicating with the AMR; and (5) a computer system that controls the whole system. Moreover, an algorithm has been developed that determines, from the distance image, suitable positions for picking up and putting down garbage bags. Experiments using all of the above subsystems have shown the practicality of the system.
We discuss mobile robot navigation under linguistic instructions describing a route. The instructions given to the robot describe the route using landmarks such as intersections, for example, “turn right at the 2nd intersection, and ….” However, it is difficult to give the robot landmark models with which it can recognize each landmark as reliably as a human does. We have already proposed a navigation algorithm that verifies the result of landmark recognition by considering the positional relationships among landmarks. In this paper, we describe motion planning that effectively acquires additional information for navigating a mobile robot equipped with a range sensor, based on that algorithm, aiming at real robot navigation using linguistic instructions. The environment is described by a graph based on the skeleton of its free space. The motion of the robot is first roughly planned with the graph-based global map and then precisely planned with geometric local sensory information. In experiments with a real robot, the robot reached the goal specified by the instructions despite errors in sensing and motion.
In this paper, we propose BERO (Biomimetic Tongue Robot), a device that aims to reproduce the motion of the tongue during speech production by mimicking the human tongue. One feature of BERO is its softness: it is composed of soft actuators containing no hard parts. This feature corresponds to the fact that the tongue has no skeleton, and it is an advantage in realizing the soft motion of the tongue. As a prototype of BERO, we built a trial two-dimensional model in the median sagittal plane and used it to reproduce tongue shapes. The results suggest that BERO is well suited to reproducing tongue shapes.
This paper proposes a general infrastructure for robot navigation in outdoor environments. The infrastructure, called a Distributed Vision System, consists of vision agents connected by a computer network; it monitors the environment, maintains environment models, and provides various information to robots by organizing communication among the vision agents. As the first step of our research, we have developed a prototype of the distributed vision system for navigating mobile robots. The experimental results show that the system is able to navigate the robots in a complex environment.
A new method of observing the state of tasks is presented. When a robot rubs an object with an end effector, a frictional force is generated that contains much information about the state of the task. The force measured with a wrist force sensor is analyzed, and the friction is classified as viscous or Coulomb friction according to the relation between the friction force and the velocity. The state of contact can then be determined from the type of friction: viscous friction suggests a surface contact, while Coulomb friction suggests a point contact. This function is useful for the reliable performance of assembly tasks. In addition, by gauging the magnitude of the frictional force, we can detect a shortage of paint held in a brush during a painting task. This function can also be applied to other tasks such as polishing a floor with wax or wiping off an oil stain. In the experiments, surface contact between two planes is successfully achieved and a board is uniformly painted, demonstrating the effectiveness of the proposed method.
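The classification step above, viscous versus Coulomb friction from the force-velocity relation, can be sketched as a linear fit |F| ≈ c|v| + F0: a dominant velocity-dependent term suggests viscous friction (surface contact), while a dominant constant term suggests Coulomb friction (point contact). The decision rule and names below are assumptions for illustration, not the paper's exact estimator:

```python
# Sketch: classify friction type from paired velocity/force measurements
# by fitting |F| = c*|v| + F0 with ordinary least squares and comparing
# the velocity-dependent contribution against the constant offset.

def classify_friction(vels, forces):
    """Return "viscous" or "coulomb" for paired samples of velocity and force."""
    n = len(vels)
    xs = [abs(v) for v in vels]
    ys = [abs(f) for f in forces]
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    c = sxy / sxx                 # viscous coefficient (slope)
    f0 = my - c * mx              # Coulomb-like constant term (intercept)
    v_span = max(xs) - min(xs)
    # Viscous if the slope term dominates over the observed velocity range.
    return "viscous" if abs(c) * v_span > abs(f0) else "coulomb"
```

Under this rule, a force that grows with rubbing speed maps to "viscous" (surface contact), while a speed-independent force maps to "coulomb" (point contact).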
An intelligent robot that distinguishes several different environments and completes the task demanded in each environment is described. Through interaction with the surrounding environment, the robot successfully recognizes not only the current environment but also its own position, even though it has no map representing the world in which it acts. To accomplish this, the robot finds and uses the appropriate affordance latent in the surrounding environment. The brain of the robot is defined by a Finite State Machine (FSM) and is evolved using a Genetic Algorithm (GA). To create the target FSM effectively, new genetic operations based on Lamarckism are also proposed. Computational experiments show that the proposed genetic operations are superior to the conventional ones in both the quality of the FSM and the speed of evolution. Furthermore, by observing the behavior of the intelligent robot, the affordance in each environment is identified concretely.
This paper describes the design of control systems for multiple specifications and its application to an unstable wheeled servo system. It is difficult to theoretically obtain a controller that simultaneously satisfies multiple specifications, such as step-response overshoot and settling time, so the numerical convex optimization method given by Boyd can be employed to obtain such a controller. Since the coprime-factorization form of the transfer function is used in the evaluation function for optimization, the pole assignment of the closed-loop system can be considered explicitly, and it is easy to design a practicable controller for motion control. In this paper, we apply this convex optimization method to controller design for an unstable wheeled servo system with multiple specifications on the suppression of all outputs and on the wheel transit response performance. Experimental results demonstrate the effectiveness of this convex optimization method.
This paper proposes a method of calculating the inverse dynamics of a crane-type manipulator with multiple degrees of freedom. First, a new crane-type parallel wire mechanism with three trolleys and three wires is proposed for handling heavy objects. Unlike a typical crane mechanism, this handling mechanism enables not only three-dimensional positioning but also orienting of objects. A dynamical model using the wire force vector is presented. With this model, the inverse dynamics problem is solved as the solution of a linear equation in the wire force vector. The trajectory of the end effector can thereby be transformed into trajectories of wire length and trolley position. The method is used to control the position and orientation of the end effector along its trajectory. A two-dimensional experimental crane-type manipulator has also been developed to verify the effectiveness of the control method.
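Solving the inverse dynamics as a linear equation in the wire force vector can be illustrated for a planar three-wire case: stack each wire's unit direction and its moment about the end-effector frame into a 3x3 matrix and solve it against the required wrench. The geometry and all names below are assumed for illustration; the paper's actual model is three-dimensional:

```python
# Sketch (planar case): solve A * t = wrench for the three wire tensions t,
# where column j of A holds wire j's unit direction (fx, fy) and its moment
# about the end-effector reference point.
import math

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def wire_tensions(attach, anchors, ref, wrench):
    """Tensions for wires from attach points (on the end effector) to trolley
    anchors, balancing the required planar wrench (fx, fy, torque) about ref."""
    A = [[0.0] * 3 for _ in range(3)]
    for j, (p, a) in enumerate(zip(attach, anchors)):
        dx, dy = a[0] - p[0], a[1] - p[1]
        L = math.hypot(dx, dy)
        ux, uy = dx / L, dy / L               # unit direction along the wire
        rx, ry = p[0] - ref[0], p[1] - ref[1] # lever arm of the attach point
        A[0][j] = ux
        A[1][j] = uy
        A[2][j] = rx * uy - ry * ux           # planar moment of a unit tension
    return solve3(A, list(wrench))
```

For a full inverse-dynamics computation, the wrench would come from the desired end-effector acceleration plus gravity; the resulting tensions then map back to trolley positions and wire lengths along the trajectory.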
The proposed bin-picking method removes the inconsistency in the information transferred from object detection to manipulator path planning by using environmental information. The inconsistency is that object detection provides only the position and orientation of a few detected objects, while path planning needs environmental information covering all objects in the work field. To solve this problem, we propose including possible objects, which represent undetected objects, in the environmental information. A Hough transform, which records model-to-image matches in a parameter space, is used to estimate the pose. Both detected and possible objects are extracted from the parameter space. To reduce the number of detection errors and the object-detection time, a pair of object features that reduces the number of invalid votes in the parameter space is used for the Hough transform. Simulated path planning and an experiment on casting objects using a range finder and a manipulator showed that environmental information including the possible objects improves the ability to plan a safe path for the manipulator. Simulated object detection and an experiment showed that using the pair of features speeds up detection and reduces the number of invalid votes.
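The voting scheme described above, in which model-to-image matches accumulate votes in a parameter space, strong peaks become detected objects, and weaker peaks are kept as possible objects, can be sketched for a translation-only parameter space as follows (the thresholds, feature model, and names are assumptions for illustration):

```python
# Sketch: Hough-style voting for object centers. Every image feature votes
# for the object centers consistent with each model feature offset; vote
# counts separate "detected" objects from weaker "possible" objects.
from collections import Counter

def hough_detect(features, model_offsets, detect_thresh, possible_thresh):
    """Return (detected, possible) candidate centers from integer features."""
    votes = Counter()
    for fx, fy in features:
        for ox, oy in model_offsets:
            # A feature at (fx, fy) matching model offset (ox, oy) implies
            # an object centered at (fx - ox, fy - oy).
            votes[(fx - ox, fy - oy)] += 1
    detected = [p for p, v in votes.items() if v >= detect_thresh]
    possible = [p for p, v in votes.items()
                if possible_thresh <= v < detect_thresh]
    return detected, possible
```

The "possible" peaks play the role the abstract assigns them: partially occluded or poorly matched objects still enter the environmental model, so the planner can route the manipulator around them instead of treating their cells as free space.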