In this paper, we discuss a method that reconstructs a global map of an indoor environment from omni-directional views obtained at several robot locations. The omni-directional view used by our method is a form of environment representation that provides both precise angular information and range estimates from the observation point. The global map is reconstructed as follows: (1) obtaining Robot-Centered Representations (omni-directional views) at several robot locations to find free areas in which the robot can move; (2) obtaining local maps, represented in Path-Centered coordinates, from omni-directional views taken at two neighboring robot locations, where the relation between the two locations is determined by active observation using feedback from the omni-directional views while the robot moves; (3) building a global map (Environment-Centered Representation) by fusing these Path-Centered Representations.
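Step (3), fusing local maps expressed in their own robot- or path-centered frames into one environment-centered map, can be sketched as a rigid-body transform into a common frame followed by duplicate removal. This is a minimal illustration, not the paper's algorithm; the point-landmark map representation, the poses, and the 0.1 m duplicate threshold are all assumptions.

```python
import math

def to_global(pose, points):
    """Transform points from a robot-centered frame into the global frame.
    pose = (x, y, theta): robot position and heading in the global frame."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in points]

def fuse(local_maps):
    """Fuse several (pose, points) local maps into one global point set,
    discarding near-duplicate landmarks seen from overlapping views."""
    global_map = []
    for pose, pts in local_maps:
        for p in to_global(pose, pts):
            if all(math.hypot(p[0] - q[0], p[1] - q[1]) > 0.1 for q in global_map):
                global_map.append(p)
    return global_map
```

The same landmark observed from two different robot poses maps to (numerically) the same global point, so the fused map contains it only once.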
The characteristics and applications of “Koryu (KR)”, a mobile robot with a new style of articulated body, are identified, and the construction of the newly developed “Koryu-II (KR-II)” is discussed. KR-II consists of seven linked segments; its total length is 3.3 m and its total weight is 320 kg. KR-II introduces new mechanisms and control systems: 1) easily detachable unified segments, 2) a self-contained driving system with wireless control, 3) traction wheels attached on one side of each segment, 4) a long-stroke vertical prismatic joint to cope with steep inclinations, 5) a robot hand driven by coordinated control of the body articulation, 6) a force sensor based on photodetection technology, and 7) an analog bus system that simplifies the electrical installation. Finally, this paper describes fundamental experimental results for KR-II and demonstrates its ability to move and operate in an unstructured environment.
Pushing and tumbling operations (for example, tipping a heavy object up on end by pushing it on one side) have good potential for improving the efficiency of robotic manipulation and for expanding the domains where robots may be used, because manipulators exploiting these operations need not grasp an object firmly and need not support its whole weight. This paper focuses on several aspects of robotic tumbling operations. First, an analysis of three kinds of quasi-static tumbling operations, the stand-up operation, the tumble-over operation, and the lay-down operation, will be described. Second, a control system architecture suitable for these operations will be described. In this control system, each manipulator motion function and each sensor function (for example, a touch sensor monitor) is programmed as a primitive task, and the primitive tasks run in parallel. We can easily achieve simultaneous motion of several manipulators and sensor-interactive motion (such as the guarded move) by controlling the running state of the primitive tasks. Third, the three kinds of tumbling operation will be illustrated with experiments involving tumbling a block on a table using multi-jointed fingers.
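The primitive-task architecture above, in which motion and sensor functions run in parallel and a guarded move advances until a touch sensor fires, can be sketched with cooperative coroutines. The round-robin scheduler, step sizes, and simulated touch threshold are illustrative assumptions, not the paper's implementation.

```python
def guarded_move(start, step, touch_at):
    """Primitive task: advance a position until the (simulated) touch
    sensor fires at touch_at; yields each cycle so other primitives
    can run in parallel."""
    pos = start
    while pos < touch_at:
        pos += step
        yield pos

def run_parallel(tasks):
    """Minimal cooperative scheduler: step each primitive task in turn
    until all have finished; record the last value each produced."""
    results = {i: None for i in range(len(tasks))}
    active = dict(enumerate(tasks))
    while active:
        for i in list(active):
            try:
                results[i] = next(active[i])
            except StopIteration:
                del active[i]
    return results
```

Two guarded moves with different step sizes then execute "simultaneously," each stopping at its own contact point, which mirrors how several fingers can close on a block in parallel.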
This paper proposes a method for controlling a space manipulator with an end-effector for grasping a satellite floating in space. Previously published methods have two problems: (1) they have not dealt with the case where the satellite–manipulator–target system has translational and rotational momenta, and (2) they require heavy calculation when the end-effector approaches the target's grasping point linearly. We applied a sliding mode control method: the first problem was resolved by treating the momenta as external disturbances, and the second was dealt with by exploiting the exponential convergence of the errors. We also proved practical stability of the control, meaning that the relative errors can be suppressed within a planned sphere in finite time (for the case where the angular velocities of the manipulator arms are controlled). This control scheme uses observed data obtained on board the spacecraft. Finally, the practicality of the proposed method is investigated and its effectiveness is demonstrated by computer simulations.
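The core sliding mode idea, rejecting the momenta as a bounded external disturbance with a switching gain that dominates it, can be sketched on one-dimensional error dynamics. The double-integrator model, the surface s = ė + λe, the gains, and the constant disturbance are all assumptions for illustration; the paper's multi-body formulation is far richer.

```python
def sliding_mode_step(e, de, lam=2.0, k=5.0):
    """One control step: sliding surface s = de + lam*e and the
    switching law u = -k*sgn(s); k must exceed the disturbance bound."""
    s = de + lam * e
    return -k * (1 if s > 0 else -1 if s < 0 else 0)

def simulate(e0, de0, dt=0.001, steps=5000, dist=1.0):
    """Double-integrator error dynamics under a constant disturbance
    `dist` (standing in for the unknown momenta). On the surface the
    error decays exponentially, e(t) ~ exp(-lam*t)."""
    e, de = e0, de0
    for _ in range(steps):
        u = sliding_mode_step(e, de)
        dde = u + dist          # disturbance enters as a bounded term
        de += dde * dt
        e += de * dt
    return e, de
```

Despite the persistent disturbance, the error is driven into a small band around zero in finite time, which is the discrete-time analogue of the practical stability property claimed above.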
A position/stiffness-based manipulation control method for a multifingered robot hand is proposed. In the proposed method, an object is manipulated by controlling each fingertip trajectory according to the desired trajectory of the manipulated object. Furthermore, stiffness/damping control is introduced at each fingertip to achieve stability and the desired dynamics of the manipulation. Compared with conventional force-based manipulation, the proposed control method uses a rather simple algorithm and, because it is based on position control, enables finger motion task plans in both constrained and unconstrained environments without changing control modes. The fundamental formulation of the proposed control method is described in detail, and the dynamics of the manipulation is analyzed. These considerations are confirmed experimentally in two-dimensional manipulation using a two-fingered robot hand with stiffness control capability.
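The idea of commanding each fingertip toward its desired trajectory through a stiffness/damping law, rather than through explicit force control, can be sketched per axis as follows. The gains, the point-mass fingertip model, and the single-axis settling simulation are illustrative assumptions, not the paper's formulation.

```python
def fingertip_force(x, v, x_des, v_des, K=100.0, D=10.0):
    """Stiffness/damping law at one fingertip, per axis:
    f = K*(x_des - x) + D*(v_des - v)."""
    return [K * (xd - xi) + D * (vd - vi)
            for xi, vi, xd, vd in zip(x, v, x_des, v_des)]

def settle(x0, x_des, K=100.0, D=20.0, m=0.1, dt=0.001, steps=2000):
    """A point-mass fingertip under the law above converges to x_des,
    which is the stability property the control relies on."""
    x, v = x0, 0.0
    for _ in range(steps):
        f = K * (x_des - x) + D * (0.0 - v)
        v += (f / m) * dt
        x += v * dt
    return x
```

Because the commanded force vanishes only on the desired trajectory, the same law serves both free motion and contact: against a constraint the fingertip simply exerts a force proportional to the position error, with no mode switch.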
In this paper, a methodology based on Lyapunov's direct method is proposed to analyze the stability of a multi-link manipulator positioned against a flexible wall with collision phenomena. The stability and response of the system are examined through parameter studies in numerical simulation. As demands for rapid robot motion have been increasing in industry to achieve higher efficiency, collision has become a problem to be solved, because every task in which a manipulator acts on an object involves contact. However, little research has addressed this problem. In this paper, we employ a Hertz-type model with an energy-loss parameter to express the impact force between the manipulator and the wall. Using this model, we prove the stabilizing effect of collision via the Lyapunov method, and the effect is confirmed by simulation. As a result, stable positioning of the manipulator on the flexible wall is assured, and the use of collision is sometimes effective when controlling a manipulator to perform tasks involving rapid contact with the environment.
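A Hertz-type contact model with an energy-loss term can be sketched as below; the Hunt–Crossley-style damping factor, the stiffness and loss parameters, and the one-dimensional mass-on-wall setup are assumptions standing in for the paper's specific model. The rebound speed coming out lower than the approach speed is exactly the energy loss on impact that drives the stabilizing effect.

```python
def contact_force(delta, ddelta, k=1e4, lam=0.5):
    """Hertz-type impact force with hysteretic damping:
    F = k*delta^1.5 * (1 + lam*ddelta), zero when out of contact
    (delta is the penetration depth, ddelta its rate)."""
    if delta <= 0.0:
        return 0.0
    return max(0.0, k * delta ** 1.5 * (1.0 + lam * ddelta))

def impact(m=1.0, v0=-1.0, dt=1e-5, max_steps=10 ** 6):
    """A unit mass approaches a flexible wall at x = 0 with speed |v0|;
    integrate through compression and restitution and return the
    rebound speed, which the damping term makes smaller than |v0|."""
    x, v = 0.0, v0
    for _ in range(max_steps):
        f = contact_force(-x, -v)   # penetration depth and rate
        v += (f / m) * dt
        x += v * dt
        if x > 0.0 and v > 0.0:
            break
    return v
```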
This paper proposes a new teleoperation system in which man and robot cooperatively execute tasks. First, in the new system, the roles of man and robot in the generation of task procedures and in task execution change dynamically. That is, while in the conventional supervisory system the role of man is limited to the generation of task procedures and that of the robot to task execution, in the new system man can participate in task execution and the robot can participate in the generation of task procedures. Second, in the new system, the generation of task procedures and task execution are interleaved. That is, while in the supervisory system a task proceeds as a batch process, in the newly proposed cooperative system it proceeds interactively. MEISTER (Model Enhanced Intelligent and Skilful Teleoperational Robot) is an example of such a cooperative task execution system; it integrates a teleoperation-oriented knowledge base, a cooperative maneuvering system, and a teleoperated-motion understanding system. A chemical experiment task demonstrates the effectiveness of man-robot cooperation.