Methodology of R&D of micro-nano-robots (MNR) based on modelling of the dissipative equations of MNR motion is described. Control of microsystems in which the dominating disturbances are thermal and quantum noises is considered. Quantum interaction can be considered only between the object and the observation system. Control through physical fields, micromanipulators, and so on can have a continuous classical significance. With quantum interaction, the errors of microcontrol can still be below the natural thermal fluctuation levels. The problem of the physical limits of accuracy of microcontrol is discussed on the basis of concrete, but rather general, examples.
A small mobile machine was built which has two legs made of bimorph piezoelectric actuators. It proceeds by vibrating one leg and making selective use of the friction between the legs and the floor. Simulations were made to investigate how the machine can proceed. The movement of the machine was measured using an accelerometer and compared with the simulation results. The comparison showed that the viscous friction model explains the movement better than the Coulomb friction model. Four-legged machines were also built and tested for multidimensional mobility.
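The comparison between the two friction models can be sketched in simulation. The following is a minimal illustration with assumed parameters (mass, friction coefficients, and time step are not those of the actual machine): a sliding mass decelerated either by Coulomb friction (constant force opposing motion) or by viscous friction (force proportional to velocity).

```python
mu_N = 0.02   # Coulomb friction force magnitude [N] (assumed)
c = 0.05      # viscous friction coefficient [N*s/m] (assumed)
m = 0.01      # sliding mass [kg] (assumed)

def step_coulomb(v, dt):
    """One Euler step under Coulomb friction, with a stiction clamp near v = 0."""
    dv = dt * mu_N / m
    if abs(v) <= dv:
        return 0.0                      # too slow to overcome friction: stop
    return v - dv if v > 0 else v + dv

def step_viscous(v, dt):
    """One Euler step under viscous friction: m dv/dt = -c v."""
    return v * (1.0 - dt * c / m)

def simulate(step, v0=0.1, dt=1e-3, steps=1000):
    v, hist = v0, [v0]
    for _ in range(steps):
        v = step(v, dt)
        hist.append(v)
    return hist

vc = simulate(step_coulomb)   # stops completely in finite time
vv = simulate(step_viscous)   # decays exponentially, never exactly zero
```

The qualitative difference is what an accelerometer comparison can pick out: Coulomb friction brings the velocity to exactly zero in finite time, whereas viscous friction only decays it exponentially.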
This paper describes the development of a vision-based mobile robot called Hyper Scooter. A user can ride on Hyper Scooter and instruct it on the spot, in a natural way, to act according to his or her intention. Hyper Scooter is battery powered, carries a single driver, and has a system that integrates a correlation-based high-speed tracking vision system with a user interface for interacting with the robot through visual information. The features of our system are: (1) to make the mobile robot personal, we designed the system on the premise that a user rides on it; (2) the user requires no special knowledge; (3) instruction and execution are immediate, supported by our high-speed vision system. Experimental results show the feasibility of our system: the user can instruct Hyper Scooter to replicate a complex trail autonomously and instantly, without an explicit environmental model.
We propose a linear approximation method for the inverse kinematics of a manipulator using a binocular visual space. When we use a hand-eye system whose structure is similar to that of a human being, we can approximate the transformation from the binocular visual space to the joint space of the manipulator by a linear function. We show that the structure of a stereo camera and a manipulator most suitable for linear approximation of the inverse kinematics is similar to that of a human being.
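The idea of a linear (affine) approximation of the inverse kinematics can be sketched as follows. This is an illustrative setup, not the paper's: a planar 2-link arm with assumed link lengths, with the hand's Cartesian position standing in for the binocular visual-space coordinates, and a least-squares fit of an affine map from visual space to joint space over a small workspace region.

```python
import numpy as np

l1, l2 = 0.3, 0.25                     # link lengths [m] (assumed)

def forward(theta1, theta2):
    """Forward kinematics of a planar 2-link arm (stand-in for visual space)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Sample joint angles in a small region and record the "visual" coordinates.
rng = np.random.default_rng(0)
thetas = rng.uniform([0.9, 0.9], [1.1, 1.1], size=(200, 2))
vis = np.array([forward(t1, t2) for t1, t2 in thetas])

# Fit theta ~ A^T [x, y, 1]: a local affine approximation of the inverse
# kinematics, obtained by least squares.
X = np.hstack([vis, np.ones((len(vis), 1))])
A, *_ = np.linalg.lstsq(X, thetas, rcond=None)

rms = np.sqrt(np.mean((X @ A - thetas) ** 2))   # residual of the linear map
```

Over a small region the residual is dominated by second-order terms of the true (nonlinear) inverse kinematics, which is why a locally linear map can be adequate.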
If a robot arm with characteristics similar to those of a human arm is realized, it will open new possibilities in the application of robots. One typical example of such characteristics is adjustable visco-elasticity. A human arm has far more muscles than degrees of freedom and can adjust its joint stiffness at will. This property allows us to perform various tasks successfully. In this paper, to design a robot that can work with human beings cooperatively, comfortably, and safely in the same area, a tendon-driven manipulator with the following properties is developed. 1. It has 7 D.O.F., and its linkage configuration and dimensions are similar to those of a human arm. 2. It has ten tendons together with non-linear spring tensioning devices (NST), which make it possible to adjust the mechanical joint stiffness over a very wide range. Using these properties, the robot can match its impedance to the environment, including a human coworker, and perform various tasks effectively. The mechanism and control problems of the 7-D.O.F. tendon-driven robotic arm with non-linear spring tensioning devices are discussed. Several experimental results are also given to show their effectiveness.
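Why a non-linear tendon spring allows stiffness adjustment can be sketched with a standard antagonistic-pair model (assumed spring law and dimensions, not the NST design itself): with a quadratic spring T(x) = k x^2 on each tendon of an antagonistic pair over a pulley of radius r, the joint torque is tau = r k (x1^2 - x2^2) while the joint stiffness is K = 2 r^2 k (x1 + x2), so co-contracting both tendons raises stiffness without changing torque.

```python
k = 1.0e4   # quadratic spring constant [N/m^2] (assumed)
r = 0.02    # pulley radius [m] (assumed)

def joint_torque(x1, x2):
    """Net joint torque from an antagonistic pair with T(x) = k x^2."""
    return r * k * (x1**2 - x2**2)

def joint_stiffness(x1, x2):
    """Joint stiffness K = -d(tau)/dq = 2 r^2 k (x1 + x2)."""
    return 2 * r**2 * k * (x1 + x2)

# Equal stretches give zero net torque at any co-contraction level,
# but the stiffness grows with the co-contraction.
low = joint_stiffness(0.01, 0.01)
high = joint_stiffness(0.05, 0.05)
```

With a linear spring, dT/dx would be constant and the stiffness could not be modulated this way; the non-linearity is what decouples stiffness from torque.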
This paper describes a practical active vision system called Sheila that is equipped with two differentiated visual fields. Sheila detects insulator images with peripheral vision and shifts its fixation point sequentially to each object to examine it precisely. Insulator images in windows expose characteristic peaks in the local frequency domain. Such patterns are detected by two sorts of cell detectors, the periodicity cells and the directionality cells. The output signals of the cells are transmitted to a neural network that estimates where insulators exist in the image. Experimental results for a transformer substation showed the practical potential of Sheila.
To realize desired motions, feedforward inputs play an important role. One way to form the feedforward input is “Learning Control” based on actual iterative operations. The disadvantages of such learning schemes are summarized as follows. (1) For each desired trajectory we need several trials, which may take a long time. (2) To memorize many ideal input patterns, a large amount of memory space in a digital computer is necessary. To overcome these difficulties, we propose interpolation methods in this paper. In the proposed methods, several ideal input patterns are first obtained through learning control and stored in memory. Next, by interpolating the acquired ideal input patterns, another ideal input pattern is formed without iterative operation of the learning control. We explain one method applicable to the case in which the spatial trajectories of the desired motions are the same but the time trajectories differ. For the case in which the spatial trajectories differ but have a linear relation in task-oriented coordinates, another interpolation method is presented to obtain approximate input patterns. In experiments in which a robot moves along a conveyor with different speeds or different spatial trajectories, the effectiveness of the proposed methods is demonstrated.
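The first case, same spatial trajectory at different speeds, can be sketched as follows. This is an illustrative construction, not the paper's exact scheme: two "learned" input patterns for durations T1 and T2 are stored on a normalized time axis, and a pattern for an intermediate duration is formed by linear interpolation, with no further learning trials. The toy patterns below assume a plant where the required input scales with 1/T^2 under time scaling.

```python
import numpy as np

T1, T2 = 1.0, 2.0
s = np.linspace(0.0, 1.0, 101)      # normalized time axis s = t/T

# Toy stand-ins for learned ideal input patterns (assumed shape).
u1 = np.sin(2 * np.pi * s) / T1**2  # learned for duration T1
u2 = np.sin(2 * np.pi * s) / T2**2  # learned for duration T2

def interpolate(T):
    """Form an approximate input pattern for duration T without new trials."""
    w = (T - T1) / (T2 - T1)        # linear weight in the duration
    return (1 - w) * u1 + w * u2

u_mid = interpolate(1.5)            # approximate pattern for T = 1.5
```

The interpolated pattern reproduces the stored ones exactly at T1 and T2 and approximates the ideal input in between, which matches the abstract's characterization of the result as approximate.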
We have developed a force-controlled robot system for finishing tasks such as grinding, chamfering, and polishing in machining plants. Some force-controlled robot systems have already been developed; however, they are not widely used because they are difficult to use. In particular, it is very difficult for robot users to select or decide the many force control parameters, for example the desired force, force control direction, force detecting direction, and force control gains, in conventional robot languages. To solve this problem, we have analyzed finishing tasks and found that the force control parameters depend closely on the tasks. We have therefore developed a task-directed robot language. The force control parameters are assigned by designating only three key parameters, a task name (TASK), a tool code (TCODE), and a tool center point (TCP), and one force control motion command (GMOVE/CMOVE) in the task-directed robot language. This robot system has been applied to chamfering the edges between the cast surface and the machined surface of machine parts. In this way the effectiveness of the task-directed robot language has been demonstrated.
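The core idea, expanding a compact task-level command into the full parameter set, can be sketched as a table lookup. Everything below is hypothetical: the table contents, parameter names, and values are illustrative, not those of the actual system or its GMOVE/CMOVE commands.

```python
# Hypothetical task table: (TASK, TCODE) -> force control parameters.
TASK_TABLE = {
    ("CHAMFER", "T01"): {"desired_force_N": 20.0,
                         "force_direction": (0.0, 0.0, -1.0),
                         "force_gain": 0.5},
    ("POLISH", "T02"):  {"desired_force_N": 5.0,
                         "force_direction": (0.0, 0.0, -1.0),
                         "force_gain": 0.2},
}

def gmove(task, tcode, tcp, target):
    """Expand a GMOVE-style command: the user gives only TASK, TCODE, TCP
    and a target; the force control parameters come from the task table."""
    params = dict(TASK_TABLE[(task, tcode)])
    params["tcp"] = tcp
    params["target"] = target
    return params

cmd = gmove("CHAMFER", "T01", tcp=(0.0, 0.0, 0.12), target=(0.5, 0.1, 0.0))
```

The point of the design is that the task analysis is done once, offline, when the table is built; the end user never touches gains or directions.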
We propose an efficient and simple method for finding a collision-free path and orientation for a rigid robot in a 3-D environment. The method uses an octree to represent every object (the robot and static/dynamic obstacles) in the environment without any distinction as to the movability of objects; therefore, all objects are dealt with in the same way. The path of a robot from its starting position to the given goal with arbitrary motion (i.e., translation and rotation) in a 3-D environment is efficiently searched for in successive adjoining regions (octree white nodes) by using the potential field generated from each black node of the octree. The algorithm is simple, so it can easily be accelerated by parallelization techniques. Experimental results obtained under several conditions are reported, and finally, the efficiency of the proposed method against the complexity of the environment is discussed.
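The black-node/white-node search can be sketched on a simplified discretization. In this illustrative version, a uniform 3-D occupancy grid stands in for the octree leaves (occupied cells play the role of black nodes, free cells of white nodes), each black node contributes a repulsive 1/d potential plus an attractive goal term, and the path through free cells is found by Dijkstra search weighted by that potential. Grid size, obstacle, and potential weights are all assumed.

```python
import heapq
import numpy as np

N = 8
occ = np.zeros((N, N, N), dtype=bool)
occ[3:5, 3:5, :] = True                      # a wall-like obstacle (assumed)
black = np.argwhere(occ)                     # "black nodes"
goal = (7, 7, 7)

def potential(cell):
    """Repulsion from every black node plus attraction toward the goal."""
    c = np.asarray(cell, dtype=float)
    rep = sum(1.0 / max(np.linalg.norm(c - b), 1.0) for b in black)
    return rep + np.linalg.norm(c - np.asarray(goal))

def neighbors(cell):
    x, y, z = cell
    for dx, dy, dz in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
        n = (x + dx, y + dy, z + dz)
        if all(0 <= v < N for v in n) and not occ[n]:
            yield n                          # adjoining "white node"

def search(start):
    """Dijkstra over free cells with the potential as the step cost."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist[cell]:
            continue
        for n in neighbors(cell):
            nd = d + potential(n)
            if nd < dist.get(n, float("inf")):
                dist[n], prev[n] = nd, cell
                heapq.heappush(heap, (nd, n))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

path = search((0, 0, 0))
```

An octree version differs mainly in that the "cells" are variable-sized white nodes and adjacency is found through the tree, which is where the claimed efficiency comes from.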
An implementation of a biped robot capable of dynamic walking by a simple nonlinear control algorithm is presented. Four D.C. servo motors actuate the knee and ankle joints of the legs of the robot. The biped is constrained to the sagittal plane, and motion generation is reduced to the problem of controlling the position and velocity of the robot's center of gravity. These are controlled by a nonlinear feedback controller based on a simple feedback linearization method. Several design issues, including the mechanical structure, leg actuation, and control system of the robot, are discussed. Experimental results demonstrate the effectiveness of the algorithm.
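The feedback linearization step can be sketched on a stand-in plant (an inverted-pendulum-like model of the center of gravity with assumed parameters, not the robot's actual dynamics): for theta_ddot = a sin(theta) + b u, the control u = (v - a sin(theta)) / b cancels the nonlinearity, and a PD virtual input v leaves linear, exponentially stable error dynamics.

```python
import math

a, b = 9.8, 1.0          # stand-in plant parameters (assumed)
kp, kd = 25.0, 10.0      # PD gains on the linearized system (assumed)
theta_d = 0.0            # setpoint

def control(theta, dtheta):
    v = -kp * (theta - theta_d) - kd * dtheta   # virtual linear input
    return (v - a * math.sin(theta)) / b        # cancel the nonlinearity

# Simulate: the closed loop behaves like theta_ddot = v (double poles at -5).
theta, dtheta, dt = 0.3, 0.0, 1e-3
for _ in range(5000):
    u = control(theta, dtheta)
    ddtheta = a * math.sin(theta) + b * u
    dtheta += dt * ddtheta
    theta += dt * dtheta
```

The appeal of the method for walking control is exactly this reduction: once the nonlinearity is cancelled, the center-of-gravity position and velocity obey designer-chosen linear dynamics.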
This paper presents a method of pushing an object with point contact to control its position and orientation along a given trajectory. First, using the given distribution of frictional forces between the object and a support surface, we find a particular point, called the pseudo center, about which the motion of the pushed object can be approximated by the motion of a wheeled mobile robot about its center. Then, a control rule for the pushing operation is derived by applying a tracking control rule for a nonholonomic mobile robot at the pseudo center. This method makes it possible for a robot to perform tracking control in the pushing operation. A simulation result shows the effectiveness of the proposed method. Finally, we give an approach to using a mobile manipulator to realize the pushing operation. Experimental verification of the proposed method is performed and its results are described.
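Once the pushed object is modeled as a unicycle at the pseudo center, a standard tracking rule for nonholonomic mobile robots applies. The sketch below uses a Kanayama-style rule with assumed gains (not the paper's specific rule) to track a straight-line reference at constant speed.

```python
import math

kx, ky, kth = 2.0, 10.0, 5.0   # tracking gains (assumed)
vr, wr = 0.1, 0.0              # reference forward/angular velocity

def track_step(state, ref, dt):
    x, y, th = state
    xr, yr, thr = ref
    # Tracking error expressed in the (pseudo-center) body frame.
    ex = math.cos(th) * (xr - x) + math.sin(th) * (yr - y)
    ey = -math.sin(th) * (xr - x) + math.cos(th) * (yr - y)
    eth = thr - th
    # Kanayama-style control law for a unicycle.
    v = vr * math.cos(eth) + kx * ex
    w = wr + vr * (ky * ey + kth * math.sin(eth))
    # Integrate the unicycle kinematics one step.
    return (x + dt * v * math.cos(th),
            y + dt * v * math.sin(th),
            th + dt * w)

state = (0.0, 0.05, 0.3)       # start off the line and misaligned
dt = 1e-2
for i in range(2000):
    state = track_step(state, (vr * i * dt, 0.0, 0.0), dt)
```

The lateral offset and heading error both converge to zero, which is the behavior the pushing controller inherits by acting at the pseudo center.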
We consider that the stability of biped walking depends on the contact forces acting on the foot. From this point of view, this report suggests a method for designing the joint trajectories of a biped walking robot based on the contact forces. The relation between the contact forces, obtained using the Newton-Euler method, and the joint motions is derived. The joint trajectories are then decided by substituting the contact forces, based on the ZMP criterion, as design parameters into this relation. The most important point of this method is that the derived equations between the contact forces and the joint motions can be applied to both the single-leg support phase and the double-leg support phase by changing the restraining condition on the swinging leg.
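The link between designed joint (CoM) motion and the contact-force criterion can be sketched with the standard cart-table model, which is a simplification of the Newton-Euler relation used in the report: for a point mass at constant height z_c, the zero-moment point along x is x_zmp = x_c - (z_c / g) * xddot_c, and keeping it inside the support polygon keeps the foot from tipping. Heights and the CoM motion below are assumed.

```python
import numpy as np

g, z_c = 9.8, 0.8                   # gravity, CoM height [m] (assumed)

def zmp_x(x_c, xdd_c):
    """ZMP along x for the cart-table model at constant CoM height."""
    return x_c - (z_c / g) * xdd_c

# A smooth CoM shift from 0 to 0.1 m over 1 s (quintic: zero end vel./accel.).
t = np.linspace(0.0, 1.0, 201)
s = 10 * t**3 - 15 * t**4 + 6 * t**5
x_c = 0.1 * s
xdd_c = 0.1 * (60 * t - 180 * t**2 + 120 * t**3)

zmp = zmp_x(x_c, xdd_c)
```

During the initial acceleration the ZMP swings behind the CoM, which is exactly the kind of excursion the design method must keep inside the support polygon.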
This paper presents a method of manipulating contact states for robotic assembly based on a multi-agent system. Each agent has a model of the local contact states between corners of the assembly objects. The model is represented by a 3-DOF contact state transition network and includes motion constraints for each contact state. Based on this model, each agent individually senses the contact state and suggests a robot motion toward the next contact state together with motion constraints. The suggestions are integrated, and the robot motion is determined so as to satisfy the motion constraints. This method is applied in simulations of four kinds of insertion tasks: single-peg insertion, dual-peg insertion, L-shaped-peg insertion, and T-shaped-peg insertion. The results are presented to show the effectiveness of this method.
We developed a 6-axis force/torque sensor that can be mounted in the fingertip of a multifingered hand. It is known that the contact information between a fingertip and a grasped object, that is, the contact point location, the contact force, and the contact moment directed along the normal of the contact surface, can be calculated from the output data of a 6-axis force/torque sensor. The output data of a 6-axis force/torque sensor are, however, usually disturbed by noise. For precise contact information sensing, we need to evaluate the error that this noise causes in the calculation of the contact information. This paper analyzes the error and shows that the maximum singular value of the Jacobian, which represents the differential effect between the calculated contact point and the strain-gage output, can be used to evaluate the error. Based on this error analysis, we show a sensing technique that provides precise contact information. We also show which of the algorithms for hard and soft contacts, proposed by Salisbury and Bicchi respectively, should be used to sense a contact point.
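The hard-contact calculation can be sketched for a spherical fingertip (an illustrative noise-free setup; the radius and the synthetic data are assumed): the sensor measures force f and moment m about its center, with m = p x f at the contact point p. The general solution of that equation is p = (f x m)/|f|^2 + lam*f, and lam is fixed by requiring p to lie on the fingertip sphere |p| = R.

```python
import numpy as np

R = 0.01                                  # fingertip radius [m] (assumed)

def contact_point(f, m):
    """Contact point on a sphere of radius R from a 6-axis F/T reading,
    assuming hard point contact and noise-free data."""
    f, m = np.asarray(f, float), np.asarray(m, float)
    p0 = np.cross(f, m) / np.dot(f, f)    # minimum-norm solution of m = p x f
    # p0 is orthogonal to f, so |p0 + lam f|^2 = R^2 gives lam directly.
    lam = -np.sqrt((R**2 - np.dot(p0, p0)) / np.dot(f, f))
    # Negative root: for a force pushing into the fingertip, f . p < 0.
    return p0 + lam * f

# Synthetic check: a known contact point generates (f, m); we recover it.
p_true = np.array([0.0, 0.0, R])
f = np.array([0.1, 0.0, -1.0])            # force pushing into the fingertip
m = np.cross(p_true, f)
p_est = contact_point(f, m)
```

With noisy data the term under the square root can go negative, which is one concrete way the sensitivity analyzed in the paper shows up in practice.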
Several position identification methods have been used for mobile robots. Dead reckoning is a popular method, but it is not reliable over long distances or on uneven surfaces because of variations in wheel diameter and slippage. The landmark method, which estimates the current position relative to landmarks, cannot be used in an uncharted environment. We have proposed a new method called “Cooperative Positioning System (CPS).” For CPS, we divide the robots into two groups, A and B. One group, A, remains stationary and acts as a landmark while group B moves. Group B then stops and acts as a landmark for group A. This “dance” is repeated until the target position is reached. CPS has a far lower accumulation of positioning error than dead reckoning and can work in three dimensions, which is not possible with dead reckoning. Also, CPS has inherent landmarks and therefore works in uncharted environments. In this paper, we discuss CPS with redundancy, which consists of a number of moving robots, and present a new computational scheme that integrates positional information obtained from various combinations of multiple robots. Experimental results with a second prototype CPS machine model (CPS-II) give a positioning accuracy of 0.32% for position and 0.43 [degree] for attitude without redundancy, and 0.12% for position and 0.32 [degree] for attitude with redundancy.
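Why CPS accumulates less error than dead reckoning can be sketched with a toy 1-D noise model (all noise magnitudes and step counts below are assumed, not measured values): dead reckoning adds an independent error at every odometry increment, so the error random-walks with distance, whereas a CPS-style scheme adds one measurement error per move-and-measure "dance", so far fewer noisy increments accumulate over the same travel.

```python
import random

random.seed(0)

def dead_reckoning_error(n_increments, sigma=0.01):
    """Accumulated error with independent noise at every odometry increment."""
    e = 0.0
    for _ in range(n_increments):
        e += random.gauss(0.0, sigma)
    return abs(e)

def cps_error(n_dances, sigma=0.01):
    """Accumulated error with one measurement error per CPS dance."""
    e = 0.0
    for _ in range(n_dances):
        e += random.gauss(0.0, sigma)
    return abs(e)

trials = 500
# Same travel distance: 1000 odometry increments vs 10 CPS dances (assumed).
dr = sum(dead_reckoning_error(1000) for _ in range(trials)) / trials
cps = sum(cps_error(10) for _ in range(trials)) / trials
```

In this model the error standard deviation grows with the square root of the number of noisy increments, so reducing the increment count is what buys CPS its lower accumulation.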
In this paper, we consider position control of a rigid ball held between two parallel plates. The kinematic constraint of the system is a rolling constraint, which is nonholonomic; thus the system becomes a driftless nonlinear system. As Brockett showed, the equilibrium of such a system cannot be stabilized by any continuous static state feedback, even though the system is controllable in the nonlinear sense. We apply a control strategy that transforms this system into the time-state control form by a coordinate transformation and an input transformation. Through simulation, we show that both the ball and the plate can be stabilized at the goal point.