The objective of this work is to develop a new robot intelligence for human-machine communication and environment-machine interaction based on a self-preservation function, which is thought to be involved in the human mind. In this paper, a system chart expressing human brain information processing and the development of an autonomous mobile robot, “WAMOEBA-1R” (Waseda Artificial Mind On Emotion BAse), are described. The concept behind the WAMOEBA-1R design is that robots should have a Self-Preservation Evaluation Function as an emotion function. Furthermore, a method to evaluate the whole system is described from the viewpoint of animal psychology. In the experiments, WAMOEBA-1R showed specific reactions, accompanied by emotional color displays, to certain stimuli. WAMOEBA-1R acquired a sense of values about colors and sounds based on Self-Preservation, and achieved a new type of human-machine communication by expressing emotion through color.
Human mastication is performed by coordinated motions of several muscles attached to the jaws. To clarify the functions of these muscles, we have developed a jaw-movement simulator (JSN/S1). The simulator consists of a 2-DOF mechanism and four muscle actuators, and is capable of realizing jaw movements in the sagittal plane. Each actuator is a cable tendon driven by a DC servo motor, controlled under a compliance control scheme to reproduce the viscoelastic characteristics of muscle. To simulate life-like clenching motion, we attempted to control the position and force at the incisal point by incorporating position/force sensors at that point and employing a neural-network-based learning control scheme. Consequently, the trajectory and force at the incisal point successfully converged to the desired values through the learning process.
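The compliance control scheme mentioned above can be illustrated with a minimal spring-damper law. This is only a sketch of the general technique; the gains and values below are illustrative assumptions, not parameters from the paper:

```python
def compliance_force(k: float, b: float,
                     x_des: float, x: float,
                     v_des: float, v: float) -> float:
    """Viscoelastic actuator force under a simple compliance control law:
    a virtual spring (stiffness k) and damper (viscosity b) pull the
    muscle cable toward the desired position and velocity."""
    return k * (x_des - x) + b * (v_des - v)

# Illustrative values: 2 cm position error, 0.1 m/s unwanted velocity
f = compliance_force(k=100.0, b=10.0, x_des=0.02, x=0.0, v_des=0.0, v=0.1)
```

Tuning k and b changes the apparent stiffness and damping of the simulated muscle, which is what lets a position-controlled motor emulate viscoelastic tissue.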
This paper describes an efficient computation algorithm for the marginal external force space of a power grasp. A general algorithm for computing the space is provided for a 3-dimensional grasp with multiple contacts, including defective contacts, by a multi-fingered robot hand. Evaluation of the marginal external force space requires the computation of a convex hull in a 6-dimensional space, which is known to have O(n^⌊(d+1)/2⌋) computational complexity. For more realistic evaluation, we propose computing the 3-dimensional section of the marginal external force space and establish its computational scheme. Numerical examples for a 3-dimensional grasp by a 3-fingered robot hand illustrate the efficiency and usefulness of the proposed algorithms.
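The cost motivation can be made concrete with the worst-case hull size n^⌊(d+1)/2⌋ (the upper bound theorem for convex polytopes). A minimal sketch comparing the full 6-D computation with its 3-D section, using an assumed point count of 1000:

```python
def hull_size_bound(n: int, d: int) -> int:
    """Worst-case number of faces of the convex hull of n points in
    d dimensions: n ** floor((d + 1) / 2), up to a constant factor."""
    return n ** ((d + 1) // 2)

# Assumed n = 1000 candidate points in wrench space:
full_6d = hull_size_bound(1000, 6)     # grows as n**3
section_3d = hull_size_bound(1000, 3)  # grows as n**2
```

For this assumed n, the 3-D section reduces the worst-case output size by a factor of n, which is the practical payoff of sectioning claimed above.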
A method for synchronous position control of a multi-axis mechatronic servo system is required to attain high-accuracy contour control in sealing, laser cutting, and similar processes. An accurate contour control method for a multi-link robot arm is proposed, based on the synchronous positioning of mutual axes. The proposed method was evaluated in experiments on an actual multi-link robot arm and in simulation under disturbance-torque conditions, and showed satisfactory performance.
New methods for teaching and controlling contour-tracking tasks based on contact sensing are developed to improve robotic machining. Accurate contact-point detection is introduced to estimate a work contour path with high reliability. A framework is described in which control references for a robotic manipulator are calculated from the estimated work contour path and the task specifications. Teaching methods are then developed, assuming that the contact sensing method is applied both to obtain the work contour path in the teaching process and to obtain the path during task execution. These teaching methods reduce the effort of task teaching and improve adaptability to changes in task specifications without re-teaching. Real-time contour-tracking control using this contact sensing to suppress in-process work uncertainties is also presented. Experimental results for practical grinding tasks using a 6-DOF manipulator validate our methods.
This paper describes the prototype of an optical range sensor with circular scanning of a beam spot and its fundamental characteristics. The prototype sensor, the ORANGES system (an abbreviation of Optical RANGE Sensor system), provides optical range sensing at each point sequentially along a circular path, which is a locus on the surface of an object generated by the circular scanning of a beam spot. The ORANGES system comprises a mechanism for generating the circular scanning of a laser beam spot and a mechanism for measuring the distance to it. The former projects a laser beam spot onto the surface of the object along the circular path. The latter consists of four devices, each a collecting cylindrical lens mounted on a PSD sensor, arranged at perpendicular intersections to each other. Each device detects the x- or y-axis component of a distance from the displacement of the image of the light spot on its PSD sensor by triangulation. The range data are taken from the two of the four devices selected by the combination of the x- and y-axis components. Moreover, the ORANGES system can perform range sensing of the same laser beam spot from four different directions. The experimental results confirm a range accuracy within ±0.2 [mm] (±3σ, σ: standard deviation, in a dark room) over the range of 30-60 [mm] and within ±2.0 [mm] (±3σ) over the range of 60-100 [mm].
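The per-device distance computation is ordinary triangulation: the displacement of the spot image on the PSD maps to range through the baseline and lens geometry. A minimal sketch under a simple pinhole model, with illustrative parameters that are not the prototype's:

```python
def range_by_triangulation(baseline_mm: float, focal_mm: float,
                           image_disp_mm: float) -> float:
    """Distance to the laser spot from the displacement of its image
    on a PSD, via the similar-triangles (pinhole) relation
    z = baseline * focal_length / displacement."""
    if image_disp_mm <= 0.0:
        raise ValueError("image displacement must be positive")
    return baseline_mm * focal_mm / image_disp_mm
```

With an assumed 20 mm baseline and 10 mm focal length, a 4 mm image displacement corresponds to a 50 mm range; real PSD optics require calibration terms omitted in this sketch.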
This paper describes a new observation system for monitoring the operation of a micro machine, and a controller based on the visual servoing method for controlling a micro machine. The proposed observation system has only one CCD camera. However, it can be converted into a stereo vision system by using two mirrors. By using the new controller, systems can be controlled based on visual information on image planes rather than in three dimensional space. Our two proposed methods result in a system that is highly accurate, simple and inexpensive. Experimental results on a prototype system reveal the validity and effectiveness of the observation system and the controller based on the visual servoing method.
This paper deals with computerized description of medical care, a new way to support medical information processing, and shows its feasibility through experiments on visual understanding of medical care performed by a doctor. Computerized description of medical care supports the collection of medical information by automatically sensing the doctor's behavior and inputting it into a computer during medical care. The computer senses the doctor's behavior using visual, aural, and force sensors, and uses these sensor outputs to understand what care is being performed. Visual understanding of medical care is a core function of the computerized description of medical care. Taking the application of medicine to the ear, nose, and throat in otolaryngology clinics as a typical example, visual understanding of medical care was successfully performed. The experiment showed that visual behavior understanding is realized more easily by monitoring the objects used in the behavior than by the conventional method of monitoring the persons themselves. The computerized description of medical care presented in this paper will contribute to improving the quality of medical service by reducing the burden of medical information input and recording on doctors.
Several position identification methods have been used for mobile robots. Dead reckoning is a popular method, but it is not reliable over long distances, especially on uneven surfaces, because of the accumulation of errors due to wheel-diameter variation and slippage. The landmark method, which estimates the current position relative to landmarks, cannot be used in an uncharted environment. We have proposed a new method called “Cooperative Positioning System (CPS).” In CPS, we divide the robots into two groups, A and B. One group, A, remains stationary and acts as a landmark while group B moves. Group B then stops and acts as a landmark for group A. This “dance” is repeated until the target position is reached. CPS has a far lower accumulation of positioning error than dead reckoning and can work in three dimensions, which is not possible with dead reckoning. Also, CPS has inherent landmarks and therefore works in uncharted environments. In previous papers, we introduced the second prototype CPS machine model, CPS-II, and its experimental results. In this paper, we show the relationship between the configuration of moving robots in CPS-II and its positioning accuracy using an analytical technique, and propose an optimum moving strategy to minimize positioning error even after the robots move long distances.
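Why the “dance” accumulates less error can be seen with a simple independent-error model: dead reckoning adds odometry noise continuously along the path, while CPS adds one relative-measurement error per leapfrog move. The noise magnitudes below are assumptions for illustration only, not values from the paper:

```python
import math

def dead_reckoning_std(distance_m: float, odo_noise_per_sqrt_m: float) -> float:
    """Position error std when every metre travelled contributes
    independent odometry noise (wheel-diameter error, slip)."""
    return odo_noise_per_sqrt_m * math.sqrt(distance_m)

def cps_std(distance_m: float, move_len_m: float, meas_noise_m: float) -> float:
    """Position error std when only one relative measurement error
    is accumulated per leapfrog move of length move_len_m."""
    n_moves = distance_m / move_len_m
    return meas_noise_m * math.sqrt(n_moves)
```

With an assumed 5 cm/√m odometry noise, a 100 m run gives about 0.5 m of dead-reckoning error, while CPS with 10 m moves and 1 cm measurement noise gives roughly 0.03 m under this model.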
Calibration is essential for correctly manipulating an object with a hand-eye system. In this paper, we propose a method of hand-eye calibration using a conic pattern when the cameras are fixed in the environment. The hand-eye calibration is implemented by measuring the center of gravity of the conic pattern on the hand of a manipulator using calibrated stereo vision. Finally, we analyze the error of the calibration.
This paper presents stability analysis and robust control of a force-controlled arm having a rigid tip body whose mass center lies on the central axis of the arm. We consider link flexibility as uncertainty and derive the dynamic equations of the force-controlled arm. As the obtained boundary condition is nonhomogeneous, we introduce a change of variables to derive homogeneous boundary conditions. On the basis of a finite-dimensional modal model of distributed-parameter systems, the stability of the force feedback is analyzed using the root locus technique, and an optimal controller with a low-pass property is constructed as a robust controller. Simulations have been carried out.
To realize a haptic function in an engineering system, deciding the motion of a tactile sensor for a given sensing purpose is as important as the development of the tactile sensor itself. We define such a “motion of a tactile sensor for sensing” as a haptic motion in engineering and argue its necessity in this paper. In particular, we note that a tactile sensor cannot obtain any information without contact between the sensor and the object, and argue that haptic motions are necessary not only when the object is much bigger than the sensor but also when the sensor is the same size as or much bigger than the object. As one form of such haptic motions, we propose a haptic motion of a distributed tactile sensor for obtaining tactile patterns and analyze the proposed motion mathematically.
In harmonic drive gears, the load torque can be sensed from a flexible part of the gear. A method using strain gages was proposed, but it is not widely utilized due to insufficient accuracy and residual ripple. We analyze the errors and propose an arrangement of the strain gages that improves both the accuracy and the ripple of the torque sensing to the 1% level of the gear's torque capacity.
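One common way a gage arrangement can cancel ripple (shown here as a generic sketch, not necessarily the arrangement proposed in the paper) exploits the fact that the flexspline deforms elliptically, so its strain ripple has two cycles per wave-generator revolution; averaging two gage sets placed 90° apart puts their second-harmonic ripples in antiphase. The ripple model and amplitudes below are assumptions:

```python
import math

def gage_signal(theta: float, torque: float, ripple_amp: float) -> float:
    """Assumed gage model: torque-proportional strain plus a
    second-harmonic ripple from the elliptical flexspline deformation."""
    return torque + ripple_amp * math.cos(2.0 * theta)

def paired_reading(theta: float, torque: float, ripple_amp: float) -> float:
    """Average two gage sets 90 deg apart: cos(2*theta) and
    cos(2*theta + pi) cancel, leaving only the torque term."""
    return 0.5 * (gage_signal(theta, torque, ripple_amp)
                  + gage_signal(theta + math.pi / 2.0, torque, ripple_amp))
```

Under this assumed model the paired reading is ripple-free at any wave-generator angle; in practice, higher harmonics and gage placement errors set the residual ripple level.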
In this research, our goal is for a mobile robot to learn to move between subgoals without human interaction. The robot has no knowledge of the environment, such as the positions of subgoals and obstacles. Unfortunately, robot learning is unavoidably influenced by errors in the real world. We suggest how to learn to move efficiently between subgoals by resetting these errors. In our system, we define a distinctive place at which the robot's movement control changes from sensor-based to coordinate-based. A distinctive place is identified using only local information, and the errors accumulated during movement between two subgoals are reset at distinctive places. First, the robot moves to search for subgoals. Next, it moves between subgoals repeatedly and learns to move efficiently. We conducted experiments in a real environment and found that the robot's movement gradually improved through learning.
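The effect of resetting at distinctive places can be sketched with a toy drift model, assuming a constant per-step odometry bias (e.g., a miscalibrated wheel diameter); all numbers are illustrative, not from the experiments:

```python
def worst_error(step_errors, reset_every=None):
    """Largest accumulated position error along a run.  If reset_every
    is set, the accumulated error is zeroed every reset_every steps,
    modelling re-localization at a distinctive place."""
    acc = 0.0
    worst = 0.0
    for i, e in enumerate(step_errors, start=1):
        acc += e
        worst = max(worst, abs(acc))
        if reset_every and i % reset_every == 0:
            acc = 0.0  # re-localized from local sensor information
    return worst

drift = [0.1] * 100                           # assumed 0.1 m bias per step
unbounded = worst_error(drift)                # grows to about 10 m
bounded = worst_error(drift, reset_every=10)  # stays near 1 m
```

Under this model the error without resets grows without bound with distance, while resets keep it bounded by the drift accumulated between consecutive distinctive places.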