-
[in Japanese]
1994 Volume 12 Issue 5 Pages 645
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Ren C. Luo
1994 Volume 12 Issue 5 Pages 646-649
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Masatoshi Ishikawa, Hiro Yamasaki
1994 Volume 12 Issue 5 Pages 650-655
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Motoyuki Akamatsu, Takeshi Kasai
1994 Volume 12 Issue 5 Pages 656-663
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Yoshinori Yamaguchi, Kenji Toda, Kenji Nishida, Eiichi Takahashi
1994 Volume 12 Issue 5 Pages 664-671
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Hideto Ide, Masafumi Uchida, Syuichi Yokoyama
1994 Volume 12 Issue 5 Pages 672-676
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
In this investigation, we studied the utility of electric and mechanical stimuli as visual substitutes, using mechanical stimuli to present characters and electric stimuli to present colors. To reduce learning time and increase the recognition rate, we devised an apparatus, connected to a microcomputer, that can present Japanese sentences, including Chinese characters, with a 10 × 10 array of 100 vibrators. Color recognition is achieved using electric stimuli delivered through two Ag-AgCl electrodes attached to the root of the middle finger. A long pulse (which feels warm) stands for red, a short pulse (which feels cold) for blue, and a sequence of 10 pulses per second (which feels like a twitter) for green. The current was 200 μA.
In this investigation we examined 25 subjects. The recognition rate for the three colors (red, blue, and green) was 100% for most subjects.
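The abstract's color code is effectively a small lookup table; a minimal sketch (the pulse labels are qualitative, and only the 10 pulses/s rate and the 200 μA current are stated in the abstract, so any other field would be an assumption):

```python
# Electro-cutaneous color code from the abstract, delivered at 200 uA
# through two Ag-AgCl electrodes at the root of the middle finger.
CURRENT_UA = 200

COLOR_CODE = {
    "red":   {"pattern": "long pulse",  "sensation": "warm",    "rate_hz": None},
    "blue":  {"pattern": "short pulse", "sensation": "cold",    "rate_hz": None},
    "green": {"pattern": "pulse train", "sensation": "twitter", "rate_hz": 10},
}

def stimulus_for(color):
    """Look up the stimulus parameters used to present a color."""
    return COLOR_CODE[color]
```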
-
Shigeyuki Sakane, Tomohiko Ishikawa, Tomomasa Sato
1994 Volume 12 Issue 5 Pages 677-684
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
Robots require various sensor fusion techniques to achieve manipulation tasks effectively. This paper presents a method of fusing vision and force information to estimate the contact position between a grasped object and another object in the environment. This technique plays an especially important role in assembly tasks, since manipulation of an object using visual information alone often runs into difficulties because of occlusions caused by the grasped object, surrounding objects, and the manipulator itself. In such situations, force sensor information helps to estimate the contact position even when the exact contact position is invisible. Consequently, sensor fusion of vision and force improves the adaptability of robot systems to changing situations in the task. Experiments using an actual robot system demonstrate the usefulness of the proposed method. We also discuss research issues in sensor planning for selecting sensor information, taking the fusion of vision and force into account.
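One standard way force information constrains an invisible contact point is through the wrist wrench: if a single contact force f acts at position r, the sensed torque is τ = r × f, which pins r to a line of action. A minimal sketch of that geometric step (not the paper's full vision-force fusion):

```python
import numpy as np

def contact_line_point(f, tau):
    """Minimum-norm point on the contact force's line of action,
    recovered from a force/torque reading where tau = r x f.
    Any point r0 + lambda*f on the line reproduces the same torque."""
    return np.cross(f, tau) / np.dot(f, f)

# simulate a contact at a known point and recover the line
r_true = np.array([0.10, 0.05, 0.20])        # metres, in sensor frame
f = np.array([0.0, 0.0, -5.0])               # newtons
tau = np.cross(r_true, f)                    # what the sensor would read
r0 = contact_line_point(f, tau)
assert np.allclose(np.cross(r0, f), tau)     # r0 lies on the contact line
```

Vision then selects the actual contact point along this line, which is how the two modalities complement each other.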
-
Shigemi Nagata, Daiki Masumoto, Hiroshi Yamakawa, Takashi Kimoto
1994 Volume 12 Issue 5 Pages 685-694
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
Human beings perceive the physical world by integrating various sensory inputs, information about their own motor system, and their knowledge. To process sensory information, the human brain has a hierarchical parallel distributed processing mechanism. Sensor fusion technology focuses on simulating the brain's sensory information processing and is intended for advanced sensing systems that cannot be built on unimodal sensory information processing.
Our study of sensor fusion aims to develop a hierarchical sensory-motor fusion mechanism for achieving intentional sensing: the concept that sensing has a goal of perception (an intention of sensing) and that sensing behaviors must be oriented toward achieving that goal.
In this paper, we propose a hierarchical sensory-motor fusion model with neural networks for intentional sensing, together with an iterative inversion method that exploits multi-layer neural networks as a solution to the ill-posed inverse problem. We applied the hierarchical sensory-motor fusion model to a three-dimensional object recognition system and a vision-based robot arm control system, and demonstrated the effectiveness of the proposed model by computer simulation. We confirmed that the model accepts and propagates intentions, tightly couples recognition and action, and can perform various tasks without rebuilding or retraining the sensing system.
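Iterative inversion of a trained multi-layer network is commonly realized as gradient descent on the *input* while the weights stay frozen; a minimal sketch under that assumption (a tiny two-layer network with random weights, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 2))     # frozen "trained" weights, layer 1
W2 = rng.normal(size=(2, 8))     # frozen "trained" weights, layer 2

def forward(x):
    """Two-layer network: y = W2 tanh(W1 x)."""
    return W2 @ np.tanh(W1 @ x)

def invert(y_target, x0, lr=0.01, steps=3000):
    """Iteratively adjust the network input by gradient descent so the
    output approaches y_target; only x changes, never the weights."""
    x = x0.copy()
    for _ in range(steps):
        h = W1 @ x
        err = forward(x) - y_target
        # gradient of 0.5*||forward(x) - y_target||^2 with respect to x
        x -= lr * (W1.T @ ((1 - np.tanh(h) ** 2) * (W2.T @ err)))
    return x

x_true = np.array([0.3, -0.5])
y_goal = forward(x_true)
x_est = invert(y_goal, np.zeros(2))
# the output error shrinks relative to the initial guess
assert np.linalg.norm(forward(x_est) - y_goal) < np.linalg.norm(forward(np.zeros(2)) - y_goal)
```

The ill-posedness shows up as multiple inputs mapping near the same output; the iteration simply settles on one of them, which is why the paper's hierarchical intentions matter for selecting a useful solution.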
-
—A Bayesian Fusion Method Using Internal Sensory Data and Knowledge about Work Space—
Yojiro Tonouchi, Takashi Tsubouchi, Suguru Arimoto
1994 Volume 12 Issue 5 Pages 695-699
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
Conventional dead reckoning for estimating a mobile robot's position intrinsically accumulates errors. Nonzero probability in its error distribution is occasionally found even outside the work space, because dead reckoning takes no account of the closed extent of the work space. This paper presents a new estimation algorithm for dead reckoning that fuses knowledge of the closed work space with the position estimates by means of Bayesian inference. The new algorithm theoretically produces zero probability outside the work space. The effectiveness of the proposed algorithm is confirmed through computer simulation.
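The fusion step can be sketched in one dimension on a discrete grid: the Gaussian dead-reckoning density is multiplied by a prior that is zero outside the work space and then renormalized (a numerical sketch of the idea, not the paper's formulation):

```python
import numpy as np

def fuse_with_workspace(mu, sigma, lo, hi, n=4001):
    """Bayes fusion of a Gaussian dead-reckoning estimate with the
    knowledge that the robot lies inside the work space [lo, hi]:
    probability outside the work space is forced to zero exactly."""
    x = np.linspace(mu - 6 * sigma, mu + 6 * sigma, n)
    p = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    p = p * ((x >= lo) & (x <= hi))   # zero probability outside
    p /= p.sum()                      # renormalize on the grid
    return x, p, float((x * p).sum())

# near a wall at 10 m, the fused estimate is pulled back inside
x, p, mean = fuse_with_workspace(mu=9.8, sigma=0.5, lo=0.0, hi=10.0)
assert p[x > 10.0].sum() == 0.0 and mean < 9.8
```

The posterior mean shifts away from the wall because the truncation removes the probability mass that dead reckoning alone would have placed outside the work space.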
-
Yasuhiro Taniguchi, Yoshiaki Shirai, Minoru Asada
1994 Volume 12 Issue 5 Pages 700-707
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
In this paper, we propose a scene interpretation system that makes use of intermediate results of multiple visual sensor information processing for efficient object recognition. By using intermediate results, scene interpretation starts as soon as the necessary information is obtained. The interpretation can therefore be performed at an early stage, and the total computation time can be reduced. Since the multi-stage stereo method needs more time to obtain range information than color image segmentation does, the intermediate result of the stereo method is sent to a fusion process to make the interpretation process efficient. The fusion process begins object matching when the information necessary for object recognition is obtained. We apply the method to scenes including cars and obstacles on the ground, and show to what extent the use of intermediate results reduces computation time.
-
Yutaka Sakaguchi
1994 Volume 12 Issue 5 Pages 708-714
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
In understanding circumstances, we human beings direct our attention to appropriate information sources and collect necessary information according to our purpose. The author previously formalized such an attentional perception process as sequential experimental design based on an information criterion and described a concrete algorithm. In the present article, the author extends the algorithm with the idea of prediction so as to estimate a time-varying object's state efficiently. The algorithm predicts the object's state using an internal state-transition model and observes the object with the most informative sensor when the ambiguity of the prediction exceeds a specified limit. The author applies the algorithm to the problem of estimating the position of a moving target observed intermittently by a camera, and investigates its behavior through numerical experiments. The results show that the system turns the camera's visual field in the proper directions at the proper times and estimates the target's position with the specified accuracy using fewer observations.
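The predict-then-observe loop can be sketched with a scalar filter: variance grows at each prediction step, and the camera is queried only when it crosses the ambiguity limit (all parameters here are illustrative, not from the paper):

```python
def intermittent_tracking(z_seq, q=0.04, r=0.01, limit=0.1):
    """Prediction-driven observation strategy: a scalar constant-position
    model predicts the target state each step, and an observation is
    taken only when the prediction variance exceeds `limit`.
    Returns the final estimate and the number of observations used."""
    x, p = z_seq[0], r          # initialize from a first observation
    n_obs = 1
    for z in z_seq[1:]:
        p += q                  # predict: ambiguity (variance) grows
        if p > limit:           # too ambiguous -> observe the target
            k = p / (p + r)     # scalar Kalman gain
            x += k * (z - x)
            p *= 1 - k
            n_obs += 1
    return x, n_obs

positions = [0.02 * t for t in range(50)]    # slowly moving target
est, n_obs = intermittent_tracking(positions)
assert n_obs < len(positions)                # fewer observations than steps
```

This reproduces the qualitative claim of the abstract: accuracy is bounded by the variance limit while the number of camera observations stays well below the number of time steps.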
-
Toshiharu Mukai, Masatoshi Ishikawa
1994 Volume 12 Issue 5 Pages 715-721
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
An active sensing method for sensor fusion systems with sensors and actuators is proposed. To realize active sensing with multiple sensors, three problems must be solved: (1) where to move the sensors, (2) how to associate data, and (3) how to fuse data. The authors propose a new method concerned mainly with (1). The method uses the estimated errors of the estimated values to determine sensor locations where useful data can be obtained and data can be effectively associated. An algorithm that calculates nearly optimal sensor locations, instead of optimal ones, is also proposed to reduce computation. As examples, the active sensing method is applied to target tracking by a system with two hand-eye cameras on manipulators, and to vision-touch fusion by a system with a camera and a tactile sensor. With this method, the sensing strategy is constructed according to what is to be measured.
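A common "nearly optimal" shortcut for problem (1) is a greedy one-step choice: evaluate each candidate sensor placement as a linear-Gaussian measurement and pick the one that most reduces the estimation error covariance. A sketch under that assumption (the candidate model and gains are invented here, not the paper's):

```python
import numpy as np

def best_sensor_location(p_prior, candidates):
    """Greedy one-step sensor placement: among candidate measurements,
    each given as (H row, noise variance), pick the index that minimizes
    the trace of the posterior covariance after a Kalman update."""
    best_idx, best_trace = None, np.inf
    for i, (h, r) in enumerate(candidates):
        h = np.atleast_2d(np.asarray(h, dtype=float))
        s = h @ p_prior @ h.T + r            # innovation variance
        k = p_prior @ h.T / s                # Kalman gain
        p_post = p_prior - k @ h @ p_prior   # posterior covariance
        if np.trace(p_post) < best_trace:
            best_idx, best_trace = i, np.trace(p_post)
    return best_idx, best_trace

p = np.diag([1.0, 0.01])                 # x very uncertain, y well known
cands = [([1.0, 0.0], 0.1),              # a viewpoint measuring x
         ([0.0, 1.0], 0.1)]              # a viewpoint measuring y
idx, _ = best_sensor_location(p, cands)
assert idx == 0                          # look along the uncertain axis
```

Greedy selection avoids searching over full observation sequences, which matches the paper's motivation of trading optimality for reduced computation.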
-
Teruo Yamaguchi, Kota Takahashi, Hiro Yamasaki
1994 Volume 12 Issue 5 Pages 722-728
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
An advanced visual sensing system operating in a structurally unconstrained environment will need the ability to realize both visual stabilization and target gazing. In this paper, a new method of acquiring both through sensor fusion is proposed.
It is important to choose an appropriate basis for sensor fusion; we propose to integrate the visual system with angular velocity sensors and the gaze control system through a unique constraint condition that governs the relation between the measured and controlled variables.
We also propose that such a visual sensing system be realized by integrating the subsystems through their (angular) velocity information. Because the velocity field in the image can now be calculated in real time, and angular velocity is handled directly by both the rate gyroscope and the gaze control motor, such an integration is expected to suit a visual system that requires quick response. The relation between the proposed system and “intentional sensing” is also discussed.
-
Suguru Arimoto
1994 Volume 12 Issue 5 Pages 729-735
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
[in Japanese]
1994 Volume 12 Issue 5 Pages 736
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
[in Japanese]
1994 Volume 12 Issue 5 Pages 739
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Takafumi Matsumaru, Nobuto Matsuhira
1994 Volume 12 Issue 5 Pages 743-750
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
This paper describes the development of a windshield cleaning robot system (WSC). The system is intended for Boeing 747 aircraft, commonly called jumbo jets, parked at airports prior to service. The objects to be cleaned are spots on the windshields caused by collisions with dust, insects, and birds during takeoff and landing. The intention of the new system is that one operator perform the whole job in 10 minutes. The system therefore consists of the manipulator (the arm and the cleaning device), the installation unit, the control unit, and the operation unit. A position and force control method is applied: the target position of the arm tip is modified using signals from the force sensor and the joystick. Under this control method, the pressing force is kept constant and the tip moves so as to follow the shape of the windshields. The various safety features include an interference limit that restricts the area of movement. System experiments were carried out, confirming the effectiveness of applying a lightweight, long-armed manipulator to this work.
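Modifying a position target from a force reading is the classic admittance idea; a 1-D sketch with a spring-like windshield contact (the gain and stiffness values are illustrative, not from the paper):

```python
def modify_target(x, f_measured, f_desired, compliance=0.002):
    """One control cycle of force-based target modification: push the
    tip target further in when the pressing force is too low, retract
    it when the force is too high."""
    return x + compliance * (f_desired - f_measured)

# toy 1-D contact: the windshield behaves like a spring of stiffness k
k, f_desired = 500.0, 10.0   # N/m, N
x = 0.0                      # tip penetration against the surface
for _ in range(20):
    f = k * max(x, 0.0)      # measured pressing force
    x = modify_target(x, f, f_desired)
# the pressing force settles at the desired constant value
assert abs(k * x - f_desired) < 1e-6
```

The loop is stable as long as compliance × stiffness stays below 2, which is why such gains are tuned to the softest expected contact.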
-
Toru Omata, Kazuyuki Nagata, Shigenobu Iwatsuki, Masayoshi Kakikura
1994 Volume 12 Issue 5 Pages 751-758
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
A multifingered hand can reorient an object by regrasping it within the hand. This paper discusses a twirling rotation of a prism and develops a planner that searches for a sequence of finger repositionings and their new contact positions. During the regrasp motion, the remaining fingers have to keep grasping the object. We have already developed an algorithm for computing finger contact positions at which the fingers can maintain equilibrium. The algorithm can compute a new contact position for a repositioned finger such that another finger can be repositioned next (the one-step regrasp problem), and also such that a finger can be repositioned two steps later (the two-step regrasp problem). The two-step regrasp problem takes more time to solve than the one-step problem, so the planner basically solves the one-step regrasp problem sequentially and resorts to the two-step regrasp problem only when the one-step problem has no solution. Examples show that the planner with this strategy efficiently finds a sequence of finger repositionings to reorient a prism.
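The planner's fallback strategy can be sketched independently of the contact-equilibrium computation; the two solver callbacks below are hypothetical stand-ins for the paper's one-step and two-step searches:

```python
def plan_regrasp(state, goal, solve_one_step, solve_two_step):
    """Strategy sketch: repeatedly try the cheap one-step regrasp search,
    and fall back to the more expensive two-step search only when the
    one-step search finds no next grasp. Returns the grasp sequence,
    or None if neither search can make progress."""
    sequence = []
    while state != goal:
        nxt = solve_one_step(state)
        if nxt is None:
            nxt = solve_two_step(state)    # costly fallback
        if nxt is None:
            return None                    # no regrasp sequence exists
        sequence.append(nxt)
        state = nxt
    return sequence

# toy state space: integer grasps, goal 5; one-step moves stall at 3
one_step = lambda s: s + 1 if s < 3 else None
two_step = lambda s: s + 2 if s < 5 else None
assert plan_regrasp(0, 5, one_step, two_step) == [1, 2, 3, 5]
```

The payoff is exactly the paper's point: the expensive search runs only at the few states where the cheap one dead-ends.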
-
Shigeo Hirose, Edwardo F. Fukushima, Shin'ichi Tsukagoshi
1994 Volume 12 Issue 5 Pages 759-765
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
This paper investigates an optimal steering control method for the articulated body mobile robot KORYU-II (KR-II), considering energy consumption and trajectory tracking performance as the optimization criteria. Computer simulations of the basic control methods for KR-II's θ-axis (bending motion between the segments) and s-axis (rotation of the wheels) lead to the conclusion that the best methods are the “moving average shift method: θ2” combined with the “position control with small proportional gain method: θ4” for the θ-axis, and the “torque control method: s3” for the s-axis. The θ2 method takes the average value of KR-II's foremost segment's control angle θ0 over the time to travel the distance L (the segment center-to-center distance) as the next segment's command θ1, and shifts the command θ1 to the following segments according to the distance moved. The θ4 method sets the proportional gain much smaller than the conventional value. The s3 method controls the velocity of the robot with an equally distributed torque command for all the wheels. Experiments with the mechanical model KR-II revealed that, although trajectory tracking performance deteriorates somewhat, the introduced control greatly reduces energy consumption and produces very smooth locomotion.
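The θ2 moving-average-shift idea can be sketched with a toy discretization in which commands are updated once per segment length L of travel (class and method names are invented here for illustration):

```python
class MovingAverageShift:
    """Sketch of the theta-2 command shift: the head segment's steering
    angle, averaged over one segment length L of travel, becomes the
    command for segment 1 and is then shifted down the body as the
    robot advances."""

    def __init__(self, n_segments):
        self.commands = [0.0] * n_segments   # command per trailing segment
        self.head_log = []                   # head angles since last shift

    def head_step(self, head_angle):
        """Record the head segment's control angle at one sample."""
        self.head_log.append(head_angle)

    def advance_one_L(self):
        """Called each time the robot has travelled distance L:
        average the logged head angles and shift commands tailward."""
        avg = sum(self.head_log) / len(self.head_log)
        self.head_log = []
        self.commands = [avg] + self.commands[:-1]
        return self.commands

m = MovingAverageShift(3)
m.head_step(0.1); m.head_step(0.3)
cmds = m.advance_one_L()      # segment 1 now follows the head's average
```

Each trailing joint thus replays the head's smoothed trajectory with a delay proportional to its distance along the body, which is what lets the body follow the head's path.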
-
Kiyoshi Nagai, Tsuneo Yoshikawa
1994 Volume 12 Issue 5 Pages 766-772
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
An impedance control scheme for redundant macro-micro manipulators is proposed. In this scheme, we can specify not only the desired mechanical impedance of the end effector but also that of the macro manipulator, by considering the internal force applied at the tip of the macro manipulator. The scheme can exploit the merits of both the macro and micro manipulators: by specifying a suitable set of desired mechanical impedances, compliant motion can be realized without excessive joint torque in the macro manipulator, and the wide motion range of the macro manipulator is used effectively to compensate for the narrow motion range of the micro manipulator. The validity of the proposed control scheme is shown by several simulation results.
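A "desired mechanical impedance" is a target mass-damper-spring response to external force; a 1-D Euler-integrated sketch (illustrative gains, and deliberately not modeling the paper's separate macro/micro impedance allocation):

```python
def impedance_step(x, v, f_ext, m=1.0, d=20.0, k=100.0, dt=0.001):
    """One Euler step of a 1-D target impedance M x'' + D x' + K x = f_ext.
    The controller makes the mechanism *behave* like this virtual
    mass-damper-spring regardless of its true dynamics."""
    a = (f_ext - d * v - k * x) / m
    return x + dt * v, v + dt * a

# a constant 10 N contact force drives the system toward x = F/K = 0.1 m
x = v = 0.0
for _ in range(10000):
    x, v = impedance_step(x, v, f_ext=10.0)
assert abs(x - 0.1) < 1e-3
```

Choosing a low stiffness K yields compliant motion under contact, which is the behavior the macro-micro allocation distributes between the two arms.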
-
Koichi Hashimoto, Takumi Ebine, Hidenori Kimura
1994 Volume 12 Issue 5 Pages 773-778
Published: July 15, 1994
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
The visual servoing system is composed of an object and an eye-in-hand robot. The object moves around the robot's work space, and the robot tracks it using a visual sensor mounted on the hand. This paper proposes a control-theoretic formulation of the visual servoing system. The system is modeled by the perspective transformation of the camera and the kinematic transformation of the robot, and is linearized about the reference point, yielding a time-invariant MIMO model. An optimal control approach is proposed to design a robust feedback controller. Controllability and stability of the system are also discussed. Real-time experiments on a PUMA 560 are carried out to evaluate the proposed approach and compare it with previously proposed algorithms.
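For a linearized time-invariant model, the standard optimal-control design is discrete LQR; a sketch computing the feedback gain by Riccati iteration on a toy system (the matrices are invented, not the paper's camera-robot model):

```python
import numpy as np

def lqr_gain(a, b, q, r, iters=500):
    """Infinite-horizon discrete LQR gain via fixed-point iteration of
    the Riccati equation: K = (R + B'PB)^-1 B'PA, P = Q + A'P(A - BK)."""
    p = q.copy()
    for _ in range(iters):
        k = np.linalg.solve(r + b.T @ p @ b, b.T @ p @ a)
        p = q + a.T @ p @ (a - b @ k)
    return k

a = np.array([[1.0, 0.1],     # toy linearized feature-error dynamics
              [0.0, 1.0]])
b = np.array([[0.0],          # camera-velocity input channel
              [0.1]])
k = lqr_gain(a, b, np.eye(2), np.eye(1) * 0.1)
# closed loop u = -Kx is stable: spectral radius of A - BK below 1
assert max(abs(np.linalg.eigvals(a - b @ k))) < 1.0
```

In the paper's setting the state would be the image-feature tracking error and the input the commanded hand motion; the stability check above mirrors the closed-loop analysis the abstract mentions.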