Journal of Robotics and Mechatronics
Online ISSN : 1883-8049
Print ISSN : 0915-3942
ISSN-L : 0915-3942
Volume 27, Issue 2
Displaying 1-12 of 12 articles from this issue
Special Issue on Vision and Motion Control
  • Toyomi Fujita, Takayuki Tanaka, Satoru Takahashi, Hidenori Takauji, Sh ...
    Article type: Editorial
    2015 Volume 27 Issue 2 p. 121
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Robot vision is an important robotics and mechatronics technology for realizing intelligent robot systems that work in the real world. Recent improvements in computer processing are enabling environments to be recognized and robots to be controlled based on dynamic, high-speed, highly accurate image information. In industrial applications, target objects are detected much more robustly and reliably through high-speed processing. In intelligent systems applications, computer-vision-based security systems that detect human beings have recently seen active deployment. Another attractive application is recognizing actions and gestures by detecting human motion – an application that would enable human beings and robots to interact and cooperate more smoothly when robots observe and assist human partners. This key technology could be used for aiding the elderly and handicapped in practical environments such as hospitals and homes.

    This special issue covers topics on robot vision and motion control including dynamic image processing. These articles are certain to be both informative and interesting to robotics and mechatronics researchers. We thank the authors for submitting their work and for assisting during the review process. We also thank the reviewers for their dedicated time and effort.

  • Toshiaki Tsuji, Kunihiro Ogata
    Article type: Review
    2015 Volume 27 Issue 2 p. 122-125
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Many efforts are being undertaken in rehabilitation care to improve functions by introducing assist devices. Many such devices make learning more effective by providing the user with augmented feedback on sensor information. Of the several modalities used to achieve this effect, this paper focuses on technological trends in rehabilitation assist devices that use visual feedback. Specifically, the paper deals mainly with devices that use visualization technology to process and display sensor device information.

  • Dong Liang, Shun’ichi Kaneko, Yutaka Satoh
    Article type: Paper
    2015 Volume 27 Issue 2 p. 126-135
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    An ideal similarity measure for image matching should be discriminative, producing a conspicuous correlation peak and suppressing false local maxima. Image matching tasks in practice, however, often involve complex conditions, such as blurring and fluctuating illumination, which may cause the similarity measure to be insufficiently discriminative. We utilized a robust scene modeling method to model the appearance of an image and propose an associated similarity measure for image matching. The proposed method uses a spatio-temporal learning stage to select a group of supporting pixels for each target pixel, then builds a differential statistical model of them to describe the uniqueness of the spatial structure and to provide illumination invariance for robust matching. We applied this method to image matching in several challenging environments. Experimental results show that the proposed similarity measure produces explicit correlation peaks and achieves robust image matching.
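
The differential statistic idea in the abstract above can be illustrated with a minimal sketch: compare the signs of intensity differences over a fixed set of pixel pairs, which is invariant to uniform brightness shifts. The pair list, margin, and scoring below are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def sign_difference_similarity(template, target, pairs, margin=10):
    """Hypothetical sketch: compare signs of intensity differences
    over fixed pixel pairs (i, j) in template vs. target. A pair
    votes only when the template difference exceeds `margin`, which
    gives some robustness to uniform illumination change."""
    t = template.astype(np.int32).ravel()
    g = target.astype(np.int32).ravel()
    votes, agree = 0, 0
    for i, j in pairs:
        d = t[i] - t[j]
        if abs(d) < margin:          # unstable pair, skip
            continue
        votes += 1
        if np.sign(d) == np.sign(g[i] - g[j]):
            agree += 1
    return agree / votes if votes else 0.0
```

A uniformly brightened copy of the template scores 1.0, while a contrast-reversed image scores 0.0, illustrating why such a measure keeps a sharp peak under illumination change.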

  • Yuki Okafuji, Takanori Fukao, Hiroshi Inou
    Article type: Paper
    2015 Volume 27 Issue 2 p. 136-145
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Recently, various driving support systems have been developed to improve safety. However, because drivers occasionally feel that something is wrong with such systems, the systems need to be designed based on the information that drivers actually perceive. We therefore focused on optical flow, one of the visual cues humans use to drive. Humans are said to perceive the direction of self-motion from optical flow and to utilize it during driving. By applying the optical flow model to automatic steering systems, a human-oriented system might be developed. In this paper, we derive the focus of expansion (FOE) in the camera frame, which represents the direction of self-motion in optical flow, and propose a nonlinear control method based on the FOE. The effectiveness of the proposed method was verified through a vehicle simulation, and the results showed that the proposed method reproduces human behavior. Based on these results, this approach may serve as a foundation for human-oriented system designs.
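
For purely translational motion, the FOE is the point from which all flow vectors radiate, so it can be recovered by least squares from a sparse flow field. The sketch below is a generic textbook estimate, not the paper's derivation or control law.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion: under pure translation,
    each flow vector (u, v) at (x, y) points away from the FOE,
    so v*(x - fx) - u*(y - fy) = 0 for every point. Stacking this
    over all points gives a linear system in (fx, fy)."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

On a synthetic expanding flow field the recovered FOE matches the expansion center to numerical precision.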

  • Kenichi Tokuda, Tatsuya Hirayama, Tetsuya Kinugasa, Takafumi Haji, His ...
    Article type: Paper
    2015 Volume 27 Issue 2 p. 146-155
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Camera images become important information when a rescue robot is operated by remote control, but mounting cameras is difficult on a rescue robot crawler that must get into cracks in rubble. We propose attaching cameras behind the crawler shoe. The biggest problem then, however, is that the shoe obstructs large parts of the camera image. To avoid this, we developed real-time image processing that complements the obstructed area through the use of two cameras; the covered-area detection method, based on brightness changes in the image and parallax correction, does not depend on a common field of vision. We then performed evaluation experiments to confirm the effectiveness of the proposed technique.

  • Daiki Kobayashi, Tomohito Takubo, Atsushi Ueno
    Article type: Paper
    2015 Volume 27 Issue 2 p. 156-166
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    This paper proposes a model-based 3D footstep planning method. A discrete-time kinematic model, in which vertical motions are independent of horizontal motions, describes the biped walking of the humanoid robot. The 3D field environment is represented by geographical features divided into meshes, determined from measurements obtained by a sensor, with the inclination of each mesh assumed known. The optimal plan is obtained by solving a constrained optimization problem based on the foot placements of the model. A goal-tracking evaluation of the horizontal foot placements is carried out to reach the goal, while vertical motions are adapted to meet constraints consisting of the foot workspace and contact with the 3D field surface. A quadratic programming method is implemented to solve the problem in real time on the humanoid robot NAO.
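
As a rough illustration of the goal-tracking idea only (not the paper's quadratic-programming formulation), a greedy planner can step straight toward the goal under a simple step-length bound standing in for the foot workspace constraint:

```python
import numpy as np

def plan_steps(start, goal, max_step=0.3):
    """Greedy stand-in for the paper's constrained footstep
    optimization: from each placement, step straight toward the
    goal, clipped to a step-length bound that stands in for the
    foot workspace. The paper instead solves a quadratic program
    over the model's foot placements."""
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    path = []
    while np.linalg.norm(goal - pos) > 1e-9:
        d = goal - pos
        n = np.linalg.norm(d)
        if n > max_step:
            d = d / n * max_step   # clip to reachable step length
        pos = pos + d
        path.append(pos.copy())
    return np.array(path)
```

A real footstep planner would also weigh step effort and terrain contact, which is where the QP formulation earns its keep.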

  • Motomasa Tomida, Kiyoshi Hoshino
    Article type: Paper
    2015 Volume 27 Issue 2 p. 167-173
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Operating a robot intentionally using various complex motions of the hands and fingers requires a system that accurately detects hand and finger motions at high speed. This study uses an ultrasmall camera and a compact computer to develop a wearable hand pose estimation device, also called a hand-capture device. Accurate estimation, however, requires matching against a large database, while a compact computer usually has only limited memory and low processing power. We avoided this problem by reducing the commonly used image features from 1,600 dimensions to 64 dimensions of characteristic quantities. This saved memory and lowered computational cost while maintaining high accuracy and speed. To enable an operator to wear the device comfortably, the camera was placed as close to the back of the hand as possible, enabling hand pose estimation from hand images without fingertips. A prototype device with a compact computer used to evaluate performance indicated that the device achieved high-speed estimation. Estimation accuracy was 2.32°±14.61° at the PIP joint of the index finger and 3.06°±10.56° at the CM joint of the thumb – as accurate as previous methods. This indicates that dimensional compression of image-characteristic quantities is important for realizing a compact hand-capture device.
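
Such dimensional compression could, for example, be realized with PCA; the abstract does not specify the reduction method, so the sketch below is one plausible reading rather than the authors' technique.

```python
import numpy as np

def fit_pca(features, k=64):
    """Compress feature vectors (e.g. 1,600-D image features) to k
    dimensions with PCA. Illustrative only: the paper's actual
    reduction method is not specified in the abstract."""
    mean = features.mean(axis=0)
    # SVD of the centered data gives the principal axes as rows of Vt
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:k]

def compress(x, mean, basis):
    return (x - mean) @ basis.T   # k-D code for database matching
```

Matching 64-D codes instead of 1,600-D vectors cuts both memory and per-query cost by roughly 25x, which is the kind of saving a compact wearable computer needs.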

  • Shuichi Akizuki, Manabu Hashimoto
    Article type: Paper
    2015 Volume 27 Issue 2 p. 174-181
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    This paper introduces a stable 3D object detection method that can be applied to complicated scenes consisting of randomly stacked industrial parts. The proposed method uses a 3D vector pair that consists of paired 3D vectors with a shared starting point. By considering the observability of vector pairs, the proposed method has achieved high recognition performance. The observability factor of the vector pair is calculated by simulating the visible state of the vector pair from various viewpoints. By integrating the observability factor and the distinctiveness factor proposed in our previous work, a few vector pairs that are effective for recognition are automatically extracted from an object model, and then used for the matching process. Experiments have confirmed that the proposed method improves the 88.5% recognition success rate of previous state-of-the-art methods to 93.1%. The processing time of the proposed method is fast enough for robotic bin-picking.
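
A crude stand-in for the observability and integration steps is sketched below. The paper simulates visibility by examining the vector pair from many rendered viewpoints; here visibility is reduced to a front-facing test on a single surface normal, and the integration to a simple product, both illustrative assumptions.

```python
import numpy as np

def observability(normal, n_views=1000, seed=0):
    """Fraction of random viewpoints on the unit sphere from which
    a vector pair anchored at a surface point with outward normal
    `normal` is front-facing. A toy stand-in for the paper's
    visibility simulation over rendered views of the model."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_views, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return (dirs @ np.asarray(normal, dtype=float) > 0.0).mean()

def select_pairs(observ, distinct, k=3):
    """Integrate observability and distinctiveness (here a simple
    product; the paper's weighting may differ) and keep the k
    highest-scoring vector pairs for matching."""
    score = np.asarray(observ) * np.asarray(distinct)
    return np.argsort(score)[::-1][:k]
```

An unoccluded plane is visible from about half of all viewpoints, while self-occluding geometry would score lower, which is what lets observability down-weight unreliable pairs.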

  • Gou Koutaki, Keiichi Uchimura
    Article type: Development Report
    2015 Volume 27 Issue 2 p. 182-190
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    The authors developed a low-cost, safe shogi robot system. A Web camera installed on the lower frame recognizes the pieces and their positions on the board, after which the game program computes the next move. A robot arm then moves the selected piece into position when playing against a human player. A fast, robust image processing algorithm is needed because a low-cost wide-angle Web camera and robot are used. The authors describe the image processing and robot systems, then discuss experiments conducted to verify the feasibility of the proposal, showing that even a low-cost system can be highly reliable.

Regular papers
  • Fumitaka Hashikawa, Kazuyuki Morioka
    Article type: Paper
    2015 Volume 27 Issue 2 p. 191-199
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Intelligent space is a space in which many networked sensors are distributed. Its purpose is to provide informational support to human beings and robots based on the integration of sensor information. Specifically, to support location-based applications in intelligent space, the networked sensors must obtain the locations of human beings and robots. To do so, the locations and orientations of the sensors must be known in world coordinates. Because measuring numerous sensor locations accurately by hand is impractical, this study focuses on estimating the locations and orientations of distributed sensors in intelligent space automatically. We propose map sharing between distributed laser range sensors and a mobile robot to estimate the locations of the distributed sensors. By comparing the sensor and robot maps, sensor locations are estimated on a global map built by SLAM on the mobile robot. An ICP matching algorithm is used to improve map matching between the sensors and the robot. Experimental results with actual distributed sensors and a mobile robot show that the proposed system estimates sensor locations satisfactorily and improves the accuracy of the global map built by SLAM.
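
The map matching step can be sketched with a textbook point-to-point ICP in 2D: nearest-neighbor correspondences followed by the SVD (Kabsch) best-fit rigid transform. This is generic ICP, not the authors' implementation.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve the best-fit rotation and
    translation with the SVD (Kabsch) method."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    sc, mc = src.mean(0), matched.mean(0)
    H = (src - sc).T @ (matched - mc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return src @ R.T + t

def icp(src, dst, iters=20):
    """Iterate matching and alignment until the transform settles,
    e.g. to register a sensor's local map onto a SLAM global map."""
    cur = src.copy()
    for _ in range(iters):
        cur = icp_step(cur, dst)
    return cur
```

Given a reasonable initial guess (here, a small displacement), the iterations converge to the rigid transform that overlays the two maps.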

  • Yeng Weng Leong, Hiroaki Seki, Yoshitsugu Kamiya, Masatoshi Hikizu
    Article type: Paper
    2015 Volume 27 Issue 2 p. 200-207
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Mobile devices are caught in an inverse relationship between mobility and ease of use. This paper presents incremental technology building on a previously proposed method of mobile yet easy-to-use input that uses triboacoustic signals generated when a user traces shapes on surfaces in the environment. Mobile devices must function in various environments and therefore must be immune to noise interference. We propose improved accuracy and automated multiple sound source segregation, supported by experiments evidencing the proposal's effectiveness; the results show that the proposal's accuracy and capability have merit and should be pursued further.

  • Huang Xuemei, Su Xinyong, Liu Weihong
    Article type: Paper
    2015 Volume 27 Issue 2 p. 208-214
    Published: April 20, 2015
    Released on J-STAGE: July 01, 2019
    Journal Open Access

    Recognition of encoded targets against industrial backgrounds is a hot topic in digital close-range industrial photogrammetry. To recognize the encoded targets, the center circles of the targets must first be detected. A method was proposed to locate the center circles accurately based on several criteria put forward by the authors, and it was implemented in a program developed in Matlab R2010a. First, in an image preprocessed by binary thresholding and Canny edge detection, straight lines were deleted and a dimension criterion was applied to obtain possible contours of center circles. Then all of these contours were fitted to ellipses using the least squares algorithm, and candidate ellipses were retained except those that did not meet the shape criteria. Finally, to account for shooting error, all candidate ellipses were corrected to obtain the positions of the encoded targets' center circles precisely. A laboratory experiment was conducted to verify the theory, and the results showed the robustness of the proposed method.
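
The least-squares ellipse fitting and center extraction can be sketched as a general conic fit. The F = -1 normalization below is an assumption for illustration; the paper's exact fitting and correction criteria are not reproduced.

```python
import numpy as np

def ellipse_center(xs, ys):
    """Least-squares conic fit A x^2 + B xy + C y^2 + D x + E y = 1
    to edge points, then the ellipse center where the conic's
    gradient vanishes: [2A B; B 2C] [cx; cy] = [-D; -E].
    (The F = -1 normalization fails for conics through the origin.)"""
    M = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(xs), rcond=None)[0]
    return np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
```

On exact points of a rotated ellipse the fit recovers the center to numerical precision; with noisy edge contours the residual also supports the kind of shape screening the paper describes.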
