Journal of Robotics and Mechatronics
Online ISSN : 1883-8049
Print ISSN : 0915-3942
ISSN-L : 0915-3942
Volume 27, Issue 4
Showing 1-15 of 15 articles from the selected issue
Special Issue on Real World Robot Challenge in Tsukuba - Autonomous Technology for Useful Mobile Robot -
  • Yoshihiro Takita, Shin’ichi Yuta, Takashi Tsubouchi, Koichi Ozaki
    Article type: Editorial
    2015 Volume 27 Issue 4 p. 317
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    The first Tsukuba Challenge started in 2007 as a technological challenge for autonomous mobile robots moving around on city walkways. A task was then added involving the search for certain persons. In these and other ways, the challenge provides a test field for developing positive relationships between mobile robots and human beings. To advance autonomous robotics research, this special issue details and clarifies technological problems and solutions found by participants in the challenge.

    We sincerely thank the authors and reviewers for this chance to work with them in these important areas.

  • Shin’ichi Yuta
    Article type: Review
    2015 Volume 27 Issue 4 p. 318-326
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    The Tsukuba Challenge, an open experiment for autonomous mobile robotics researchers, lets mobile robots travel in a real, populated city environment. Building on the 2013 challenge, the task of Tsukuba Challenge 2014 required the mobile robots to navigate autonomously to their destination while looking for and finding specific persons sitting in the environment. A total of 48 teams (54 robots) sought success in this complex challenge.

  • Naoki Akai, Kenji Yamauchi, Kazumichi Inoue, Yasunari Kakigi, Yuki Abe ...
    Article type: Paper
    2015 Volume 27 Issue 4 p. 327-336
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    Held in Japan every year since 2007, the Real World Robot Challenge (RWRC) is a technical challenge for mobile robots. Every robot is given the missions of traveling a long distance and finding specific persons autonomously. The robots must also have an affinity for people and be remotely monitored. In order to complete the missions, we developed a new mobile robot, SARA, which we entered in RWRC 2014. The robot successfully completed all of the missions of the challenge. In this paper, the systems we implemented are detailed. Moreover, results of experiments and of the challenge are presented, and the knowledge we gained through the experience is discussed.

  • Shinya Ohkawa, Yoshihiro Takita, Hisashi Date, Kazuhiro Kobayashi
    Article type: Paper
    2015 Volume 27 Issue 4 p. 337-345
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    This paper discusses an autonomous mobile robot entered in the Real World Robot Challenge 2014 (RWRC) in Tsukuba. Our project was to develop a wheelchair able to navigate stairs autonomously. Step 1 was to develop a center-articulated vehicle, called the AR Chair, which has four wheels and a controller including LIDARs. The center-articulated vehicle has a stiff structure and travels with the front and rear wheels on the same path, so there is no inner wheel difference. The robotic vehicle carries users weighing up to 100 kg. The autonomous controller is the same as that of Smart Dump 7, which completed RWRC 2013, except for the geometrical relationship of the steering angle and the communication commands sent to the AR Chair’s motor drivers. The advantage of the robot is shown by experimental data from the RWRC 2014 final run.

  • Junji Eguchi, Koichi Ozaki
    Article type: Paper
    2015 Volume 27 Issue 4 p. 346-355
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    We describe a navigation method for autonomous mobile robots and detail knowledge obtained through Tsukuba Challenge 2014 trial runs. The challenge requires robots to navigate autonomously for 1.4 km in an urban area and to search for five persons in three areas. Accurate maps are important tools for localization on complex courses in autonomous outdoor navigation. We constructed an occupancy grid map using laser scanners, gyro-assisted odometry, and a differential global positioning system (DGPS). In this study, we use maps as a graphical interface; namely, through the maps we provide environmental information such as untraversable low objects (e.g., curbstones) and areas excluded from the search for “target” persons. To increase map reusability, we developed a waypoint editor that can modify waypoints on maps to fit a course to a situation. We also developed a velocity control method that the robot uses to follow pedestrians and other robots while keeping a safe distance on the course. As a result, our robot reached the goal in five of the seven official trial runs. This indicates that the autonomous navigation method was stable in the Tsukuba Challenge 2014 urban environment.
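
    A minimal sketch of the safe-distance velocity idea described above, assuming a simple linear ramp between a stop distance and a free-driving distance (the paper's actual control law and parameters are not given in the abstract; all names and values here are illustrative):

        # Hypothetical safe-distance velocity limiter.
        def limit_velocity(v_cmd, gap, d_stop=1.0, d_free=4.0, v_max=1.0):
            """Scale the commanded speed by the gap to the pedestrian or robot ahead.
            gap    -- measured distance [m] to the nearest object on the course
            d_stop -- stop completely at or below this distance [m]
            d_free -- drive at the full commanded speed beyond this distance [m]
            """
            if gap <= d_stop:
                return 0.0
            if gap >= d_free:
                return min(v_cmd, v_max)
            # Linear ramp between the stop and free-driving distances.
            return min(v_cmd, v_max) * (gap - d_stop) / (d_free - d_stop)

        # Example: a pedestrian 2.5 m ahead halves a 1.0 m/s command.
        print(limit_velocity(1.0, 2.5))  # -> 0.5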

  • Masatoshi Nomatsu, Youhei Suganuma, Yosuke Yui, Yutaka Uchimura
    Article type: Paper
    2015 Volume 27 Issue 4 p. 356-364
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    In describing real-world self-localization and target-search methods, this paper discusses a mobile robot developed to verify a method proposed for Tsukuba Challenge 2014. The Tsukuba Challenge course includes promenades and parks containing ordinary pedestrians and bicyclists, requiring the robot to move toward a goal while avoiding the moving objects around it. Common self-localization methods often rely on 2D laser range finders (LRFs), but such LRFs do not always capture enough data for localization if, for example, the scanned plane has few landmarks. To solve this problem, we used a three-dimensional (3D) LRF for self-localization. The 3D LRF captures more data than the 2D type, resulting in more robust localization. Robots that provide practical services in real life must, among other functions, recognize a target and serve it autonomously. To enable robots to do so, this paper describes a method for searching for a target by using clustered point clouds from the 3D LRF together with image processing of color images captured by cameras. In Tsukuba Challenge 2014, the robot we developed using the proposed methods completed the course and found the targets, verifying the effectiveness of our proposals.
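
    The cluster-then-classify search can be pictured with a short sketch: greedily cluster the 3D LRF points by Euclidean distance, then gate clusters by a person-sized bounding box before any color check. The tolerance and size limits below are assumptions, not the authors' parameters:

        import numpy as np

        def euclidean_clusters(points, tol=0.3):
            """Greedy single-linkage clustering of an (N, 3) point array."""
            remaining = list(range(len(points)))
            clusters = []
            while remaining:
                seed = remaining.pop()
                cluster, frontier = [seed], [seed]
                while frontier:
                    i = frontier.pop()
                    near = [j for j in remaining
                            if np.linalg.norm(points[i] - points[j]) < tol]
                    for j in near:
                        remaining.remove(j)
                    cluster.extend(near)
                    frontier.extend(near)
                clusters.append(points[cluster])
            return clusters

        def person_sized(cluster):
            """Crude size gate for a seated person (assumed dimensions)."""
            extent = cluster.max(axis=0) - cluster.min(axis=0)
            return 0.3 < extent[2] < 1.4 and extent[0] < 1.0 and extent[1] < 1.0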

  • Kenji Yamauchi, Naoki Akai, Koichi Ozaki
    Article type: Paper
    2015 Volume 27 Issue 4 p. 365-373
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    Extracting the color of a target object from images in environments with varying illumination conditions, such as outdoors, is difficult because the apparent color changes easily. The novel color extraction method we propose enables the exact color of a target object to be extracted using multiple photographs taken with different exposure times. The object’s apparent color transitions as the exposure time changes, and this transition is consistent as long as the environmental light sources do not change significantly, which holds in most outdoor situations. We first show this in an experimental analysis, then detail our proposal. Our method evaluates the transition and realizes precise color extraction of target objects outdoors. We apply this method to an orange cap in the Tsukuba Real-World Robot Challenge. Through experiments, we show that the cap is detected accurately in different environments, and we discuss the method’s effectiveness and usefulness in the real world.
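
    The multi-exposure idea lends itself to a small sketch: under fixed lighting, a pixel's RGB response grows with exposure time along a direction that encodes the surface color, so fitting that slope across several exposures is more stable than thresholding one image. The regression and the orange reference below are illustrative assumptions, not the authors' evaluation method:

        import numpy as np

        def color_direction(images, exposures):
            """images: list of (H, W, 3) float arrays taken at the given exposure
            times. Returns a unit RGB direction per pixel from a least-squares
            slope fit of pixel value against exposure."""
            t = np.asarray(exposures, dtype=float)
            stack = np.stack(images, axis=0)                  # (T, H, W, 3)
            t_c = t - t.mean()
            slope = np.tensordot(t_c, stack, axes=(0, 0)) / (t_c @ t_c)
            norm = np.linalg.norm(slope, axis=-1, keepdims=True)
            return slope / np.maximum(norm, 1e-9)

        def orange_mask(direction, ref=(0.80, 0.53, 0.28), cos_min=0.98):
            """Flag pixels whose color direction matches an orange reference."""
            return direction @ np.asarray(ref) > cos_min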

  • Kento Hosaka, Tetsuo Tomizawa
    Article type: Paper
    2015 Volume 27 Issue 4 p. 374-381
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    The purpose of this study is to develop a system for detecting target persons using a 3D laser scanner. The system consists of two parts: one for grouping and one for determining targets. The grouping part effectively segments individual objects by using two-step grouping. The target-determination part identifies target persons from the grouping results using shape features. Experimental results showed that our proposed system detects targets as well as existing methods do, and that it runs more quickly than they do.
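
    One way to realize a two-step grouping is a coarse pass that bins points into an XY grid, followed by a fine pass that merges occupied neighboring cells into objects, as sketched below. The cell size and merge rule are assumptions, not the authors' parameters:

        import numpy as np
        from collections import defaultdict, deque

        def two_step_grouping(points, cell=0.2):
            """points: (N, 3) array. Returns lists of point indices, one per object."""
            # Step 1: coarse grouping into vertical columns on an XY grid.
            cells = defaultdict(list)
            for i, (x, y, _) in enumerate(points):
                cells[(int(x // cell), int(y // cell))].append(i)
            # Step 2: merge 8-connected occupied cells into objects.
            seen, objects = set(), []
            for start in cells:
                if start in seen:
                    continue
                seen.add(start)
                queue, members = deque([start]), []
                while queue:
                    cx, cy = queue.popleft()
                    members.extend(cells[(cx, cy)])
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            nb = (cx + dx, cy + dy)
                            if nb in cells and nb not in seen:
                                seen.add(nb)
                                queue.append(nb)
                objects.append(members)
            return objects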

  • Hideyuki Saito, Kazuyuki Kobayashi, Kajiro Watanabe, Tetsuo Kinoshita
    Article type: Paper
    2015 Volume 27 Issue 4 p. 382-391
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    The perception of color by the human eye differs from that of cameras, owing to the optical illusion and color constancy characteristics of human vision. In spite of these characteristics, people can drive cars safely. In this paper, we describe a new white lane detection algorithm for autonomous mobile robots, based on a method similar to the color perception of human beings. To drive safely, we emulate human color perception to reduce the effects of lighting and shadow on the course. The validity of the proposed image compensation method is confirmed by actual white line detection.
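
    The compensate-then-detect pipeline can be sketched with a gray-world normalization standing in for the paper's human-vision model (which the abstract does not specify), followed by a bright, low-saturation threshold for white:

        import numpy as np

        def gray_world(image):
            """image: (H, W, 3) float array in [0, 1]. Balance the channel means."""
            means = image.reshape(-1, 3).mean(axis=0)
            gain = means.mean() / np.maximum(means, 1e-9)
            return np.clip(image * gain, 0.0, 1.0)

        def white_line_mask(image, v_min=0.7, s_max=0.15):
            """Mark bright, weakly saturated pixels after compensation.
            Thresholds are illustrative assumptions."""
            img = gray_world(image)
            v = img.max(axis=-1)
            s = (v - img.min(axis=-1)) / np.maximum(v, 1e-9)
            return (v > v_min) & (s < s_max)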

  • Keita Kurashiki, Mareus Aguilar, Sakon Soontornvanichkit
    Article type: Paper
    2015 Volume 27 Issue 4 p. 392-400
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    Autonomous mobile robots have been an active research topic in recent years. In Japan, the Tsukuba Challenge has been held annually since 2007 to realize autonomous mobile robots that coexist safely with human beings in society. Spurred by this effort, laser range finder (LRF) based navigation has rapidly improved. A technical issue with these techniques is reducing the prior information required, because most of them rely on a precise 3D model of the environment, which is poor in both maintainability and scalability. On the other hand, despite intensive studies on vision-based navigation using cameras, no robot in the Challenge has achieved full camera navigation. In this paper, an image-based control law to follow the road boundary is proposed. This method is part of a topological navigation scheme intended to reduce prior information and enhance the scalability of the map. Because the controller is designed based on an interaction model between the robot motion and image features in the front image, the method is robust to camera calibration error. The proposed controller is tested through several simulations and indoor/outdoor experiments to verify its performance and robustness. Finally, our results in Tsukuba Challenge 2014 using the proposed controller are presented.
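
    As a rough illustration of boundary following, a proportional law on two image features (the boundary's lateral offset and slope in the front image) steers the robot at constant forward speed. The paper derives its controller from an interaction model, which this simple form only approximates, and the gains and references are assumptions:

        def steering_rate(rho, theta, rho_ref=0.3, theta_ref=0.0,
                          k_rho=1.5, k_theta=0.8):
            """rho:   lateral offset of the road boundary in the image, in [-1, 1]
            theta: slope of the boundary line in the image [rad]
            Returns an angular-velocity command [rad/s]."""
            return -k_rho * (rho - rho_ref) - k_theta * (theta - theta_ref)

        # Example: boundary drifted right of its reference -> steer back left.
        print(steering_rate(rho=0.5, theta=0.1))  # -> -0.38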

  • Yusuke Fujino, Kentaro Kiuchi, Shogo Shimizu, Takayuki Yokota, Yoji Ku ...
    Article type: Paper
    2015 Volume 27 Issue 4 p. 401-409
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    We propose a method for constructing a large three-dimensional (3D) map with an autonomous mobile robot whose navigation system supports the map construction. Maps are vital to autonomous navigation, but constructing and updating them while ensuring their accuracy is challenging because the navigation system itself usually requires accurate maps. We propose a navigation system that can explore areas not explored before. The proposed system mainly uses LIDARs for estimating the robot’s own position (localization), for recognizing the environment around the robot to create local maps (environment recognition), and for avoiding moving objects (motion planning). We constructed a detailed 3D map automatically from autonomous driving data to improve navigation accuracy without increasing the operator’s workload, confirming the feasibility of the proposed method through experiments.
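
    The map-from-driving-data step can be illustrated by transforming each local scan with the pose estimated during the run and accumulating the results into one downsampled cloud; the planar pose format and voxel size below are assumptions:

        import numpy as np

        def se2_transform(pose, scan):
            """pose: (x, y, yaw); scan: (N, 3) points in the sensor frame."""
            x, y, yaw = pose
            c, s = np.cos(yaw), np.sin(yaw)
            R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            return scan @ R.T + np.array([x, y, 0.0])

        def build_map(poses, scans, voxel=0.1):
            """Accumulate localized scans, keeping one point per occupied voxel."""
            cloud = np.vstack([se2_transform(p, s) for p, s in zip(poses, scans)])
            keys = np.unique(np.floor(cloud / voxel).astype(int), axis=0)
            return keys * voxel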

  • Masashi Yokozuka, Osamu Matsumoto
    Article type: Paper
    2015 Volume 27 Issue 4 p. 410-418
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    This paper studies an accurate localization method for making maps for mobile robots using odometry and a global positioning system (GPS) without scan matching, and investigates the GPS accuracy required for map-making. To generate accurate maps, SLAM techniques such as scan matching are usually used to obtain accurate positions. Scan matching is unstable, however, in complex environments and has a high computation cost. To avoid these problems, we studied accurate localization without scan matching. Loop closing is an important property in generating consistent maps; inconsistencies in maps prevent correct routes to destinations from being generated. Basically, our method adds scan data to a map along a trajectory given by odometry. Odometry accumulates errors due, e.g., to wheel slippage or wheel diameter variations. To remove this accumulated error, we used bundle adjustment, introducing two types of processing. The first is a simple manual input: the robot is moved to the same position at the start and the end, which is equivalent to having the robot return to its start position. The second uses a GPS device to improve map accuracy. Experiments showed that an accurate map can be generated using wheel-encoder odometry and a low-cost GPS device. The results were evaluated against a real-time kinematic (RTK) GPS device whose accuracy is within a few centimeters.
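
    The manual return-to-start constraint admits a very small sketch: since the final pose should coincide with the initial one, the accumulated odometry drift can be redistributed along the trajectory. The linear distribution below is a stand-in for the bundle adjustment used in the paper:

        import numpy as np

        def close_loop(positions):
            """positions: (N, 2) odometry track whose first and last entries
            should coincide. Spread the end-point error linearly over the run."""
            drift = positions[-1] - positions[0]
            weights = np.linspace(0.0, 1.0, len(positions))[:, None]
            return positions - weights * drift

        track = np.array([[0.0, 0.0], [1.0, 0.1], [0.1, 0.2]])
        print(close_loop(track)[-1])  # ends exactly at the start: [0. 0.]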

Regular papers
  • M. Reza Motamedi, David Florant, Vincent Duchaine
    Article type: Paper
    2015 Volume 27 Issue 4 p. 419-429
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    This paper presents a novel wearable haptic device that provides the user with knowledge of a vertical force, measured at the fingertips, by applying pressure at three different locations on the user’s body. Human prehension and manipulation abilities rely on the ability to convert tactile information into controlled actions, such as the regulation of gripping force. Current upper-limb prosthetics are able to partially replicate the mechanical functions of the human hand, but most do not provide any sensory information to the user. This greatly affects amputees, as they must rely solely on their vision to perform grasping actions. Our device uses a twisted wire actuator to convert rotational motion into linear displacement, which allows the device to remain compact and lightweight. In the past, the main shortcoming of this type of actuator was its limited linear range of motion, but with a slight modification of the principle, we have extended our actuator’s linear range of motion by 40%. In this paper, we present the design of our haptic device, the kinematic and dynamic modelling of the actuator, and the results of the experiments that were used to validate the system’s functionality.
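
    For context, the standard twisted-wire (twisted string) kinematics relate twist angle to contraction as dL = L - sqrt(L^2 - (theta*r)^2) for untwisted wire length L and wire radius r; the paper's modified, extended-range mechanism is not reproduced here, and the dimensions below are assumptions:

        import math

        def contraction(theta, L=0.20, r=0.0005):
            """Linear displacement [m] from twisting the wire pair by theta [rad]."""
            return L - math.sqrt(L**2 - (theta * r)**2)

        # Example: 10 turns of a 20 cm, 0.5 mm-radius pair pulls in ~2.5 mm.
        print(contraction(10 * 2 * math.pi))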

  • Jun Chen, Qingyi Gu, Tadayoshi Aoyama, Takeshi Takaki, Idaku Ishii
    Article type: Paper
    2015 Volume 27 Issue 4 p. 430-443
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    We present a blink-spot projection method for observing moving three-dimensional (3D) scenes. The proposed method can reduce the synchronization errors of the sequential structured light illumination, which are caused by multiple light patterns projected with different timings when fast-moving objects are observed. In our method, a series of spot array patterns, whose spot sizes change at different timings corresponding to their identification (ID) number, is projected onto scenes to be measured by a high-speed projector. Based on simultaneous and robust frame-to-frame tracking of the projected spots using their ID numbers, the 3D shape of the measuring scene can be obtained without misalignments, even when there are fast movements in the camera view. We implemented our method with a high-frame-rate projector-camera system that can process 512×512 pixel images in real-time at 500 fps to track and recognize 16×16 spots in the images. Its effectiveness was demonstrated through several 3D shape measurements when the 3D module was mounted on a fast-moving six-degrees-of-freedom manipulator.
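
    The ID-by-blink decoding can be pictured with a toy binary scheme: each spot's size is sampled once per frame over one code period, and the frames in which it is "large" spell out its ID. The paper's actual encoding is not specified here; this is an assumed illustration:

        def decode_id(sizes, threshold):
            """sizes: spot diameter per frame over one code period.
            Returns the ID read as a binary string (MSB first)."""
            bits = ''.join('1' if s > threshold else '0' for s in sizes)
            return int(bits, 2)

        print(decode_id([5.1, 2.0, 5.0, 2.2], threshold=3.5))  # -> 10 (0b1010)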

  • Tetsuya Kinugasa, Takashi Ito, Hiroaki Kitamura, Kazuhiro Ando, Shinsa ...
    Article type: Paper
    2015 Volume 27 Issue 4 p. 444-452
    Published: 2015/08/20
    Released on J-STAGE: 2019/07/01
    JOURNAL OPEN ACCESS

    In the last two decades, passive dynamic walking (PDW) has attracted considerable research attention. The assumption that biped walking is based on PDW is now widely accepted. PDW bipeds change their gait to adapt to changes in body configuration and environment, and their efficiency is extremely high. PDW is generally difficult to realize, however, because it lacks robustness. This means, for one thing, that biped walkers capable of PDW must be designed carefully. Once realized, however, PDW bipeds can be expected to have a configuration well suited to biped walking. This study aims to analyze the fundamental properties of 3D PDW with flat soles and ankle springs and to use these properties to extend it to walking on a horizontal surface. First, we develop a stable and relatively robust 3D PDW biped that has flat soles and ankle springs. Sensors are installed to measure the posture and the ZMP trajectory. Next, we investigate the relationship between the gait and the CoM position of each leg. We then attach a ballast to the leg to change the CoM, which effectively improves walking stability. Finally, a 3D active biped whose knees consist of telescopic joints is developed based on the 3D PDW biped.
