Journal of Robotics and Mechatronics
Online ISSN : 1883-8049
Print ISSN : 0915-3942
ISSN-L : 0915-3942
Volume 32, Issue 6
Displaying 1-20 of 20 articles from the selected issue
Special Issue on Real World Robot Challenge in Tsukuba and Osaka
  • Hisashi Date, Tomohito Takubo
    Article type: Editorial
    2020 Volume 32 Issue 6 Pages 1103
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    The Tsukuba Challenge is an open experiment of autonomous mobile robots in the real world. In its third stage since 2018, it is now to be held on a new course that starts at the Tsukuba City Hall. New tasks that require functions expected for autonomous travel in the real world have now been added, including passing checkpoints announced a day before the event, starting two vehicles simultaneously, traveling in an unmeasured environment, and strictly observing stop lines in the course. Also, in the spirit of the Tsukuba Challenge, the Nakanoshima Challenge, an open demonstration experiment project, has been held in the city of Osaka since 2018. As the only event in which autonomous mobile robots travel in the urban area of Osaka, the Nakanoshima Challenge is expected to identify new issues peculiar to autonomous navigation in real urban environments and to find solutions to them.

    This special issue includes a review paper on the Tsukuba Challenge, four research papers on the results of experiments done in the Tsukuba Challenge, four research papers related to the Nakanoshima Challenge, and three development reports. This special issue provides its readers with the frontline issues and the current status of development of autonomous mobile robots in real-world environments. We hope that the innovative efforts presented in this special issue will contribute to the development of science and industry.

  • Yoshitaka Hara, Tetsuo Tomizawa, Hisashi Date, Yoji Kuroda, Takashi Ts ...
    Article type: Review
    2020 Volume 32 Issue 6 Pages 1104-1111
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    This paper overviews Tsukuba Challenge 2019. The Tsukuba Challenge is an experiment in the autonomous navigation of mobile robots on public walkways, in which navigation tasks through pedestrian paths in the city are given. Participating teams develop their own robot hardware and software. Based on the records of the real-world experiments of Tsukuba Challenge 2019, we describe the aim of the task settings and analyze the experimental results of all the teams.

  • Kazuki Takahashi, Jumpei Arima, Toshihiro Hayata, Yoshitaka Nagai, Nao ...
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1112-1120
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    In this study, a novel framework for an autonomous robot navigation system is proposed. The navigation system uses an edge-node map, which can easily be created from electronic maps. Unlike general self-localization methods that use an occupancy grid map or a 3D point cloud map, there is no need to drive the robot through the target environment in advance to collect sensor data. In this system, internal sensors are mainly used for self-localization. Assuming that the robot is traveling on a road, its position is estimated by associating the robot's travel trajectory with an edge. In addition, node arrival is determined using branch-point information obtained from the edge-node map. Because this system does not use map matching, robust self-localization is possible even in dynamic environments.
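The core idea of localizing on an edge-node map can be sketched as tracking the robot's pose as an (edge, offset) pair updated from odometry alone. This is a minimal illustration, not the authors' implementation; the node coordinates, edge representation, and arrival test are assumptions.

```python
import math

class EdgeNodeLocalizer:
    """Track position as (edge, distance along edge) from odometry only --
    no map matching against external sensor data."""

    def __init__(self, nodes):
        self.nodes = nodes  # node name -> (x, y) in meters

    def length(self, edge):
        (xa, ya) = self.nodes[edge[0]]
        (xb, yb) = self.nodes[edge[1]]
        return math.hypot(xb - xa, yb - ya)

    def pose_on_edge(self, edge, s):
        """Interpolate the position after traveling s meters along the edge
        and report whether the end node has been reached."""
        a, b = self.nodes[edge[0]], self.nodes[edge[1]]
        length = self.length(edge)
        t = min(max(s / length, 0.0), 1.0)
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        arrived = s >= length  # node arrival determination
        return (x, y), arrived
```

Branch points would be handled by switching to the outgoing edge whose direction best matches the odometry heading once arrival is flagged.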

  • Yusuke Mori, Katashi Nagao
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1121-1136
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    To solve the problem of autonomously navigating multiple destinations, which is one of the tasks in the Tsukuba Challenge 2019, this paper proposes a method for automatically generating the optimal travel route based on costs associated with routes. In the proposed method, the route information is generated by playing back the acquired driving data to perform self-localization, and the self-localization log is stored. In addition, the image group of road surfaces is acquired from the driving data. The costs of routes are generated based on texture analysis of the road surface image group and analysis of the self-localization log. The cost-added route information is generated by combining the costs calculated by the two methods, and by assigning the combined costs to the route. The minimum-cost multidestination route is generated by conducting a route search using cost-added route information. Then, we evaluated the proposed method by comparing it with the method of generating the route using only the distance cost. The results confirmed that the proposed method generates travel routes that account for safety when the autonomous wheelchair is being driven.
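The final route search over cost-added route information is a standard minimum-cost graph search; a generic sketch follows. The graph shape and the assumption that each edge weight already combines distance and safety costs are illustrative, not taken from the paper.

```python
import heapq

def min_cost_route(graph, start, goal):
    """Dijkstra search. graph: {node: [(neighbor, cost), ...]}, where each
    cost is assumed to be a combined distance + safety weight."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back from the goal
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[goal]
```

A multi-destination route can then be assembled by running this search between consecutive destinations.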

  • Ryusuke Miyamoto, Miho Adachi, Hiroki Ishida, Takuto Watanabe, Kouchi ...
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1137-1153
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    The most popular external sensor for robots capable of autonomous movement is 3D LiDAR. However, cameras are typically installed on robots that operate in environments where humans live their daily lives to obtain the same information that is presented to humans, even though autonomous movement itself can be performed using only 3D LiDAR. The number of studies on autonomous movement for robots using only visual sensors is relatively small, but this type of approach is effective at reducing the cost of sensing devices per robot. To reduce the number of external sensors required for autonomous movement, this paper proposes a novel visual navigation scheme using only a monocular camera as an external sensor. The key concept of the proposed scheme is to select a target point in an input image toward which a robot can move based on the results of semantic segmentation, where road following and obstacle avoidance are performed simultaneously. Additionally, a novel scheme called virtual LiDAR is proposed based on the results of semantic segmentation to estimate the orientation of a robot relative to the current path in a traversable area. Experiments conducted during the course of the Tsukuba Challenge 2019 demonstrated that a robot can operate in a real environment containing several obstacles, such as humans and other robots, if correct results of semantic segmentation are provided.
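The "virtual LiDAR" concept can be illustrated by casting rays across a binary traversable-area mask (as produced by semantic segmentation) and recording the distance to the first non-traversable pixel. The ray model, pixel units, and parameters below are assumptions for illustration; the paper's actual implementation may differ.

```python
import numpy as np

def virtual_lidar(traversable, num_rays=3, fov_deg=90.0):
    """Cast rays from the bottom-center of a binary mask (1 = traversable)
    and return the pixel distance to the first blocked cell per ray."""
    h, w = traversable.shape
    origin = np.array([h - 1.0, (w - 1) / 2.0])  # (row, col)
    angles = np.deg2rad(np.linspace(-fov_deg / 2.0, fov_deg / 2.0, num_rays))
    ranges = []
    for a in angles:
        step = np.array([-np.cos(a), np.sin(a)])  # one pixel toward the top
        pos = origin.copy()
        r = 0.0
        while True:
            nxt = pos + step
            i, j = int(round(nxt[0])), int(round(nxt[1]))
            if i < 0 or i >= h or j < 0 or j >= w or traversable[i, j] == 0:
                break
            pos, r = nxt, r + 1.0
        ranges.append(r)
    return np.array(ranges)
```

The resulting pseudo-scan can feed the same path-orientation estimation that a real planar LiDAR would.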

  • Susumu Tarao, Yasunori Fujiwara, Naoaki Tsuda, Soichiro Takata
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1154-1163
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    In the near future, autonomous mobile robots are expected to operate effectively in various locations, such as living spaces as well as industrial establishments. Against this background, a new autonomous mobile robot platform was designed and prototyped in this research. For simplicity of design and easy assembly of the drive units, a robot with two low-end in-wheel motors is considered. The design also saves space and supports high-power operation and travel over various road surface conditions. This paper presents a concept for developing a new type of autonomous mobile robot platform, its control system for autonomous operation, actual prototyping using this platform, and sample applications of the platform.

  • Renato Miyagusuku, Yuki Arai, Yasunari Kakigi, Takumi Takebayashi, Aki ...
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1164-1172
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    The practical application of robotic technologies can significantly reduce the burden on human workers, which is particularly important when considering the declining birthrates and aging populations in Japan and around the world. In this paper, we present our work toward realizing one such application, namely outdoor autonomous garbage collection robots. We address issues related to outdoor garbage recognition and autonomous navigation (mapping, localization, and re-localization) in crowded outdoor environments and areas with different terrain elevations. Our approach was experimentally validated in real urban settings during the Nakanoshima Challenge and Nakanoshima Challenge – Extra Challenge, where we managed to complete all tasks.

  • Shunya Hara, Toshihiko Shimizu, Masanori Konishi, Ryotaro Yamamura, Sh ...
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1173-1182
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    The Nakanoshima Challenge is a contest for developing sophisticated navigation systems for robots that collect garbage in outdoor public spaces. In this study, a robot named Navit(oo)n is designed, and its performance in public spaces such as city parks is evaluated. Navit(oo)n contains two 2D LiDAR scanners with a uniaxial gimbal mechanism, which improves the robustness of self-localization on slopes. The gimbal mechanism adjusts the angle of the LiDAR scanner, preventing erroneous ground detection. We evaluate the navigation performance of Navit(oo)n in the Nakanoshima Challenge and its Extra Challenge.

  • Yuichi Tazaki, Yasuyoshi Yokokohji
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1183-1192
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    In this paper, an autonomous navigation method that utilizes proximity points of 3D range data is proposed for use in mobile robots. Some useful geometric properties of proximity points are derived, and a computationally efficient algorithm for extracting such points from 3D point clouds is presented. Unlike previously proposed keypoints, the proximity point does not require any computationally expensive analysis of the local curvature, and it is useful for detecting reliable keypoints in environments where objects with definite curvatures, such as edges and flat surfaces, are scarce. Moreover, a particle-filter-based self-localization method that uses proximity points as a similarity measure for observations is presented. The proposed method was implemented in a real mobile robot system, and its performance was tested in an outdoor experiment conducted during Nakanoshima Challenge 2019.

  • Shunya Tanaka, Yuki Inoue
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1193-1199
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    An omnidirectional camera can simultaneously capture all-round (360°) environmental information, including the azimuth angle of a target object or person. By configuring a stereo camera set from two omnidirectional cameras, the azimuth angle of the target can easily be determined for each camera from the images captured by the left and right cameras. A target person in an image can then be localized using a region-based convolutional neural network, with the distance measured from the parallax between the two azimuth angles.
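Recovering distance from two azimuth angles and a known baseline is plain triangulation; a minimal sketch under an assumed geometry (cameras at (0, 0) and (baseline, 0), bearings measured from the common forward axis):

```python
import math

def azimuth_triangulate(baseline, az_left, az_right):
    """Intersect the two bearing rays.  Left camera at the origin, right
    camera at (baseline, 0); angles in radians, positive toward the right.
    Returns (lateral offset, depth) of the target."""
    denom = math.tan(az_left) - math.tan(az_right)
    depth = baseline / denom          # distance along the forward axis
    lateral = depth * math.tan(az_left)
    return lateral, depth
```

As the two bearings approach each other (distant targets), `denom` shrinks and the depth estimate becomes sensitive to angular error, which is the usual stereo range limitation.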

  • Jingwei Xue, Zehao Li, Masahito Fukuda, Tomokazu Takahashi, Masato Suz ...
    Article type: Development Report
    2020 Volume 32 Issue 6 Pages 1200-1210
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    Object detectors using deep learning are currently used in various situations, including robot demonstration experiments, owing to their high accuracy. However, creating training data poses problems: human annotation requires considerable labor, and the method of providing training data must be considered carefully because recognition accuracy decreases under environmental changes such as lighting. In the Nakanoshima Challenge, an autonomous mobile robot competition, one task is to detect three types of garbage marked with red labels. In this study, we developed a garbage detector by semi-automating the annotation process through color-based detection of the labels and by preparing training data under three lighting conditions of different brightness. We evaluated the recognition accuracy on the university campus and used the detector in the competition. In this paper, we report these results.
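Semi-automatic annotation by color can be illustrated as thresholding for strongly red pixels and taking their bounding box as a label proposal. The thresholds and the single-box simplification are assumptions for illustration, not the report's actual pipeline.

```python
import numpy as np

def red_label_box(rgb, r_min=150, dominance=1.5):
    """Return (x_min, y_min, x_max, y_max) of pixels whose red channel is
    both bright and dominant over green and blue, or None if none found."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mask = (r > r_min) & (r > dominance * g) & (r > dominance * b)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Proposals like this would still be reviewed by a human, which is why the report describes the process as semi-automated rather than automated.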

  • Tomohiro Umetani, Yuya Kondo, Takuma Tokuda
    Article type: Development Report
    2020 Volume 32 Issue 6 Pages 1211-1218
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    Automated mobile platforms are commonly used to provide services for people in an intelligent environment. Data on the physical position of personal electronic devices or mobile robots are important for information services and robotic applications. Therefore, automated mobile robots are required to reconstruct location data in surveillance tasks. This paper describes the development of an autonomous mobile robot to achieve tasks in intelligent environments. In particular, the robot constructed route maps in outdoor environments using laser imaging detection and ranging (LiDAR), and RGB-D sensors via simultaneous localization and mapping. The mobile robot system was developed based on a robot operating system (ROS), reusing existing software. The robot participated in the Nakanoshima Challenge, which is an experimental demonstration test of mobile robots in Osaka, Japan. The results of the experiments and outdoor field tests demonstrate the feasibility of the proposed robot system.

  • Masahito Fukuda, Tomokazu Takahashi, Masato Suzuki, Yasushi Mae, Yasuh ...
    Article type: Development Report
    2020 Volume 32 Issue 6 Pages 1219-1228
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    At present, various robotics competitions are being held, including the Tsukuba Challenge. The purpose of participating in a robotics competition is to confirm what can be done with current technology and to demonstrate new research results. Participating teams often use open source software (OSS) for path planning and autonomous navigation. OSS is advantageous in facilitating participation in robotics competitions. However, applying OSS to a new robot is difficult when the new research results are not built on it. In addition, robot systems do not consist only of OSS, and the burden of developing and maintaining the remaining components is significant. To solve these problems, a software platform that allows the research achievements for individual robots to be added is desirable. With such a platform, research elements that have already been developed can be shared without developing a new system, which makes the system easier to maintain and manage and increases its sustainability.

Special Issue on Activity of Research Center - The University of Tokyo: Corporate Sponsored Research Program "Construction System Management for Innovation"
  • Keiji Nagatani, Atsushi Yamashita, Kazumasa Ozawa
    Article type: Institute Overview
    2020 Volume 32 Issue 6 Pages 1230-1232
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    In October 2018, a Corporate Sponsored Research Program, called “Construction System Management for Innovation,” was established at the School of Engineering, The University of Tokyo. The purposes of this program are (1) to research and develop a system to realize “i-Construction,” which can improve productivity at construction sites by utilizing technology, and (2) to develop professionals who practice this system. This article provides a brief preface on the policies, research themes, seminars, and future targets of the program.

  • Tatsuki Nagano, Ryosuke Yajima, Shunsuke Hamasaki, Keiji Nagatani, Ale ...
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1233-1243
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    In this paper, we propose a visualization system for the teleoperation of excavation work using a hydraulic excavator. An arbitrary-viewpoint visualization system enables teleoperators to observe the environment around a machine by combining multiple camera images. However, when such a system is applied to machines with arms (such as hydraulic excavators), part of the field of view is shielded by the image of the excavator’s arm; hence, an occlusion occurs behind the arm. Furthermore, it is difficult for teleoperators to understand the three-dimensional (3D) condition of the excavating point because the current system approximates the surrounding environment with a predetermined shape. To solve these problems, we propose two methods: (1) a method to reduce the occluded region and expand the field of view, and (2) a method to measure the 3D information of the excavating point and integrate it into the image. In addition, we conduct experiments using a real hydraulic excavator and demonstrate that an image of sufficient accuracy can be presented in real time.

  • Pang-jo Chun, Ji Dang, Shunsuke Hamasaki, Ryosuke Yajima, Toshihiro Ka ...
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1244-1258
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    In recent years, the aging of bridges has become a growing concern, and the danger of bridge collapse is increasing. To maintain bridges appropriately, it is necessary to perform inspections to accurately understand their current state. Until now, bridge inspections have involved visual inspection, in which inspection personnel approach the bridge, together with hammering tests that investigate abnormal noises by striking the bridge with an inspection hammer. Meanwhile, as there are a large number of bridges (for example, 730,000 in Japan), many of which are constructed at elevated spots, visual inspections are laborious and require a huge cost. Another issue is the wide disparity in the quality of visual inspections due to the experience, knowledge, and competence of inspectors. Accordingly, the authors are trying to resolve or ameliorate these issues using unmanned aerial vehicle (UAV) technology, artificial intelligence (AI) technology, and telecommunications technology. This is discussed first in this paper. Next, the authors discuss the future prospects of bridge inspection using robot technology, such as 3D models of bridges. The goal of this paper is to show the areas in which the deployment of UAVs, robots, telecommunications, and AI is beneficial, as well as the requirements of these technologies.

Regular Papers
  • Rongmin Zhang, Shasha Zhou
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1259-1267
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    This paper investigates the hydrodynamic performance of the Ka4-70+No.19A ducted propeller astern of a vectored underwater robot at diverse deflection angles. Employing the SST k-ω turbulence model combined with the moving reference frame technique, a numerical computation of the ducted propeller in fully developed turbulence behind the hull was carried out. The validity of the model was verified by comparing the numerical results of open-water performance with experimental values. The hydrodynamic performance of the ducted propeller was computed and discussed in detail, and the wake flow and thrust deduction fraction corresponding to different deflection angles were analyzed. The results show that the ducted propeller generates more thrust and requires more torque at larger deflection angles. In addition, the thrust deduction fraction increases with the deflection angle.
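Open-water performance of a propeller is conventionally reported with standard non-dimensional coefficients; the helper below uses those textbook definitions (it is not code from the paper, and the symbols are the conventional ones, not values from this study).

```python
import math

def open_water_coefficients(thrust, torque, rho, n, d, v_a):
    """Standard open-water propeller coefficients:
       K_T  = T / (rho n^2 D^4)        thrust coefficient
       K_Q  = Q / (rho n^2 D^5)        torque coefficient
       J    = V_a / (n D)              advance ratio
       eta0 = J K_T / (2 pi K_Q)       open-water efficiency
    with n in rev/s, D in m, rho in kg/m^3, V_a in m/s."""
    kt = thrust / (rho * n**2 * d**4)
    kq = torque / (rho * n**2 * d**5)
    j = v_a / (n * d)
    eta0 = j * kt / (2.0 * math.pi * kq)
    return kt, kq, j, eta0
```

Validation against experiment, as described in the abstract, typically compares computed K_T and K_Q curves over a range of advance ratios J with towing-tank measurements.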

  • Nobuto Hirakoso, Ryoichiro Tamura, Yoichi Shigematsu
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1268-1278
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    In this paper, an autonomous aerial robot system with a multirotor mechanism is described, where the robot has an arbitrary configuration of rotors. To construct a navigation system for an arbitrary 3-axis direction, the static constraint conditions are treated as dynamic equilibrium, and the analytical solution of this formulation is obtained with regard to two terms, namely attitude and height control. Moreover, the obtained analytical solution is implemented as a proportional-integral-derivative controller such that the navigation control system is fused optimally with the attitude and height control systems. To confirm the efficacy of the constructed navigation control system, navigation experiments with arbitrary azimuth direction and height are executed on a trial quadrotor system built as an aerial robot, and the results are evaluated.

  • Takuya Fujinaga, Shinsuke Yasukawa, Kazuo Ishii
    Article type: Paper
    2020 Volume 32 Issue 6 Pages 1279-1291
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    To realize smart agriculture, we engaged in its systematization, from monitoring to harvesting tomato fruits using robots. In this paper, we explain a method of generating a map of tomato growth states to monitor the various stages of tomato fruits and decide a harvesting strategy for the robots. The tomato growth state map visualizes the relationship between the maturity stage, harvest time, and yield. We propose a method for generating the tomato growth state map, a method for recognizing tomato fruits, and a method for estimating the growth states (maturity stages and harvest times). For tomato fruit recognition, we demonstrate that a simple machine learning method using a limited learning dataset and the optical properties of tomato fruits in infrared images outperforms a more complex convolutional neural network, although the results depend on how the training dataset is created. For the estimation of the growth states, we conducted a survey of experienced farmers to quantify the maturity stages into six classifications and the harvest times into three terms. The growth states were estimated based on the survey results. To verify the tomato growth state map, we conducted experiments in an actual tomato greenhouse and herein report the results.

  • Kenta Suzuki, Kuniaki Kawabata
    Article type: Development Report
    2020 Volume 32 Issue 6 Pages 1292-1300
    Published: December 20, 2020
    Released on J-STAGE: December 20, 2020
    JOURNAL OPEN ACCESS

    This paper describes the development of a robot simulator for remote decommissioning tasks using remotely operated robots at the Fukushima Daiichi Nuclear Power Station of the Tokyo Electric Power Company Holdings. The robot simulator was developed to provide a remote operation training environment to ensure operator proficiency. The developed simulator allows for the calculation of physical aspects, such as the hydrodynamics of a remotely operated vehicle and the aerodynamics of an unmanned aerial vehicle. A disturbed camera view presented to an operator can be generated by setting parameters such as transparency, color, distortion, and noise. We implemented a communication failure emulator on the simulator in addition to functionalities for calculating the integral dose and generating the gamma camera image. We discuss the functional requirements and introduce the implemented functionalities. The simulator was built using the developed functions and can be executed integrally.
