Journal of the Robotics Society of Japan
Online ISSN : 1884-7145
Print ISSN : 0289-1824
ISSN-L : 0289-1824
Volume 39, Issue 5
Paper
  • Shinya Kobayashi, Ryota Yatagai, Tadashi Egami
    2021 Volume 39 Issue 5 Pages 445-454
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS

    We developed a novel robot hand using an iris mechanism similar to that used as a lens aperture in cameras. An iris mechanism is useful because it can open and close from all directions, but it has not been considered practical for a robot hand because the iris blades are thin and overlap one another. In this research, we developed a structure using thick blades that do not overlap, thereby allowing the mechanism to be used as a robot hand. The resulting mechanism is driven by a single actuator. Because objects can be grasped from all directions, iris robot hands can grasp any object placed in the hand opening. We developed two types of iris robot hands: a sliding type driven by a slider and a swinging type driven by gear rotations. We then analyzed the developed iris robot hands to verify their characteristics and effectiveness.

    Download PDF (4086K)
  • Takaaki Sato, Masato Mizukami, Shoji Mochizuki, Masayuki Tsuda
    2021 Volume 39 Issue 5 Pages 455-458
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS

    In recent years, the aging of underground infrastructure has made efficient inspection necessary. To improve the inspection efficiency of outdoor infrastructure facilities, an omni-directional mobile robot that can move freely in all directions has been investigated. We measured the acceleration time response of omni-wheels to investigate the effect of the vibration generated by the omni-directional movement mechanism, and evaluated the acceleration characteristics with respect to the direction of movement and the rotational speed of the omni-wheels. It was confirmed that periodic vibration occurs for each moving direction of the omni-wheel, and that once the moving direction and running speed are determined, the vibration input to the omni-directional movement mechanism can be estimated (an illustrative sketch of this relationship follows this entry).

    Download PDF (1463K)
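
The dependence of the vibration on the moving direction and wheel speed described in the entry above can be illustrated with a small kinematic sketch: for a given body velocity, each omni-wheel turns at a different angular speed, so the roller-passing frequency, a common source of periodic vibration in omni-wheels, differs per wheel and per heading. The three-wheel layout, wheel radius, and roller count below are illustrative assumptions, not parameters from the paper.

import numpy as np

# Illustrative parameters (assumptions, not values from the paper)
WHEEL_RADIUS = 0.05          # [m]
ROBOT_RADIUS = 0.20          # [m] distance from robot centre to each wheel
NUM_ROLLERS = 12             # rollers around one omni-wheel
WHEEL_ANGLES = np.deg2rad([0.0, 120.0, 240.0])  # 3-wheel omni layout

def wheel_speeds(vx, vy, omega_z):
    """Inverse kinematics of a 3-wheel omni platform: body velocity -> wheel angular speeds [rad/s]."""
    w = []
    for th in WHEEL_ANGLES:
        # drive direction of each wheel is tangential to the mounting circle
        v_wheel = -np.sin(th) * vx + np.cos(th) * vy + ROBOT_RADIUS * omega_z
        w.append(v_wheel / WHEEL_RADIUS)
    return np.array(w)

def roller_passing_frequencies(vx, vy, omega_z=0.0):
    """Expected fundamental vibration frequency [Hz] of each wheel,
    taken here as the roller-passing frequency."""
    w = wheel_speeds(vx, vy, omega_z)
    return NUM_ROLLERS * np.abs(w) / (2.0 * np.pi)

if __name__ == "__main__":
    speed = 0.3  # [m/s]
    for heading_deg in (0, 30, 45, 90):
        th = np.deg2rad(heading_deg)
        f = roller_passing_frequencies(speed * np.cos(th), speed * np.sin(th))
        print(f"heading {heading_deg:3d} deg -> wheel vibration freq [Hz]: {np.round(f, 2)}")

Sweeping the heading angle in the example changes the per-wheel frequencies, which is the kind of direction dependence the measurements in the paper evaluate.
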
  • Kimiko Motonaka, Seiji Miyoshi
    2021 Volume 39 Issue 5 Pages 459-462
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS

    D. Zhou et al. proposed an algorithm based on buffered Voronoi cells (BVC) that allows quadrotors to reach their given target positions without mutual collision. However, that method assumes that all quadrotor positions are known and that the control inputs for all quadrotors are computed on a single computer. In this paper, we describe a method that computes the Voronoi diagram using only the obstacle information around each quadrotor in an unknown environment and drives it to a given target position (a sketch of the basic BVC idea follows this entry). The usability of this method is confirmed by simulation.

    Download PDF (1706K)
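
As background for the entry above, the sketch below shows the basic buffered-Voronoi-cell idea in 2D: each robot is constrained to the half-planes lying closer to itself than to its neighbours, shifted inward by a safety buffer, and a waypoint toward the target is chosen inside that cell. This is a simplified illustration of the general BVC concept, not the authors' quadrotor controller or their variant that uses only locally observed obstacle information.

import numpy as np

def bvc_halfplanes(p_i, neighbors, buffer_r):
    """Buffered Voronoi cell of robot i as half-planes a.x <= b.
    Each neighbour contributes its bisecting plane shifted towards robot i by buffer_r."""
    planes = []
    for p_j in neighbors:
        d = p_j - p_i
        dist = np.linalg.norm(d)
        a = d / dist                          # outward unit normal (towards the neighbour)
        midpoint = (p_i + p_j) / 2.0
        b = a @ midpoint - buffer_r           # shift the bisector towards robot i
        planes.append((a, b))
    return planes

def waypoint_in_bvc(target, planes, iters=200):
    """Find a point of the cell near the target by repeatedly projecting
    the target back behind any violated half-plane (simple illustrative scheme)."""
    x = target.astype(float).copy()
    for _ in range(iters):
        violated = False
        for a, b in planes:
            excess = a @ x - b
            if excess > 0.0:
                x -= (excess + 1e-9) * a      # project onto the violated plane
                violated = True
        if not violated:
            break
    return x

if __name__ == "__main__":
    p_i = np.array([0.0, 0.0])
    neighbors = [np.array([1.0, 0.2]), np.array([-0.5, 1.0])]
    target = np.array([2.0, 0.0])
    planes = bvc_halfplanes(p_i, neighbors, buffer_r=0.2)
    print("waypoint inside the buffered Voronoi cell:",
          np.round(waypoint_in_bvc(target, planes), 3))
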
  • —Outline of Hardware and Strategy for Imitating Shape of Human Hand—
    Hidetoshi Ikeda, Yudai Yamaguchi, Ryo Ueda, Takumi Saeki, Masahiro Sak ...
    2021 Volume 39 Issue 5 Pages 463-466
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS
    Supplementary material

    This paper proposes a six-degree-of-freedom robot hand with a foldable planar mechanism. The robot hand, which has two plates that serve as fingers, imitates the shape of a human hand so that it can manipulate various objects. We show the mechanism and the strategy used by the robot hand to handle various objects. Experimental results for pinching and grasping an object demonstrate the effectiveness of the proposed system.

    Download PDF (2188K)
  • Toshiki Fujishiro, Tadayoshi Aoyama, Kazuki Hano, Masaki Takasu, Masar ...
    2021 Volume 39 Issue 5 Pages 467-470
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS
    Supplementary material

    This study proposes a micromanipulation system that improves depth visibility through real-time 3D image presentation. A calibration method that adjusts the relative position and orientation between a target object and the micromanipulators is implemented in the system; the system then presents reconstructed 3D images of the target and the micromanipulators (an illustrative calibration sketch follows this entry). The effectiveness of the proposed system is demonstrated through manipulation of an embryo.

    Download PDF (1308K)
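
The calibration mentioned in the entry above, adjusting the relative position and orientation between the target and the micromanipulators, is at its core a rigid alignment problem. The sketch below shows a generic SVD-based (Kabsch) solution from corresponding 3D points; the point sets and frames are hypothetical, and this is not the authors' actual calibration procedure.

import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    estimated from corresponding 3D points via the SVD-based Kabsch method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

if __name__ == "__main__":
    # Hypothetical fiducial points seen in the manipulator frame (src)
    # and in the camera/stage frame (dst).
    rng = np.random.default_rng(0)
    src = rng.uniform(-1.0, 1.0, size=(6, 3))
    angle = np.deg2rad(20.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.1, -0.05, 0.2])
    dst = src @ R_true.T + t_true
    R, t = rigid_transform(src, dst)
    print("rotation error:", np.linalg.norm(R - R_true))
    print("translation error:", np.linalg.norm(t - t_true))
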
  • Ryota Yamamura, Tsuyoshi Tasaki
    2021 Volume 39 Issue 5 Pages 471-474
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS

    For self-localization using 3D maps and camera images, we have worked on a realignment task whose errors are larger than those handled by conventional DNNs that match 3D maps to camera images. To address this task, we focused on the registration characteristics of CalibNet, which realigns them mainly along the height axis of the vehicle. By devising the errors given to the training data for CalibNet, we trained a network that performs registration focusing on the lateral and depth axes of the vehicle (an illustrative sketch of such training-data perturbation follows this entry). Combining CalibNet with our network in a two-step correction, we improved the error by a factor of 1.4 on average over the vehicle axes when the center error between the 3D map and the camera images is at most 55[cm].

    Download PDF (1523K)
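
The step of devising the errors given to the training data, mentioned in the entry above, can be illustrated as sampling perturbed extrinsic parameters whose translation error is concentrated on chosen vehicle axes. The axis convention (x lateral, y height, z depth), the error ranges, and the helper function are assumptions made for this sketch, not the authors' data-generation code.

import numpy as np

def perturbed_extrinsic(max_lateral=0.55, max_depth=0.55, max_height=0.05,
                        max_rot_deg=2.0, rng=None):
    """Sample a 4x4 homogeneous perturbation whose translation error is
    concentrated on the lateral (x) and depth (z) axes of the vehicle,
    with only a small error on the height (y) axis."""
    if rng is None:
        rng = np.random.default_rng()
    t = np.array([rng.uniform(-max_lateral, max_lateral),   # lateral
                  rng.uniform(-max_height,  max_height),    # height
                  rng.uniform(-max_depth,   max_depth)])    # depth
    # small rotation about a random axis (Rodrigues formula)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # one perturbed extrinsic per training sample
    for T in (perturbed_extrinsic(rng=rng) for _ in range(3)):
        print(np.round(T, 3))
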
  • Haruto Saito, Satoshi Hoshino
    2021 Volume 39 Issue 5 Pages 475-478
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS

    In this paper, we focus on a robotic patrolling system. Robots are required to detect changes in the patrolled environment, such as lost or stolen items. For this mission, an omnidirectional camera is mounted on a mobile robot. In order for the patrolling robot to detect such items, we propose an image subtraction method (a minimal sketch of the idea follows this entry). In the experiments, we show that a robot based on the proposed method is able to detect not only a small object, e.g., an iPhone, but also a fire extinguisher removed from its original location.

    Download PDF (868K)
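
A minimal form of the image subtraction method from the entry above can be sketched with OpenCV: a reference image of the scene is compared with the current image, and connected regions whose difference exceeds a threshold are reported as changes. The file names and thresholds are placeholders, and handling an omnidirectional camera and viewpoint changes during patrol would require alignment steps not shown here.

import cv2
import numpy as np

def detect_changes(reference_bgr, current_bgr, diff_thresh=30, min_area=200):
    """Return bounding boxes of regions that differ between a reference
    image and the current image (simple absolute-difference subtraction)."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    ref = cv2.GaussianBlur(ref, (5, 5), 0)
    cur = cv2.GaussianBlur(cur, (5, 5), 0)
    diff = cv2.absdiff(ref, cur)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    # Placeholder file names; in practice these would be frames captured
    # at the same pose during two different patrols.
    reference = cv2.imread("reference.png")
    current = cv2.imread("current.png")
    for (x, y, w, h) in detect_changes(reference, current):
        print(f"changed region at x={x}, y={y}, w={w}, h={h}")
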
  • Yusuke Yoshida, Satoshi Hoshino
    2021 Volume 39 Issue 5 Pages 479-482
    Published: 2021
    Released on J-STAGE: June 22, 2021
    JOURNAL FREE ACCESS

    For autonomous navigation of mobile robots, obstacle avoidance that takes the destination into account is an essential capability. In this paper, we focus on a mobile robot equipped with an RGB-D camera and LiDAR sensors, and propose an end-to-end motion planner based on a convolutional neural network (CNN) trained through imitation learning. In order for the robot to avoid various obstacles, we generate novel object detection images from the original RGB images. The object detection and depth images are then fed as inputs to the CNN. Moreover, a direction angle to the destination is fed into a fully connected layer (an illustrative sketch of this architecture follows this entry). In the navigation experiments, we show that a robot based on the proposed motion planner is able to move toward the goal destination while avoiding collisions with various obstacles.

    Download PDF (1033K)
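
The architecture outlined in the entry above, two image inputs plus a direction angle injected at the fully connected stage, can be sketched in PyTorch roughly as below. The layer sizes, channel counts, and two-dimensional velocity output are illustrative assumptions rather than the network reported in the paper.

import torch
import torch.nn as nn

class MotionPlannerCNN(nn.Module):
    """Illustrative end-to-end planner: an object-detection image and a depth
    image pass through separate conv encoders, and the direction angle to the
    destination is concatenated at the fully connected stage."""
    def __init__(self):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
        self.det_enc = encoder(3)      # object-detection image branch (RGB)
        self.depth_enc = encoder(1)    # depth image branch
        self.fc = nn.Sequential(
            nn.Linear(64 * 4 * 4 * 2 + 1, 256), nn.ReLU(),
            nn.Linear(256, 2),          # e.g. linear and angular velocity
        )

    def forward(self, det_img, depth_img, direction_angle):
        f1 = self.det_enc(det_img).flatten(1)
        f2 = self.depth_enc(depth_img).flatten(1)
        x = torch.cat([f1, f2, direction_angle.unsqueeze(1)], dim=1)
        return self.fc(x)

if __name__ == "__main__":
    model = MotionPlannerCNN()
    det = torch.randn(1, 3, 120, 160)       # object-detection image
    depth = torch.randn(1, 1, 120, 160)     # depth image
    angle = torch.tensor([0.3])             # direction angle [rad] to the goal
    print(model(det, depth, angle).shape)   # -> torch.Size([1, 2])

In imitation learning, the output would be regressed against the expert's velocity commands; that training loop is omitted here.
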