Journal of Robotics and Mechatronics
Online ISSN : 1883-8049
Print ISSN : 0915-3942
ISSN-L : 0915-3942
Volume 30, Issue 2
Displaying 1-17 of 17 articles from this issue
Special Issue on Advanced Robotics in Agriculture, Forestry and Fisheries
  • Kazuo Ishii, Eiji Hayashi, Norhisam Bin Misron, Blair Thornton
    Article type: Editorial
    2018 Volume 30 Issue 2 Pages 163-164
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    The importance of the primary industries, agriculture, forestry, and fisheries, is obvious and needs no elaboration; however, the shrinking and aging working population is making their situation more severe. To address these issues, advanced robotic technology has attracted attention and is expected to contribute to productivity, cost effectiveness, pesticide-free cultivation, growth monitoring, harvesting, and more. Robotic technologies are gradually being adopted in the primary industries, and their application areas will expand further in the near future. The objectives of this special issue are to collect recent advances in automation and mechanization, research trends, and their applications in agriculture, forestry, and fisheries; to promote a deeper understanding of the major conceptual and technical challenges; to facilitate the spread of recent breakthroughs in the primary industries; and to contribute to enhancing the quality of agricultural, forestry, and fisheries robots by introducing the state of the art in sensing, mobility, manipulation, and related technologies.

    This special issue includes twelve papers. The first paper, by Noguchi, surveys the state of the art in agricultural vehicle-type robots and discusses the future scope of robotics in agriculture. The next three papers concern tomato-monitoring systems: Fukui et al. propose a tomato fruit volume estimation method using saliency-based image processing, point clouds, and clustering; Yoshida et al. identify cutting points for tomato harvesting using an RGB-D sensor and evaluate the method in real farm experiments; and Fujinaga et al. present a method for mosaicking images of a tomato yard based on infrared and color images of tomato clusters in a large greenhouse. The fifth paper, by Sori et al., reports a paddy weeding robot for wet-rice fields that aims to realize pesticide-free rice production, and the sixth paper, by Shigeta et al., describes an image processing system that automatically measures a cow's BCS (Body Condition Score) before milking and analyzes two months of data with a CNN (Convolutional Neural Network). The seventh paper, by Inoue et al., reports an upper-limb power assist robot with a single actuator designed to reduce weight and cost; the assist machine supports shoulder and elbow movements for viticulture operations and upper-limb holding for load transport tasks. In the next paper, Tominaga et al. present an autonomous robotic system for the forestry industry that moves between trees without damaging them and cuts the weeds in the forest. The last four papers address the fishery industry: Komeyama et al. propose a method for monitoring fish size in red sea bream (RSB) aquaculture by developing a stereo vision system that avoids the risks of physical injury and mental stress to the fish; Nishida et al. report a hovering-type underwater robot for seafloor measurement and marine resource monitoring, designed as an open hardware system whose sensors can be replaced depending on the mission; Yasukawa et al. propose a vision system for an autonomous underwater robot with a benthos sampling function, specifically a sampling autonomous underwater vehicle (SAUV), to achieve a new sampling mission; and the last paper, by Han et al., presents gait planning and simulation analysis of an amphibious quadruped robot for fisheries and aquaculture.

    We hope that this special issue contributes to finding solutions for the primary industries: agriculture, forestry, and fisheries.

    Download PDF (200K)
  • Noboru Noguchi
    Article type: Review
    2018 Volume 30 Issue 2 Pages 165-172
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    With the intensive application of techniques in global positioning, machine vision, image processing, sensor integration, and computing-based algorithms, vehicle automation is one of the most pragmatic branches of precision agriculture and has evolved from a concept into worldwide practice. This paper addresses the application of robot vehicles in agriculture using new technologies.

    Download PDF (3735K)
  • Rui Fukui, Kenta Kawae, Shin’ichi Warisawa
    Article type: Development Report
    2018 Volume 30 Issue 2 Pages 173-179
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    Recently, the promotion of data mining in Japanese agriculture has become noteworthy. The purpose of such data mining is to transform the knowledge and know-how of experienced farmers into an explicit form. In particular, building a tomato cultivation database requires acquiring growth data not only for red mature tomatoes but also for green immature ones. We are developing a tomato volume estimation robot that actively searches for an appropriate measurement position. While patrolling a tomato bed, the robot first detects a tomato using saliency-based image processing. When a tomato has been detected, a motion stereo camera installed on the robot generates a point cloud, and a clustering process extracts the fruit region. A three-point-algorithm-based ellipse detector then estimates the width of the extracted fruit region. Finally, the estimation result is immediately evaluated using multiple indicators. This immediate evaluation process rejects unreliable data and suggests the correct position for re-measurement.

    Download PDF (2793K)
  • Takeshi Yoshida, Takanori Fukao, Takaomi Hasegawa
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 180-186
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    This paper proposes a fast method for detecting tomato peduncles by a harvesting robot. The main objective of this study is to develop automated harvesting with a robot. The harvesting robot is equipped with an RGB-D camera to detect peduncles, and an end effector to harvest tomatoes. It is necessary for robots to detect where to cut a plant for harvesting. The proposed method detects peduncles using a point cloud created by the RGB-D camera. Pre-processing is performed with voxelization in two resolutions to reduce the computational time needed to calculate the positional relationship between voxels. Finally, an energy function is defined based on three conditions of a peduncle, and this function is minimized to identify the cutting point on each peduncle. To experimentally demonstrate the effectiveness of our approach, a robot was used to identify the peduncles of target tomato plants and harvest the tomatoes at a real farm. Using the proposed method, the harvesting robot achieved peduncle detection of the tomatoes, and harvested tomatoes successfully by cutting the peduncles.

    Download PDF (3901K)
  • Takuya Fujinaga, Shinsuke Yasukawa, Binghe Li, Kazuo Ishii
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 187-197
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    Due to the aging and decreasing number of workers in agriculture, the introduction of automation and precision farming is needed. Focusing on tomatoes, one of the major types of vegetables, we are engaged in the research and development of a robot that can harvest tomatoes and manage their growth state. For the robot to harvest tomatoes automatically, it must be able to detect the positions of harvestable tomatoes and plan the harvesting motions. Furthermore, it is necessary to grasp the positions and maturity of the tomatoes in the greenhouse and to estimate their yield and harvesting period so that the robot and workers can manage them. The purpose of this study is to generate a tomato growth state map of a cultivation lane, which consists of a row of tomatoes, aimed at achieving automatic harvesting and tomato management in a greenhouse equipped with production facilities. Information such as the positions and maturity of the tomatoes is attached to the map. As the first stage, this paper proposes a method for generating a greenhouse map (a wide-area mosaic image of a tomato cultivation lane). Using infrared images eases the correspondence problem between feature points when the mosaic image is generated. Distance information is used to eliminate the cultivation lane behind the targeted one as well as the background scenery, allowing the robot to focus only on the tomatoes in the targeted cultivation lane. To verify the validity of the proposed method, 70 images captured in a greenhouse were used to generate a single mosaic image, from which tomatoes were detected by visual inspection.

    Download PDF (2479K)
  • Hitoshi Sori, Hiroyuki Inoue, Hiroyuki Hatta, Yasuhiro Ando
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 198-205
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    In recent years, wet rice farming that does not use chemical herbicides has come into demand owing to diversified consumer needs, a preference for pesticide-free produce, and the need to reduce the environmental load. In this paper, we propose a “weeding robot” that can navigate autonomously while weeding a paddy field. The weeding robot removes weeds by churning up the soil and inhibits their growth by blocking off sunlight. It has two wheels, whose rotational speed is controlled by pulse width modulation (PWM) signals. Moreover, it has capacitive touch sensors to detect the rice plants and an azimuth sensor used when turning. To demonstrate its effect in wet rice culture, we conduct a navigation experiment using the proposed weeding robot in two types of paddy field: conventional and sparse planting. The experimental results demonstrate that the proposed weeding robot is effective in its herbicidal effect, promoting rice seedling growth and increasing crop yield.

    Download PDF (3596K)
  • Masahiro Shigeta, Reiichirou Ike, Hiroshi Takemura, Hayato Ohwada
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 206-213
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    According to the Ministry of Agriculture, Forestry, and Fisheries of Japan, the number of rearing houses in Japan has been decreasing in recent years due to lower business volumes. However, the number of rearing animals per house has been increasing, and in such situations, the management of a herd of cows becomes very important. Although systems such as milking robots and automatic feeding machines have been designed and implemented, an automatic measurement system for evaluating the body condition score (BCS), which is used for the nutrition management of dairy cows, has not yet become popular. There have been many prior studies on this subject; however, none of them have succeeded in creating an inexpensive and highly accurate system capable of capturing images over a long period of time. The purpose of this study was to develop a system that continuously and automatically captures images of cows with a camera over a long period of time and determines the BCS with high accuracy. By attaching a three-dimensional camera to the sorting gate of a milking robot, we developed a system that automatically captures images of cows as they pass through the gate. Data obtained from the captured images are automatically accumulated in a server; in this way, we collected a large amount of data within two months. All parts of the image except the dairy cow were removed from the obtained three-dimensional data, and the three-dimensional data were then converted into two-dimensional images. Subsequently, the two-dimensional images were analyzed using a convolutional neural network, resulting in 89.1% of the answers being correct. When the acceptable error was ±0.25, the rate of correct answers was 94.6%, and the average absolute error, i.e., the average difference between the predicted and actual BCS over all test data, was 0.05. Although the images did not cover the entire body of the cow, because of the camera position and the variation in the captured parts, we achieved high accuracy. This suggests that even higher accuracy can be achieved by automating the data flow and treating the data appropriately to determine the BCS.
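    The evaluation metrics described in the abstract, tolerance accuracy and mean absolute error, can be illustrated with a short, hedged sketch. The score lists below are invented for illustration and are not data from the paper; BCS values are assumed to lie on the standard 1-5 scale in 0.25 steps.

    ```python
    def bcs_metrics(predicted, actual, tolerance=0.25):
        """Fraction of predictions within +/- tolerance of the true BCS,
        plus the mean absolute error over all test samples."""
        errors = [abs(p - a) for p, a in zip(predicted, actual)]
        accuracy = sum(e <= tolerance for e in errors) / len(errors)
        mae = sum(errors) / len(errors)
        return accuracy, mae

    # Invented example scores, not data from the paper.
    predicted = [3.00, 3.25, 2.75, 3.50, 4.00]
    actual    = [3.00, 3.00, 2.75, 3.00, 4.00]
    accuracy, mae = bcs_metrics(predicted, actual)
    print(accuracy, mae)  # 0.8 0.15
    ```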

    Download PDF (1134K)
  • Hiroyuki Inoue, Toshiro Noritsugu
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 214-222
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    This paper proposes an upper-limb power assist machine driven by a single actuator to reduce the weight and cost. This assist machine is used to support the shoulder and elbow movements for viticulture operations, and upper-limb holding for load transport tasks. This assist machine consists of an arm part and a mounting part. The arm part is composed of a parallel link mechanism, which is driven by an actuator and a trapezoidal feed screw. To realize a natural upper-limb motion, the length of the arm part was designed based on the human upper-limb motion. The assist machine is controlled based on the user’s intention by applying bend sensors attached to the input device. By measuring the electromyography signal of five muscles, the effectiveness of the proposed upper-limb power assist machine was verified experimentally.

    Download PDF (2033K)
  • Abbe Mowshowitz, Ayumu Tominaga, Eiji Hayashi
    Article type: Development Report
    2018 Volume 30 Issue 2 Pages 223-230
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    This paper addresses the problem of using a mobile, autonomous robot to manage a forest whose trees are destined for eventual harvesting. “Manage” in this context means periodic weeding between all the trees in the forest. We have constructed a robotic system enabling an autonomous robot to move between the trees without damaging them and to cut the weeds as it traverses the forest. This was accomplished by 1) computing a trajectory for the robot in advance of its entrance into the forest, and 2) developing a program and equipping the robot with the instruments needed to follow the trajectory. Computation of a trajectory in a forest is facilitated by treating the trees as vertices in a graph. Current laser-based instruments make it possible to identify individual trees and compute the distances between them. With this information, a forest can be represented as a weighted graph. This graph can then be modified systematically in a way that allows for computing a Hamiltonian circuit that passes between each pair of trees. This representation is an instance of the well-known Travelling Salesman Problem. The theory was put into practice in an experimental forest located at the Kyushu Institute of Technology. Our robot “SOMA,” built on an ATV platform, was able to follow part of the trajectory computed for this small forest, thus demonstrating the feasibility of forest maintenance by an autonomous, labor-saving robot.
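    As a hedged sketch of the graph formulation described above: the tree coordinates below are invented, and a greedy nearest-neighbour heuristic stands in for the paper's Hamiltonian-circuit computation, which operates on a systematically modified graph rather than the raw distance graph.

    ```python
    import math

    # Hypothetical tree positions in metres; the real map comes from laser scans.
    trees = {"A": (0.0, 0.0), "B": (3.0, 0.0), "C": (3.0, 4.0), "D": (0.0, 4.0)}

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def nearest_neighbour_tour(nodes, start):
        """Greedy TSP heuristic: repeatedly visit the closest unvisited tree."""
        tour, left = [start], set(nodes) - {start}
        while left:
            nxt = min(left, key=lambda n: dist(nodes[tour[-1]], nodes[n]))
            tour.append(nxt)
            left.remove(nxt)
        tour.append(start)  # close the circuit back to the start
        return tour

    def tour_length(nodes, tour):
        return sum(dist(nodes[a], nodes[b]) for a, b in zip(tour, tour[1:]))

    tour = nearest_neighbour_tour(trees, "A")
    print(tour, tour_length(trees, tour))  # ['A', 'B', 'C', 'D', 'A'] 14.0
    ```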

    Download PDF (1260K)
  • Kazuyoshi Komeyama, Tatsuya Tanaka, Takeharu Yamaguchi, Shigeru Asaumi ...
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 231-237
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    For aquaculture management, farmers require a new, inexpensive device that can obtain the size of fish without touching them, replacing the conventional spoon-net sampling method. Conventional sampling involves risks of physical injury and mental stress to the fish, which may affect their growth rate and mortality. Therefore, we developed methods for monitoring fish size in red sea bream (RSB) aquaculture using commercially available cameras. This study evaluates the sample size required to estimate the mean fork length in a cage: approximately 20 samples yield a 2% error for fork lengths greater than 30 cm. We measured the fork length of fish underwater in the cage using the stereo vision and net-sampling methods simultaneously. The examination demonstrated that, for RSB aquaculture, the fork lengths estimated by the two methods show no statistically significant difference. This result implies that our stereo vision system can be effectively applied to monitor RSB growth.
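    The geometry behind such a stereo measurement can be sketched as follows. The focal length, baseline, and pixel coordinates are invented for illustration, and the paper's calibration and matching pipeline is not shown; this is only the textbook back-projection for a rectified stereo rig.

    ```python
    import math

    def to_3d(f_px, baseline_m, xl, xr, y):
        """Back-project a matched pixel pair from a rectified stereo rig
        into left-camera coordinates (metres): Z = f * B / disparity."""
        d = xl - xr                    # disparity in pixels
        Z = f_px * baseline_m / d      # depth
        return (Z * xl / f_px, Z * y / f_px, Z)

    def fork_length_m(f_px, baseline_m, snout, tail):
        """Euclidean distance between the snout and tail-fork points;
        each point is (x_left, x_right, y) in pixels."""
        return math.dist(to_3d(f_px, baseline_m, *snout),
                         to_3d(f_px, baseline_m, *tail))

    # Invented rig: 800 px focal length, 0.1 m baseline.
    length = fork_length_m(800.0, 0.1, (200.0, 150.0, 0.0), (20.0, -30.0, 0.0))
    print(round(length, 3))  # 0.36 (metres)
    ```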

    Download PDF (859K)
  • Yuya Nishida, Takashi Sonoda, Shinsuke Yasukawa, Kazunori Nagano, Mamo ...
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 238-247
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    A hovering-type autonomous underwater vehicle (AUV) capable of cruising at low altitudes and observing the seafloor using only mounted sensors and payloads was developed for sea-creature surveys. The AUV has a local area network (LAN) interface for an additional payload, which can acquire navigation data from the AUV and transmit target values to the AUV. In the AUV's state-flow handling process, the additional payload can control the AUV position using the transmitted target value without checking the AUV's condition. In this research, water tank tests and sea trials were performed using an AUV equipped with a visual tracking system developed in another laboratory. The experimental results proved that the additional payload can control the AUV position with a standard deviation of 0.1 m.

    Download PDF (2817K)
  • Shinsuke Yasukawa, Jonghyun Ahn, Yuya Nishida, Takashi Sonoda, Kazuo I ...
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 248-256
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    We developed a vision system for an autonomous underwater robot with a benthos sampling function, specifically a sampling autonomous underwater vehicle (sampling-AUV). The sampling-AUV includes the following five modes: preparation mode (PM), observation mode (OM), return mode (RM), tracking mode (TM), and sampling mode (SM). To accomplish the mission objective, the proposed vision system comprises software modules for image acquisition, image enhancement, object detection, image selection, and object tracking. The camera in the proposed system acquires images at intervals of five seconds during OM and RM, and at intervals of one second during TM. The system completes all processing stages within the image acquisition interval by employing high-speed algorithms. We verified the effective operation of the proposed system in a pool.

    Download PDF (3928K)
  • Shuo Han, Yuan Chen, Guangying Ma, Jinshan Zhang, Runchen Liu
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 257-264
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    In order to allow quadruped robots to adapt to the complex working environments found in fisheries and aquaculture, a new type of quadruped robot with linear and rotary drives is proposed, and the inverse kinematics of the robot's leg is derived. To achieve smooth walking, the straight gait of the quadruped robot is planned according to the stability margin principle of motion, such that the stability margin of the machine is 20 mm when three legs support it. The planned gait is simulated with the ADAMS software, kinematics and dynamics analyses of the four main driving mechanisms of the robot leg are carried out, and the feasibility of using the STEP5 driving function to execute the planned gait is verified. The theoretical and simulation curve analyses show that the quadruped robot can complete the gait cycle and walk stably according to the planned gait. The results of this study can serve as a reference for the practical application of the new amphibious quadruped robot on the complex and uneven ground found in fisheries and aquaculture, for exploration, fishing, and transportation.
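    The stability-margin criterion mentioned above can be illustrated with a small geometric sketch. The foot and center-of-mass positions below are invented; the paper's 20 mm figure comes from its own gait design, not from this computation.

    ```python
    import math

    def stability_margin(com_xy, feet_xy):
        """Static stability margin for a tripod stance: the smallest distance
        from the center-of-mass ground projection to an edge of the support
        triangle. Assumes the projection lies inside the triangle."""
        def point_segment_dist(p, a, b):
            (px, py), (ax, ay), (bx, by) = p, a, b
            ex, ey = bx - ax, by - ay
            # Clamp the projection parameter to stay on the segment.
            t = max(0.0, min(1.0, ((px - ax) * ex + (py - ay) * ey) / (ex * ex + ey * ey)))
            return math.hypot(px - (ax + t * ex), py - (ay + t * ey))
        a, b, c = feet_xy
        return min(point_segment_dist(com_xy, a, b),
                   point_segment_dist(com_xy, b, c),
                   point_segment_dist(com_xy, c, a))

    # Invented stance (metres): the margin is set by the nearest triangle edge.
    margin = stability_margin((0.2, 0.1), [(0.0, 0.0), (0.4, 0.0), (0.2, 0.3)])
    print(round(margin, 3))  # 0.1
    ```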

    Download PDF (2770K)
Regular Papers
  • Ndivhuwo Makondo, Michihisa Hiratsuka, Benjamin Rosman, Osamu Hasegawa
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 265-281
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    The number and variety of robots active in real-world environments are growing, as are the skills they are expected to acquire. To this end, we present an approach that enables non-expert users to easily teach a skill to a robot whose kinematics may differ from a human's and be unknown. This paper proposes a method that enables robots with unknown kinematics to learn skills from demonstrations. Our method obtains a motion trajectory from human demonstrations via a vision-based system and projects it onto a corresponding human skeletal model. The kinematic mapping between the robot and the human model is learned using Local Procrustes Analysis, a manifold alignment technique that enables the transfer of the demonstrated trajectory from the human model to the robot. Finally, the transferred trajectory is encoded into a parameterized motion skill using Dynamic Movement Primitives, allowing it to be generalized to different situations. Experiments in simulation on the PR2 and Meka robots show that our method correctly imitates various skills demonstrated by a human, and an analysis of the transfer of the acquired skills between the two robots is provided.

    Download PDF (2400K)
  • Felix Jimenez, Tomohiro Yoshikawa, Takeshi Furuhashi, Masayoshi Kanoh
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 282-291
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    In recent years, educational-support robots, which are designed to aid in learning, have received significant attention. However, learners tend to lose interest in these robots over time. To solve this problem, researchers studying human-robot interactions have developed models of emotional expression by which robots autonomously express emotions. We hypothesize that if an educational-support robot uses an emotion-expression model alone and expresses emotions without considering the learner, then the problem of losing interest in the robot will arise once again. To facilitate collaborative learning with a robot, it may be effective to program the robot to sympathize with the learner and express the same emotions as them. In this study, we propose a sympathy-expression method for use in educational-support robots to enable them to sympathize with learners. Further, the effects of the proposed sympathy-expression method on collaborative learning among junior high school students and robots are investigated.

    Download PDF (2759K)
  • Toshiki Matsui, Satoshi Murata, Takashi Honda
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 292-299
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    This article proposes two kinds of magnetic rotary actuators applicable to a next-generation capsule endoscope that can conduct a small-intestinal biopsy. One actuator is for an anchoring mechanism that can stop the capsule at a specific place against the peristaltic movement of the small intestine and press a biopsy instrument onto a lesion. The other actuator is for a biopsy mechanism that can project a circular blade while rotating and obtain a tissue sample. Both actuators have the same basic structure, composed of a bolt with a permanent magnet and a nut, and can be driven by a rotating magnetic field. Because they are arranged orthogonally to each other in the capsule, they can be operated individually by the corresponding rotating magnetic field. In an operational test in a porcine small intestine, an environment similar to actual use, the actuators successfully performed the desired operations.

    Download PDF (3253K)
  • Naoto Mizutani, Hirokazu Matsui, Ken’ichi Yano, Toshimichi Takahashi
    Article type: Paper
    2018 Volume 30 Issue 2 Pages 300-310
    Published: April 20, 2018
    Released on J-STAGE: October 20, 2018
    JOURNAL OPEN ACCESS

    Robotic drivers are used in vehicle performance tests, such as those for fuel consumption and exhaust gas. In driving test cycles, a test vehicle is driven on a dynamometer over a defined time-speed profile. These cycles have a tolerance band; thus, to compare the fuel consumption of various vehicles accurately, it is necessary to stay within the tolerance band and approach the target speed as closely as possible through better control performance. However, because a vehicle has complex dynamic characteristics, it is difficult to improve the control performance, especially with CVTs (continuously variable transmissions). In this paper, we improved the speed control performance through the derivation of a target speed waveform and the design of a control system that considers the dynamic characteristics of the CVT. First, we derived a target speed waveform that can remove the deviation from the tolerance band. Thereafter, we designed a control system to improve the speed control performance. The speed control performance of the proposed control systems was confirmed through vehicle running tests with the robotic driver.

    Download PDF (3741K)