This paper presents the realization of a four-legged, four-wheel-drive, hydraulically powered rover that can traverse irregular terrain such as agricultural or forest land. The fabricated rover weighs 331[kg], with a base 1,200[mm] long and 700[mm] wide, and a standard height of 1,000[mm] in its nominal configuration. The rover controls its whole-body joint torques using pressure servos on the hydraulic actuators, thereby optimally manipulating the ground reaction forces and wheel traction according to the rover's state. In this paper, we describe the design concept of the rover, followed by the details of the realized mechanical system, control system, and control methods. We validate the proposed system through experimental results on compliant balancing on a moving seesaw and traveling on a slope.
Soft robots are expected to take over human tasks and assist people in their daily lives. To control robotic tasks with flexible mechanisms and materials, alternatives to conventional rigid-robot control methods are needed. In this paper, we propose a robot kinematics and control method applicable to a robot arm with flexible joints. To avoid relying on the exact rotation angles of joints with rigid rotation axes, we derive a kinematic formulation that reduces the dependence on the joint variables. A prototype two-link robot arm was built with four flexible joints (three in the shoulder and one in the elbow). Using the prototype arm, we tested motion control with the derived kinematics. We also tested the task of unplugging a plug from an electric outlet in a manner that mimics human motion.
This letter presents a localization method that fuses optimization-based monocular visual localization using a Bayesian filter. Most visual localization methods are based on optimization, e.g., bundle adjustment, and optimization-based methods are generally sensitive to noise. Bayesian-filter-based methods are suitable for autonomous navigation because they are robust to noise and can smoothly estimate a trajectory. To fuse an estimate from an optimization-based method using a Bayesian filter, the uncertainty of the estimate must be determined; however, optimization-based methods do not provide this uncertainty. The presented method determines the uncertainty of the estimate from the Hessian matrix used in the optimization process. The estimate is then fused with odometry using a Kalman filter. We validate the uncertainty and the performance of the Bayesian-filter-based fusion. The results show that the uncertainty changes appropriately according to visual measurement conditions and that a smooth trajectory can be estimated. Additionally, we conduct autonomous flight with a quadcopter and confirm that autonomous flight can be achieved with the proposed localization. The software used in this work is publicly available.
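The fusion step described above can be sketched as a single Kalman update in which the measurement covariance is approximated by the (scaled) inverse of the optimization Hessian, since the Hessian encodes the curvature, i.e. the information content, of the cost function at the optimum. This is a minimal illustrative sketch under that assumption; the function name, the identity measurement model, and the scale factor are not from the paper.

```python
import numpy as np

def fuse_visual_with_odometry(x_pred, P_pred, z_visual, H_hessian, scale=1.0):
    """One Kalman update fusing an optimization-based visual pose estimate.

    x_pred, P_pred : odometry-predicted state and its covariance
    z_visual       : pose estimate from the optimization-based localizer
    H_hessian      : Hessian of the optimization cost at the estimate
    The measurement covariance R is taken as scale * inv(H_hessian),
    an illustrative assumption, not the paper's exact formulation.
    """
    R = scale * np.linalg.inv(H_hessian)       # uncertainty from the Hessian
    H = np.eye(len(x_pred))                    # direct pose measurement model
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z_visual - H @ x_pred)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```

With this formulation, a poorly conditioned optimization (small Hessian eigenvalues, e.g. weak visual texture) yields a large R, so the filter automatically down-weights the visual estimate relative to odometry.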
This paper presents a method for a planetary rover to select behavior modes autonomously. In conventional approaches, the behavior modes of a rover are selected by operators. When the environment changes, however, driving to the destination takes a long time because operator intervention is needed. Autonomous behavior mode selection is therefore required to improve exploration efficiency. The key idea of the proposed method is to interpret the environmental map with deep learning so that the rover can select the appropriate behavior mode for its environment. A simulation study was conducted to show the validity of the proposed method. The proposed method successfully demonstrates the ability to select behavior modes and to improve traverse efficiency.
We have thus far presented a brain–machine interface (BMI) for users of personal mobility robots. However, when the BMI predicts a wrong control command, both the user and the robot face the danger of collision accidents. In this paper, we therefore propose a fail-safe controller based on a convolutional neural network (CNN) for assisting users of personal mobility robots with the BMI. In addition to the control command, the fail-safe controller simultaneously predicts a depth map for the input image through multi-task learning. For this purpose, a convolutional autoencoder (CAE) and deep convolutional generative adversarial networks (DCGAN) are used in place of the plain CNN. In the experiments, we show that the fail-safe performance improves when the depth map for the input image is predicted, and that the fail-safe controller based on the DCGAN yields the best performance.
In the construction and civil engineering industries, there is a high demand for automating soil-moving work through collaboration between a backhoe and a dump truck. For such automation, it is necessary to obtain information about the dump truck, such as its pose relative to the backhoe and the condition of the soil in the truck bed. Regarding the latter, the shape and volume of the soil loaded in the bed are quite important for the machines to work efficiently. In this paper, we report our methodology for measuring the shape and volume of the soil loaded in a truck bed. In particular, to handle the occluded area on the far side of the loaded soil, which cannot be measured directly, the volume of soil in the truck bed is estimated by interpolating the occluded shape from a time-series point cloud while considering the angle of repose.
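The angle-of-repose constraint described above can be illustrated with a simplified one-dimensional height profile: occluded cells on the far side of the pile are filled by descending from the last visible cell at the angle of repose (soil cannot rest steeper than this), and the volume is then obtained by integration. This is a hedged sketch; the function name, the 1-D simplification, and the 35° default are illustrative assumptions, not the paper's actual point-cloud pipeline.

```python
import numpy as np

def fill_occlusion_and_integrate(heights, cell_size, repose_deg=35.0):
    """Fill occluded (NaN) cells of a soil height profile, then integrate.

    heights    : 1-D profile of soil heights above the bed floor; NaN marks
                 cells occluded by the pile itself (illustrative stand-in
                 for the time-series point cloud in the paper)
    cell_size  : horizontal size of one cell
    repose_deg : assumed angle of repose of the soil
    Returns the filled profile and the cross-sectional area (volume per
    unit bed width).
    """
    h = np.asarray(heights, dtype=float).copy()
    max_drop = np.tan(np.radians(repose_deg)) * cell_size  # drop per cell
    for i in range(1, len(h)):
        if np.isnan(h[i]):
            # descend from the last known height at the angle of repose,
            # clipped at the bed floor (height 0)
            h[i] = max(h[i - 1] - max_drop, 0.0)
    return h, float(np.sum(h) * cell_size)
```

The key design point is that the angle of repose gives a physically grounded upper bound on how steeply the hidden far slope can fall, so the interpolation cannot overestimate the occluded volume beyond what real soil would allow.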
In robotic assembly, robotic hands need to grasp parts accurately to achieve assembly tasks. Self-alignment of parts by means of form closure grasps is one solution. Depending on the hand mechanism or the part shape, the target grasps may be second-order form closure grasps. However, the feasibility of alignment toward second-order form closure grasps has hardly been studied. Assuming alignment of parts by a robotic hand with two circular fingers, in this paper we theoretically and experimentally study the feasibility of such alignment for two-dimensional parts, considering general part shapes.