-
Shun NISHIZAWA, Toshiyuki SATOH, Naoki SAITO, Jun-ya NAGASE, Norihiko ...
Session ID: 1A1-E05
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We propose a design method for an Unknown-Input Estimator (UIE) using the Artificial Bee Colony (ABC) algorithm that considers the stability of the initial colony. In our previous study, we used an objective function that evaluates the step response of the closed-loop system, which implicitly assumes that not every closed-loop system in the initial colony is unstable. Unfortunately, this is not always the case, so we derive a condition on the estimator gain that ensures closed-loop stability when the plant is a first-order system. Using this condition, we can generate stable closed-loop systems at the initialization stage.
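A minimal sketch of the idea of admitting only stability-guaranteed candidates into the initial colony; the toy plant, the closed-loop matrix, and the gain bounds below are placeholder assumptions, not the authors' UIE structure or their analytic gain condition.

```python
import numpy as np

# Toy first-order plant x_dot = a*x + b*(u + d); the closed-loop matrix for an
# estimator gain k = (k1, k2) is a placeholder for illustration only.
a, b = -1.0, 2.0

def closed_loop_matrix(k):
    k1, k2 = k
    return np.array([[a - b * k1, b],
                     [-k2,       -k2]])

def is_stable(k):
    return np.all(np.real(np.linalg.eigvals(closed_loop_matrix(k))) < 0.0)

def init_colony(n_bees, lo, hi, rng):
    """Rejection-sample candidate estimator gains until every bee is stable."""
    colony = []
    while len(colony) < n_bees:
        k = rng.uniform(lo, hi, size=2)
        if is_stable(k):            # admit only gains satisfying the stability check
            colony.append(k)
    return np.array(colony)

rng = np.random.default_rng(0)
colony = init_colony(n_bees=20, lo=np.array([0.0, 0.0]), hi=np.array([5.0, 5.0]), rng=rng)
print(colony.shape)  # (20, 2): every initial candidate yields a stable closed loop
```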
View full abstract
-
Junpei KASAHARA, Toshiyuki SATOH, Naoki SAITO, Jun-ya NAGASE, Norihiko ...
Session ID: 1A1-E06
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We attempt to make use of the Unknown-Input Estimator (UIE) proposed by Tang et al. by utilizing an existing state observer. The original UIE contains a state observer within its structure. However, if the control system in which the UIE is used is designed on the basis of state feedback, a state observer is already included, so the one inside the UIE appears redundant. Here, we examine whether the state observer in the UIE can be removed and the plant states estimated by the state observer in the control system can be used for the computation in the UIE. The experimental results show that the state observer in the control system can be substituted for the one in the UIE.
View full abstract
-
Tatsuya ISHIGURO, Hiroyuki OKUDA, Tatsuya SUZUKI
Session ID: 1A1-E07
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper proposes a model predictive trajectory planning method with collision avoidance for narrow parking spaces, such as cases in which parking cannot be completed with a single turn of the steering wheel. All objects are represented as rectangles, and the obstacle-avoidance constraint is defined by a concise inequality using a coordinate transformation that maps each rectangular object to a circle. Switching between forward and backward motion is performed without restrictions on position or number of switches, by tuning the weights of the cost function and the mechanical constraints of the vehicle. The performance of strict obstacle avoidance in a narrow environment and the computation time of the optimization are verified by numerical simulations.
View full abstract
-
Nobuaki ITO, Hiroyuki OKUDA, Shinkichi INAGAKI, Tatsuya SUZUKI
Session ID: 1A1-E08
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A towing robot, or tractor-trailer mobile robot (TTMR), has a high loading capacity and good mobility and has therefore been studied for many years. However, it is known that controlling a TTMR to automatically avoid obstacles is difficult because the system is underactuated and subject to nonholonomic constraints. Driving a TTMR along a narrow path is challenging, yet it is required for transportation in factories and warehouses. In this study, the authors control the TTMR so that it avoids obstacles even on a narrow path, considering the polyhedral shapes of both the TTMR itself and the obstacles. Farkas' lemma is applied for collision detection and avoidance. The usefulness of the method is shown by simulation in an environment with obstacles and a narrow path.
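The abstract does not give the exact constraint formulation, but the following sketch shows the generic idea of using linear-programming feasibility (whose infeasibility certificate is exactly Farkas' lemma, i.e. a proof of separation) to test whether two convex polytopes collide; the box data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def polytopes_collide(A1, b1, A2, b2):
    """Check whether {x: A1 x <= b1} and {x: A2 x <= b2} intersect.

    The combined system A x <= b is feasible iff the two convex polytopes overlap;
    by Farkas' lemma, infeasibility is certified by y >= 0 with y^T A = 0 and
    y^T b < 0, i.e. a separation proof. Here we simply ask an LP solver.
    """
    A = np.vstack([A1, A2])
    b = np.hstack([b1, b2])
    dim = A.shape[1]
    res = linprog(c=np.zeros(dim), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * dim, method="highs")
    return res.status == 0        # 0: feasible (collision), 2: infeasible (separated)

# Two axis-aligned unit squares, one at the origin and one shifted by 3 in x: no collision.
A_box = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b1 = np.array([1, 0, 1, 0], dtype=float)          # 0 <= x <= 1, 0 <= y <= 1
b2 = np.array([4, -3, 1, 0], dtype=float)         # 3 <= x <= 4, 0 <= y <= 1
print(polytopes_collide(A_box, b1, A_box, b2))    # False
```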
View full abstract
-
Chihaya TSUKADA, Jun ISHIKAWA
Session ID: 1A1-E09
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This article proposes a method of constructing a disturbance observer based on a recurrent neural network (RNN) inverse dynamics model. The proposed method learns the inverse dynamics of the system by feeding the input/output relationship of the nonlinear system to the RNN as a feature vector, and estimates the disturbance from the difference between the torque measured when the actual machine is operated and the torque output by the RNN. Comparison with a conventional linearized disturbance estimator showed that the proposed method obtains the same estimation results as the conventional method regardless of how the disturbance is applied.
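A rough sketch of the residual idea (disturbance estimate = measured torque minus torque predicted by a learned inverse-dynamics RNN). The network size, the input layout, and the data are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class InverseDynamicsRNN(nn.Module):
    """Maps a window of (position, velocity, command) samples to a predicted joint torque."""
    def __init__(self, n_in=3, n_hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_in, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):                    # x: (batch, time, n_in)
        h, _ = self.rnn(x)
        return self.out(h)                   # (batch, time, 1)

model = InverseDynamicsRNN()

def estimate_disturbance(history, measured_torque):
    """Disturbance = measured torque - torque predicted by the learned inverse model."""
    with torch.no_grad():
        predicted = model(history)[:, -1, 0]
    return measured_torque - predicted

history = torch.randn(1, 50, 3)              # 50-step window of (q, dq, u), dummy data
print(estimate_disturbance(history, torch.tensor([0.2])))
```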
View full abstract
-
Ryuichiro TSUNODA, Mitsuhiro KAMEZAKI, Peizhi ZHANG, Sahil SHEMBEKAR, ...
Session ID: 1A1-E10
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The purpose of this study is to develop a basic control system that enhances mechanical and geometric adaptability for magnetorheological fluid (MRF) robot arms with high backdrivability and high output power, which we developed in a previous study. A prototype control system was developed to control the endpoint force by controlling the pressure in the actuator. Gravity compensation and surface-copying motion experiments were conducted. The experimental results showed that the proposed controller could effectively control the MRF robot arm.
View full abstract
-
Yasuhiko FUKUMOTO
Session ID: 1A1-E11
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, deep Q-learning is applied to realize force control for a touching motion. Impedance control is generally used when a robot contacts an environment. However, the robot keeps bouncing on the surface of the object it touches if the approach speed is not slow enough. Therefore, we attempt to realize a higher approach speed without bouncing and develop a novel force controller based on deep Q-learning. This controller decides the velocity command values based on the force acting on the robot, the velocity of the robot, and the past velocity commands. The controller was tested experimentally, and performance exceeding that of impedance control was achieved after 19,840 trials.
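A minimal sketch of the decision step only (no training loop), assuming the state is (contact force, velocity, recent velocity commands) and the actions are a small discrete set of velocity commands; the network size, action grid, and units are assumptions.

```python
import torch
import torch.nn as nn

N_PAST = 4                                   # past velocity commands kept in the state
ACTIONS = torch.linspace(-0.02, 0.02, 9)     # candidate velocity commands [m/s], assumed grid

q_net = nn.Sequential(nn.Linear(2 + N_PAST, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, len(ACTIONS)))

def select_velocity(force, velocity, past_cmds, eps=0.05):
    """Epsilon-greedy velocity command from contact force, current velocity, and command history."""
    state = torch.tensor([force, velocity, *past_cmds], dtype=torch.float32)
    if torch.rand(()) < eps:
        idx = torch.randint(len(ACTIONS), ())
    else:
        with torch.no_grad():
            idx = q_net(state).argmax()
    return float(ACTIONS[idx])

print(select_velocity(force=1.5, velocity=0.01, past_cmds=[0.01, 0.01, 0.008, 0.0]))
```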
View full abstract
-
Hirokazu Ishida, Kei Okada, Masayuki Inaba
Session ID: 1A1-E12
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In nonlinear-optimization-based online trajectory planning, warm-started replanning using a good initial solution speeds up the solving time significantly. However, the computational load required for such online replanning, and even its solvability, are unknown to the user. This lack of guarantees hinders robots from being applied to real-life tasks. In this paper, we focus on the fact that in many daily-life settings, manipulation is performed in known environments. Taking advantage of this observation, we propose a method to generate a trajectory library with which an upper bound on the computational load and solvability in the online replanning phase are guaranteed. Note that the trajectory library generation takes place offline. We applied the proposed method to reaching tasks into a refrigerator and showed that offline trajectory generation is performed properly. Finally, we performed a real-robot demonstration of online replanning using the generated trajectory library.
View full abstract
-
Kousuke Okabe
Session ID: 1A1-E13
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The dynamic manipulability ellipsoid is a measure that represents the set of end-effector accelerations achievable under limited joint drive torques. The ellipsoid is translated by the effects of gravity and joint velocities. Previously, we derived the translation vector by which the dynamic manipulability polytope is shifted by motion velocity on an extended task space that couples the task space with the internal-motion space. In this paper, we confirm the translation of the dynamic manipulability polytope caused by internal-motion velocity using an actual planar three-joint redundant manipulator.
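For reference, the standard (Yoshikawa-style) relation behind the dynamic manipulability ellipsoid and its gravity/velocity-dependent translation; the authors' extended-task-space formulation is not reproduced here.

```latex
% Task-space acceleration under the manipulator dynamics M(q)\ddot{q} + c(q,\dot{q}) + g(q) = \tau:
\ddot{x} = J(q)\,M(q)^{-1}\bigl(\tau - c(q,\dot{q}) - g(q)\bigr) + \dot{J}(q)\,\dot{q}
% Under a normalized torque bound, the achievable accelerations form an ellipsoid whose
% center is not the origin but is translated by the torque-independent bias
\ddot{x}_{\mathrm{offset}} = -J(q)\,M(q)^{-1}\bigl(c(q,\dot{q}) + g(q)\bigr) + \dot{J}(q)\,\dot{q},
% i.e. the gravity- and velocity-dependent translation discussed in the abstract.
```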
View full abstract
-
Yuliu Wang, Yusuke Yoshiyasu, Eiichi Yoshida
Session ID: 1A1-E14
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We propose a deep reinforcement learning method that learns to produce grasping strategies using 3D point clouds as input. We facilitate the training process by using a tensor-based distributed training framework to perform many trial-and-error attempts with a wide variety of objects. Our approach greatly increases the number of object categories that can be handled and exhibits strong generalization to grasping unknown object categories.
View full abstract
-
Kyohei UNUMA, Yusuke YOSHIDA, Satoshi HOSHINO
Session ID: 1A1-E15
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In order for mobile robots to move autonomously, collision avoidance is an essential capability. Thus far, end-to-end motion planners based on deep neural networks (DNN) and convolutional neural networks (CNN) have been proposed. However, robots using these planners trained through imitation learning sometimes fail to avoid obstacles in unknown environments. This is due to the limited generalization performance of the planners. In order to improve the generalization performance, we propose a novel motion planner based on DNN and CNN using multi-task learning. Through experiments, we show the effectiveness of the proposed motion planner for collision avoidance by comparing the generalization performance of the DNN and CNN.
View full abstract
-
Takumi Shinzaki, Daisuke YASHIRO, Kazuhiro YUBAI, Satoshi KOMADA
Session ID: 1A1-E16
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A state estimator that uses both a position sensor and an acceleration sensor is used to improve the position control performance of a motor-driven system. If the sampling cycle of the position sensor is longer than the control cycle and the measurement delay is large, the position cannot be estimated accurately, resulting in deteriorated control performance. Therefore, in this paper, we propose applying the Smith method to a controller that uses a state observer based on both a camera and an accelerometer in order to improve control performance. The validity of the angle controller using the Smith method with the state observer is verified by simulations and experiments.
View full abstract
-
Keito SUGAWARA, Masahiro AITA, Toshiaki TSUJI
Session ID: 1A1-E17
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper proposes grinding motion generation based on human motions using a variational autoencoder, a generative model based on neural networks. Unlike robot motions, human motions are not always the same even when repeated for the same purpose. By generating motions that take this variation into account, the robot can perform grinding tasks with motion diversity. The proposed method uses a variational autoencoder to learn human grinding motions. In order to generate long-duration motions, the task was divided between two variational autoencoders. The proposed method can generate grinding motions with motion diversity.
View full abstract
-
-Self-recognition of failure in plan execution-
Haruyoshi KAWASE, Kosuke SEKIYAMA, Khusniddin FOZILOV
Session ID: 1A1-E18
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In the real world, autonomous robots need to react to unexpected events such as action failures and exogenous events. In this paper, we focus on recognition of action results and error recovery. We implemented a real-time robot planning system using Behavior Trees. It monitors the state, plans the order of actions, and switches the plan depending on the success or failure of nodes. It is therefore necessary to recognize failures and to detect discrepancies in the plan caused by problems in determining success. We conducted pick-and-place experiments using this system and show that it can recover from errors when placing fails.
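A minimal behavior-tree sketch of the failure-recovery pattern (a Selector falls back to a recovery branch when Place reports FAILURE). Node names and the toy world dictionary are hypothetical, not the authors' implementation.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    def __init__(self, children): self.children = children
    def tick(self, world):
        for c in self.children:
            if c.tick(world) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:                       # a.k.a. Fallback: try children until one succeeds
    def __init__(self, children): self.children = children
    def tick(self, world):
        for c in self.children:
            if c.tick(world) == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    def __init__(self, name, fn): self.name, self.fn = name, fn
    def tick(self, world):
        ok = self.fn(world)
        print(f"{self.name}: {'ok' if ok else 'failed'}")
        return SUCCESS if ok else FAILURE

def pick(w):    w["holding"] = True;  return True
def place(w):   return w.pop("drop_once", None) is None      # fails on the first try
def regrasp(w): w["holding"] = True;  return True

tree = Sequence([
    Action("pick", pick),
    Selector([Action("place", place),                         # nominal plan
              Sequence([Action("recover_regrasp", regrasp),   # recovery on failure
                        Action("place_again", place)])]),
])
print(tree.tick({"drop_once": True}))   # recovery branch runs, overall result SUCCESS
```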
View full abstract
-
Ryosuke KAWANISHI, Eriko SAKURAI, Motoki KIMURA, Hiroyuki OKA, Yoshihi ...
Session ID: 1A1-F01
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In the field of robotics, robotic bin picking has been studied extensively. In this paper, we consider an irregularly shaped object as the grasping target. For detecting irregularly shaped objects, instance segmentation based on deep learning is one of the promising methods. One of the challenges for deep learning-based methods is reducing the time and effort needed to prepare the training dataset. In this paper, we propose a method to automatically generate a dataset for learning instance segmentation using only information available from public image databases. The proposed method achieves a mean average precision (mAP) of 0.85 on the automatically generated test data. It also achieves an mAP of 0.65 on test data generated from untrained irregularly shaped objects, and a success rate of more than 98% in picking experiments with the robot.
View full abstract
-
Yuga NAKAMURA, Weiwei WAN, Keisuke KOYAMA, Kensuke HARADA
Session ID: 1A1-F03
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We propose a framework of environmental caging for manipulating objects that cannot be directly grasped by hand. We utilize environmental contact under gravity to relax the conditions required to achieve caging manipulation. We search for the equilibrium states of an object for a given finger position, determine whether each pair of equilibria can be connected, and create a graph network. We then search the network and retrieve the finger trajectory from the initial state to the target state. By moving a finger along the retrieved path, we lift an object in experiments using a dual-arm manipulator.
View full abstract
-
Kento NAKATSURU, Weiwei WAN, Keisuke KOYAMA, Kensuke HARADA
Session ID: 1A1-F04
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a method for manipulating a large object that is difficult for a robot to lift, by supporting it with the environment without grasping it completely. We propose a motion planning method based on a path obtained by constructing and searching a graph network whose nodes are the contact states that occur between the object and the environment. To demonstrate the usefulness of the proposed method, we conducted experiments on an actual robot in which a large object was pulled up onto a worktable and rotated on the worktable. The experimental results show that, by adjusting the edge connections and the cost settings, it is possible to plan a path that does not place a large load on the robot.
View full abstract
-
Ryota Yashima, Akihiko Yamaguchi, Koichi Hashimoto
Session ID: 1A1-F05
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we explore a systematic debugging method for model-based reinforcement learning in which a library of skills is introduced. When the performance (learning speed, obtained quality of behavior) of model-based reinforcement learning is not sufficient, identifying the reason is difficult, especially when the dynamics are complicated, as in liquid pouring. In our previous work, we introduced a library of skills into reinforcement learning of such complicated tasks. We consider the use of a skill library to also be beneficial for investigating performance issues, since each subset of skills can be tested separately. Our goal is to establish a systematic debugging method for reinforcement learning based on this idea. This paper reports a preliminary development toward this goal, in which we repeatedly increase and decrease the complexity of a subtask, as in curriculum learning, to make debugging easier until we obtain sufficient results with the original task. We conducted simulation experiments of liquid pouring to investigate this approach. The results show a performance improvement.
View full abstract
-
Seita NOJIRI, Akihiko YAMAGUCH, Yoshiyuki SUZUKI, Yosuke SUZUKI, Tokuo ...
Session ID: 1A1-F06
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A friction-variable mechanism is useful in manipulation tasks. However, conventional mechanisms have limitations such as the lack of a sensory system. In this study, we develop a new friction-variable surface that is compact, easy to manufacture, and capable of direct observation of the contact surface. Specifically, we propose a friction-variable surface that utilizes the elastic deformation of protrusions. By utilizing the change in contact area due to the elastic deformation of the protrusions, the friction is changed with a simple structure, enabling miniaturization. The protrusions can be fabricated by 3D printers with transparent elastic materials, which makes manufacturing easier. We also introduce a camera to observe the contact surface through the transparent skin.
View full abstract
-
Yoshiyuki OYAMA, Ken MASUYA, Masafumi OKADA
Session ID: 1A1-F08
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Moving objects by throwing is effective because it can transport objects over a wide area in a short time. For practical applications, the throwing accuracy must be high. In this paper, the minimum set of dynamic parameters of a 3-DOF throwing manipulator is optimized using sensitivity analysis, and the optimal feedforward torque is obtained. Focusing on a planar 3-DOF manipulator, the minimum set of dynamic parameters is regarded as a stochastic variable. Using sensitivity analysis, the parameters are identified so that they have a pre-defined covariance that yields high accuracy of the landing points. The effectiveness of the proposed method is evaluated by experiments.
View full abstract
-
Shumpei WAKABAYASHI, Shingo KITAGAWA, Kento KAWAHARAZUKA, Takayuki MUR ...
Session ID: 1A1-F09
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In research on object grasping, systems that produce consistent results from recognition to grasp motion have been actively studied. Usually, a single grasp point is determined even though an object such as tableware offers redundancy in how it can be grasped. In addition, it is difficult to reflect input constraints arising from the robot's hardware or the surrounding environment. In this study, we propose a neural network that modifies the grasp pose around an initial pose using visual information and actual trials. Our system can autonomously collect supervised data so that the robot can learn by itself. Since the search points are narrowed down to the edge points of the object, the real robot can efficiently acquire grasping ability in fewer trials. As a result, it can grasp unknown objects and flexibly change its grasp position because the input can be easily constrained.
View full abstract
-
Tsubasa MURYOE, Yosuke SUZUKI, Tokuo TSUJI, Tetsuyou WATANABE
Session ID: 1A1-F11
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper proposes a method for generating robot motions from teaching data that include failures. When members of the general public generate robot motions by teaching, it is assumed that failed trajectories will be repeated until the desired trajectory is obtained; as a result, the recorded trajectories contain failures. In this research, we try to establish a methodology to extract only the trajectories that can realize the desired task from trajectories that contain failures. Based on the segmentation of actions and the assumption that the successful action is the last action before the action changed significantly, we generate trajectories consisting only of successful actions.
View full abstract
-
Yuki SAKATA, Takuo SUZUKI
Session ID: 1A1-F12
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Robotic vacuum cleaners have become widespread for sweeping the floor. Wiping objects is also important for cleaning the house, but some objects, such as a toilet bowl or a kitchen sink, have curved surfaces. If the curved surface of an object can be expressed mathematically, the trajectory to wipe the surface can be obtained. In this research, a mobile manipulator with an RGB-D camera was selected to collect point cloud data and handle cleaning tools. A point cloud model of a toilet bowl was created, and a trajectory based on a B-spline curve was generated. In the experiments, Toyota's Human Support Robot was used, and the cleanliness of the toilet bowl was evaluated with reference to the rules of a robot competition (the World Robot Summit). As a result, the authors confirmed that the scrubbing movement, which is part of the wiping movement, should be improved.
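A small sketch of fitting a B-spline to ordered surface points and resampling it as wiping waypoints; the synthetic rim points stand in for the real RGB-D point cloud, and the smoothing parameters are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Toy stand-in for the measured surface: ordered points along a curved rim.
t = np.linspace(0.0, np.pi, 30)
rim = np.stack([0.15 * np.cos(t),            # x [m]
                0.15 * np.sin(t),            # y [m]
                0.02 * np.sin(3 * t)], 0)    # z [m], slight curvature

# Fit a smoothing cubic B-spline through the ordered surface points ...
tck, _ = splprep(rim, s=1e-5, k=3)

# ... and resample it densely to obtain waypoints for the wiping (scrubbing) motion.
u = np.linspace(0.0, 1.0, 200)
x, y, z = splev(u, tck)
waypoints = np.stack([x, y, z], axis=1)      # (200, 3) end-effector positions
print(waypoints[:3])
```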
View full abstract
-
Yuta KURIHARA, Hideaki YAGI, Ryo KOBAYASHI, Satoshi HOSHINO
Session ID: 1A1-G01
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a localization method that uses MCL and NDT scan matching in a hybrid manner. In this method, a cadastral map is used for MCL. However, the localization accuracy depends on the map. To address this problem, we further apply online SLAM: through NDT scan matching, the robot simultaneously builds another map. Such front-end SLAM, however, accumulates localization and mapping errors. For this problem, the localization error of the NDT scan matching is partially corrected using MCL. Through navigation experiments, we show that a robot using the proposed localization method is able to move autonomously toward its destination.
View full abstract
-
Evaluation of Localization Performance in Traffic Congestion Using a Simulator
Yuma MURAMATSU, Yudai YAMAZAKI, Yoshiki NINOMIYA, Yuki KITSUKAWA, Juni ...
Session ID: 1A1-G02
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Many autonomous vehicles use 3D LiDAR to perform localization. The localization accuracy of these vehicles is greatly affected by the LiDAR performance, the LiDAR mounting position, and the surrounding environment. Because localization requires high reliability, we need to know the accuracy that autonomous vehicles can achieve at a given location. This normally requires repeated driving tests and simulations using raw data recorded in that environment, which demand considerable cost and effort. Therefore, we evaluate LiDAR performance using raw sensor data, and evaluate the sensor mounting position using sensor data generated in the virtual city area of a sensor simulator. Based on these evaluation results, we verify the effect of LiDAR performance and mounting position on localization accuracy.
View full abstract
-
Naoki AKAI, Takatsugu HIRAYAMA, Hiroshi MURASE
Session ID: 1A1-G03
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper presents a hybrid localization method that combines model-based and learning-based approaches. Monte Carlo localization (MCL) is used as the model-based method, and end-to-end (E2E) learning is used to implement the learning-based method. Monte Carlo dropout is applied to the E2E localization, and its output is treated as a probability distribution. This distribution is then used as a proposal distribution, and the E2E localization estimate is fused with MCL via importance sampling. Experimental results show that the advantages of both methods are leveraged simultaneously while their disadvantages are mitigated.
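A compact sketch of the importance-sampling fusion step: particles are drawn from the E2E (Monte Carlo dropout) pose distribution used as a proposal and reweighted by the MCL measurement model. The Gaussian proposal, the dummy measurement model, and the omission of the motion prior are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_e2e_with_mcl(e2e_mean, e2e_cov, measurement_likelihood, n_particles=500):
    """Importance weight = measurement likelihood(x) / proposal density(x)."""
    particles = rng.multivariate_normal(e2e_mean, e2e_cov, size=n_particles)
    d = particles - e2e_mean
    inv = np.linalg.inv(e2e_cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** len(e2e_mean) * np.linalg.det(e2e_cov))
    proposal = norm * np.exp(-0.5 * np.einsum("ni,ij,nj->n", d, inv, d))
    weights = measurement_likelihood(particles) / proposal
    weights /= weights.sum()
    return (weights[:, None] * particles).sum(axis=0)     # fused pose estimate (x, y, yaw)

# Dummy measurement model peaked at (1.0, 2.0, 0.1); in MCL this would come from the LiDAR scan.
lik = lambda x: np.exp(-0.5 * np.sum((x - np.array([1.0, 2.0, 0.1])) ** 2, axis=1) / 0.05)
print(fuse_e2e_with_mcl(np.array([0.8, 2.2, 0.0]), np.diag([0.04, 0.04, 0.01]), lik))
```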
View full abstract
-
Hiroki YASUMOTO, Toshiyuki TANAKA
Session ID: 1A1-G04
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Probabilistic robotics uses the Bayesian formulation to solve global localization. An example is Monte Carlo localization (MCL), which spreads particles uniformly over the environment. While KLD-sampling can adaptively decrease the number of particles as localization proceeds, its first measurement update takes relatively long computation time. Approaches that use the Hough transform, on the other hand, may have a computational advantage, but the way they vote and localize appears to be heuristically determined. In this paper, a method that combines both approaches is proposed. The proposed method computes the likelihood of the measurement model based on the Hough transform and uses MCL to localize the robot. A simple simulated experiment suggests that the proposed method can compute the first measurement update faster than MCL with the usual likelihood computation, if the particles are initially placed on a grid over the pose space.
View full abstract
-
-Realization of seamless location estimation by switching methods according to the environment-
Yuta HODA, Junta MATSUO, Kenya TAKEMURA, Osamu SEKINO, Junichi MEGURO
Session ID: 1A1-G05
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a combined positioning method using GNSS and LiDAR. Conventional positioning methods based on GNSS or LiDAR alone have the problem that their estimation accuracy depends on the environment. In addition, it is difficult to combine GNSS and LiDAR because they use different coordinate systems. We therefore unify the coordinate systems by using a method that assigns absolute position information to 3D point clouds. We then optimize the position estimation by switching the estimation method according to the environment and combining the estimates with an EKF. We evaluate the performance of the proposed method in position estimation and test its application to vehicle control to confirm its effectiveness.
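A generic sketch of an EKF step that fuses whichever position source is selected for the current environment; the constant-velocity model, the noise values, and the simple switching rule are placeholder assumptions, and both measurements are assumed to be in the unified map frame.

```python
import numpy as np

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])          # both sensors observe position only
Q = np.diag([0.01, 0.01, 0.1, 0.1])
R_GNSS, R_LIDAR = np.diag([0.5, 0.5]), np.diag([0.05, 0.05])

def ekf_step(x, P, z, sensor):
    """One predict/update cycle; 'sensor' selects which measurement noise to trust."""
    x, P = F @ x, F @ P @ F.T + Q                      # prediction
    R = R_LIDAR if sensor == "lidar" else R_GNSS
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def pick_sensor(num_satellites, ndt_fitness):
    # crude switching rule: prefer LiDAR map matching unless its fitness degrades
    return "lidar" if ndt_fitness > 0.7 else ("gnss" if num_satellites >= 8 else "lidar")

x, P = np.zeros(4), np.eye(4)
x, P = ekf_step(x, P, z=np.array([1.0, 0.5]), sensor=pick_sensor(10, 0.9))
print(x[:2])
```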
View full abstract
-
Kein MATSUI, Tadahiro HASEGAWA, Kaito ICHIHARA, Takumi ISHII, Shin’ich ...
Session ID: 1A1-G06
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We have created a 3D environment map building system using a vehicle equipped with dual-frequency RTK-GNSS, an IMU, and 3D LiDAR for use in surveying. Building a 3D environment map involves two main steps: 3D self-localization and coordinate transformation of point cloud data (PCD). First, the accurate 3D position and orientation of the LiDAR were estimated using dual-frequency RTK-GNSS and the IMU while driving at 10 to 30 km/h. As the base station for the RTK-GNSS, the high-accuracy position data delivery service "ichimill" was used in this experiment. Second, a highly accurate 3D environment map was created by superimposing PCD that were coordinate-transformed based on the 3D self-localization data. The experimental results showed that a highly accurate 3D environment map can be created simply by driving a vehicle and acquiring sensor data. This system is expected to be applied in areas where drone surveying is difficult.
View full abstract
-
Masatoshi MOTOHASHI, Takashi KUBOTA
Session ID: 1A1-G07
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper presents a navigation mode selection method for a planetary rover. Since it is impossible for operators to control the rover remotely due to communication constraints, the rover is required to navigate itself to the destination. However, the detailed mode of each navigation function needs to be set by operators, so human intervention is required every time the environment changes. In order to realize more efficient exploration, the rover itself must select a navigation mode adapted to the environment. This paper proposes a method to select an appropriate navigation mode from images taken by the rover using deep learning, and evaluates the validity of the proposed method.
View full abstract
-
Hiroto SATO, Kousuke UHIYAMA, Fumio ITO, Manabu OKUI, Rie NISHIHAMA, T ...
Session ID: 1A1-G08
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In the inspection of sewage pipes, a pipeline map is necessary to efficiently identify damaged areas; however, such maps are often lost, so it is necessary to create a pipeline map at the same time as the inspection. The authors have previously developed a peristaltic robot for sewer pipe inspection and have estimated the pipe shape using IMU sensors. However, the accuracy of the estimated map was not sufficient for practical use. In this study, we try to improve the estimation accuracy by measuring the position and orientation of the start and end points of the pipeline and using them to constrain the estimated pipeline shape. The authors represent the pipe geometry with only three parameters, and the correction is performed by iterative calculation using the inverse Jacobian matrix.
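A generic Gauss-Newton-style sketch of the "correct three shape parameters by iterating with an inverse Jacobian" idea; the forward model f() below is a toy parameterization for illustration, not the paper's three-parameter pipe model.

```python
import numpy as np

def f(p):
    """Toy forward model: end-point position from (length, yaw, pitch)."""
    L, yaw, pitch = p
    return np.array([L * np.cos(pitch) * np.cos(yaw),
                     L * np.cos(pitch) * np.sin(yaw),
                     L * np.sin(pitch)])

def correct(p, end_measured, iters=20):
    for _ in range(iters):
        r = end_measured - f(p)                        # residual at the measured end point
        # numerical Jacobian of f with respect to the three parameters (central differences)
        J = np.stack([(f(p + h) - f(p - h)) / (2e-6)
                      for h in 1e-6 * np.eye(3)], axis=1)
        p = p + np.linalg.pinv(J) @ r                  # inverse-Jacobian update
        if np.linalg.norm(r) < 1e-9:
            break
    return p

p0 = np.array([9.0, 0.1, 0.0])                         # initial guess, e.g. from the IMU-based estimate
print(correct(p0, end_measured=np.array([8.0, 3.0, 1.0])))
```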
View full abstract
-
Hibiki KAWAI, Yoji KURODA
Session ID: 1A1-G09
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Unlike feature descriptors such as SIFT and SURF, semantic segmentation is strongly robust to optical changes such as cross-seasonal and day-night changes. In this paper, we propose an improved visual localization method that uses semantically segmented images and a mesh map with semantic information built from annotated LiDAR scan data, and we solve the problems of our previous study. In the localization phase, we use traditional Monte Carlo localization and calculate the likelihood by comparing the segmented image from the on-board camera with an image of the mesh-map landscape as seen from a candidate predicted location. This method achieves practical localization accuracy while keeping the benefits of semantic segmentation. The source code used in this experiment is available at github.com/amslabtech/semantic_mesh_localization.
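A small sketch of one plausible likelihood for such a particle filter: the agreement ratio between the segmented camera image and the map labels rendered from the particle pose. The exponential form, the parameter alpha, and the ignore label are assumptions; the paper's exact likelihood may differ.

```python
import numpy as np

def semantic_likelihood(camera_labels, rendered_labels, alpha=5.0, ignore_label=255):
    """Likelihood of a particle pose from pixel-wise semantic-label agreement."""
    valid = rendered_labels != ignore_label            # skip pixels with no map information
    match = np.mean(camera_labels[valid] == rendered_labels[valid])
    return np.exp(alpha * match)

# Toy 4x4 label images (0: road, 1: building, 2: vegetation)
cam = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 1, 1], [2, 2, 1, 1]])
ren = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 255, 1], [2, 2, 1, 1]])
print(semantic_likelihood(cam, ren))   # higher when the predicted pose explains the camera view
```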
View full abstract
-
Yasunori HIRAKAWA, Yoji KURODA
Session ID: 1A1-G10
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a new localization method combining visual place recognition with MCL. It is difficult for conventional localization methods to localize in the absence of prior position information. This is especially true in environments with poor geometric features, because most of these methods rely only on geometric information. We therefore reflect the results of similarity-based visual place recognition in MCL, a probabilistic, geometry-based localization method using LiDAR. An experiment was conducted in a simulation environment to compare MCL using only geometric information with the proposed method using both geometric and visual information. The experimental results show that our method is superior in terms of accuracy and convergence speed.
View full abstract
-
Kaito ICHIHARA, Tadahiro HASEGAWA, Shin’ichi YUTA, Yoshihide NARUSE, H ...
Session ID: 1A1-G11
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We have developed the guidance robot "EM-Ro" for the ECO35 muffler museum and have already realized both a visitor-escort method that follows a prepared route and a visitor-following method that guides visitors as they move around. In this paper, a navigation system for guidance robots that can switch between these guidance methods was developed. Waypoint navigation is applied to both the visitor-escort and visitor-following methods; therefore, switching between prepared and visitor-derived waypoints lets visitors choose the guidance method they prefer. Visitors can switch the guidance method at any time by sending a request to EM-Ro from the remote controller. The experimental results showed that the navigation system enables seamless switching of the guidance method.
View full abstract
-
Kota SHIMADA, Takumi MATSUDA, Yoji KURODA
Session ID: 1A1-G12
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a jerk-suppressed action planning method for mobility robots and transfer robots using deep reinforcement learning that considers human-to-human and human-to-robot interactions. In dynamic environments such as stations or airports, the surrounding situation is complex, and there are situations that rule-based planning has difficulty dealing with. We aim for safe and smooth action planning by using deep reinforcement learning.
View full abstract
-
Yuuki FUJISAKI, Hiroyuki KOBAYASHI
Session ID: 1A1-G13
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, the authors propose a method that changes the existing illumination-light individual identification method CEPHEID, which used a classification neural network model, to a regression neural network model. In the previous method, it is difficult to improve the resolution of position estimation because the resolution depends on the spatial interval of the lighting equipment. The authors therefore propose that the resolution of position estimation can be improved by using a regression neural network. Finally, the method was implemented on an AI-capable MCU and a self-position estimation experiment was conducted.
View full abstract
-
Yoshikazu EBINA, Akio YASUDA, Masato MIZUKAMI, Naohiko HANAJIMA, Yoshi ...
Session ID: 1A1-G14
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
When inspecting infrastructure buried in the ground, such as pipelines, it is necessary to investigate whether other buried objects exist. One method studied for this purpose moves a hand-operated measuring cart with a ground-penetrating radar over the ground in the area to be investigated. Information can be used effectively by estimating the position of the measuring cart and correlating it with the data from the ground-penetrating radar. In this paper, we propose a self-position estimation method that uses video information of the road surface. Furthermore, we verify the effectiveness of the proposed method by a demonstration experiment.
View full abstract
-
Kousuke SHIZUKU, Rahok SAM ANN, Tatsuya KITANO, Kazumichi INOUE
Session ID: 1A1-G15
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
SLAM is one of the most popular methods for building a map for mobile robots. However, it accumulates error due to wheel slippage and uneven floors, and the built map may show locations different from the actual ones. In this research, we use a method called loop closure to eliminate the cumulative error. Furthermore, we add magnetic direction information to the map and use it to correct the heading of the mobile robot during autonomous navigation, making the navigation method more robust.
View full abstract
-
-Improvement of Self-localization Ability by Removing Roads and Sky Regions-
Nobuhiko Matsuzaki, Sadayoshi Mikami
Session ID: 1A1-G16
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Self-localization is essential for navigation and is generally done outdoors by GPS. However, GPS tends to produce large errors where radio reflection occurs, such as in urban areas, which sometimes prevents precise self-localization. Meanwhile, a human may compare his or her surroundings with street-view images when determining the current location. To implement this, we have to solve image matching between the current scene and the images in a street-view database. However, since the field angle, time, and season differ widely between images, standard feature-based pattern matching is difficult. DeepMatching can precisely match images that differ in lighting and field angle. Nevertheless, DeepMatching tends to misjudge street images because it may find unnecessary feature points in the road and sky. This paper proposes a method that computes image similarity from features such as buildings by excluding the road and sky regions. This paper also investigates appropriate parameters through experiments using various images and resolutions.
View full abstract
-
Rui FUKUSHIMA, Yusuke YOSHIYASU
Session ID: 1A1-G17
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper presents a target-driven visual navigation technique that can exploit long-term history for navigating an agent to a given target image. In particular, we use the Transformer architecture, which was developed in the natural language processing field and can handle long-term temporal dependencies. Experimental results showed that the use of the Transformer improves navigation performance on new target images by utilizing long-term history and also improves data efficiency, especially in large-scale environments. We also conducted an ablation study on how the number of training frames affects the navigation performance; the accuracy of the proposed method improves while that of the baseline decreases as the number of training frames increases.
View full abstract
-
Kotaro WADA, Yuichi TAZAKI, Hikaru NAGANO, Yasuyoshi YOKOKOHJI
Session ID: 1A1-G18
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a method for building a pose-graph map that uses proximity points as feature points by integrating multiple observation data. In this method, quasi-static objects, whose location or existence changes between observations, are removed from each observation, while incorrect removal of static objects is avoided, in order to improve the accuracy of localization. We confirm that the proposed method can remove quasi-static objects by mapping actually observed data. In addition, the localization accuracy with the map constructed by the proposed method is compared with that of a map built without removing quasi-static objects, to show the usefulness of the proposed method.
View full abstract
-
Toshinari TANAKA, Taiki MASUDA, Ryunosuke SAWAHASHI, Manabu OKUI, Rie ...
Session ID: 1A1-H01
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Many force feedback devices focus on the upper limbs. In the real world, however, it is not only the upper limbs that receive force feedback, so the authors consider that rendering force sensations to the lower limbs as well will further improve the sense of immersion in the VR space. Therefore, the authors attempt to develop a wearable lower-limb force feedback device that enables the user to move around in a wide area. In order to render the sensation of dropping in the VR space, the authors have been evaluating the human sensation of dropping as a first step in the device development. Based on the evaluation results, the authors determined the required specifications of the device.
View full abstract
-
Manabu OKUI, Takumi YASUI, Rie NISHIHAMA, Taro NAKAMURA
Session ID: 1A1-H02
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Various haptic devices for virtual reality applications have been developed. Most of them have to be stationary on a desk, so users cannot move around. To overcome this shortcoming, we propose a wearable force feedback device that uses an air jet. In this paper, a prototype for hand position guidance that can provide force in any direction is developed, and its performance is evaluated by experiments.
View full abstract
-
Yusuke HIGASHI, Tetsushi IKEDA, Hiroyuki TAKAI, Satoshi IWAKI
Session ID: 1A1-H03
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
While autonomous wheelchairs reduce the burden on the passenger, the automation of the operation may make it difficult for the passenger to understand the path the wheelchair will take. In this case, passengers may feel anxious or uncomfortable due to unexpected movements of the wheelchair. To reduce this anxiety and discomfort, the authors consider it important to present information to passengers about the future path of the wheelchair when multiple people approach from the front. In previous studies, only the direction of the wheelchair's turn was presented. We propose a comfortable and easy-to-understand path presentation method that conveys both the direction and the width of the wheelchair's turn by changing the length of the haptic apparent motion according to the turn width. Preliminary simulated wheelchair driving experiments have confirmed the potential of the proposed method to improve passenger comfort.
View full abstract
-
Ryunosuke SAWAHASHI, Iki MAI, Rie NISHIHAMA, Manabu OKUI, Taro Nakamur ...
Session ID: 1A1-H04
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In a virtual reality (VR) space, wearing a head-mounted display helps with the visualization of objects; however, users cannot experience realistic tactile sensations. Recently, several force feedback devices have been developed, including wearable devices that use straight-fiber-type pneumatic muscles and magnetorheological fluids. Such devices can render elastic, frictional, and viscous forces during spatially unrestricted movement. Nevertheless, two problems remain. One is that some items in the subjective evaluation, regarding the discrimination of device weight and force magnitude, received low scores. The other is the inability to handle many bilateral upper-limb movement tasks. Therefore, this study aims to develop a device that can handle movements involving interaction between both arms.
View full abstract
-
Tomoyuki FUJIWARA, Shunsuke KOMIZUNAI, Atsushi KONNO
Session ID: 1A1-H05
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In order to develop a navigation system that uses SLAM technology and airborne multi-degree-of-freedom force presentation based on pseudo-force sensations generated by partial acceleration motion, an evaluation experiment of a navigation system using a force presentation device with only two spatial degrees of freedom was conducted. In the evaluation, a user test was carried out, and the system was assessed by two measures: an objective evaluation based on the success rate and a subjective evaluation by questionnaire. As a result, the 2-DOF force presentation device used in the navigation system succeeded in presenting the expected force sensations, and the navigation system was found to have a certain degree of practicality.
View full abstract
-
Koya HIURA, Shun SUZUKI, Yasutoshi MAKINO, Hiroyuki SHINODA
Session ID: 1A1-H06
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
There have been many studies on human motion guidance using tactile information instead of visual information. Using mid-air ultrasonic haptics, we can guide the hand without visual information or physical contact. However, the conventional method has problems such as an unclear endpoint and limitations on the guiding direction. In this study, we propose a method of guiding the hand to the apex of a cone by presenting a tactile virtual cone, and we evaluate the effectiveness of the method through experiments with subjects. As a result, we were able to guide the subjects' hands in the direction of the apex of the cone.
View full abstract
-
Akinari SHIBAO, Satoshi SAGA
Session ID: 1A1-H07
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We have been studying a method of producing an immersive sound field using air jets to improve the music-listening experience. The problem with this method was that the vibration stimulation produced by the air jet was less intense than that produced by a method using a large transducer. We have therefore investigated vibration control based on the injection time and injection distance of the air jet. Here, we measure changes in user perception caused by changing the output diameter of the air jet and the stimulus presentation position.
View full abstract
-
Hiroki NISHIHARA, Toshiaki TSUJI
Session ID: 1A1-H08
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Fluid haptic rendering is expected to be applied in various fields, but it is difficult to implement in real time. This paper proposes a method that uses a feedforward neural network (FNN) and long short-term memory (LSTM) to reproduce fluid resistance. Although there were differences in the reproduced fluid flow between the two neural networks, it was confirmed that the fluid was reproduced better with the LSTM. The superiority of the LSTM was also confirmed in terms of its small error and its useful long-term memory. In addition, compared to the conventional method, the proposed method achieves a higher sampling frequency of 500 Hz, which simplifies real-time implementation.
View full abstract
-
-Amplitude Modulation and Verification of Usefulness in Generating Vibratory Stimuli with Customization Tool -
Hokuto WATARAI, Jose SALAZAR, Yasuhisa HIRATA
Session ID: 1A1-H09
Published: 2021
Released on J-STAGE: December 25, 2021
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we describe a haptic feedback system using commercially available wireless vibration units, together with a tool for automatically generating control software for these devices based on the concept of tactons. Amplitude modulation is achieved by changing the drive time of the motor in the device. We use the Reconfigurable Vibrotactile Device Creation Toolkit (ReViCT), which enables intuitive vibration design, and confirm its amplitude-modulation performance. To verify the usefulness of ReViCT and the wireless vibration devices, the same vibration stimuli are generated by subjects using direct programming with a wired motor and by subjects using ReViCT with the wireless devices, and we compare and evaluate the time required and the ease of use.
View full abstract