With the development of VR technology and head-mounted displays (HMDs), many motion platforms (MPs) have been developed. However, it is sometimes difficult to map an MP's movement in real space to movement in the virtual space. In this study, we examined the feasibility of an MP based on an electric wheelchair that allows passengers to move freely in a virtual space while minimizing discomfort. The experiment confirmed that, by using continuous rotation, an experience equivalent to moving through a large space can be obtained even in a restricted real space.
It is important to estimate objects' poses accurately when a robot manipulates them. However, the accuracy of a pose estimator can decrease significantly when it is applied on an actual robot, because the robot's viewpoint may provide inadequate observations. This paper proposes efficient view planning based on inferring the object's pose in a virtual world to improve estimation accuracy. The proposed method was evaluated in both simulated and real environments. The experimental results show that it reduces the estimation error and achieves high accuracy in grasping tasks.
We experimentally verified efficient, wide-area 3D map building, and its accuracy, using a vehicle equipped with RTK-GNSS, an IMU, and LiDAR. With the shift to ICT, 3D survey maps are used at construction sites; however, surveying work generally requires considerable time and manpower. Our map-building system therefore acquires sensor data while the vehicle is driving. Afterward, based on highly accurate 3D self-localization, the 3D point data are transformed into map coordinates to build the 3D map. The experimental results demonstrated the practicality of our system by comparison with conventional survey maps.
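The coordinate-transformation step above (sensor-frame points into a map frame using the self-localization pose) can be sketched as follows. This is a minimal illustration, not the system's actual implementation: it assumes a heading-only (yaw) rotation and made-up pose values.

```python
import numpy as np

def yaw_rotation(yaw):
    """Rotation matrix about the z-axis (heading only, for brevity)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_global(points_local, position, yaw):
    """Transform LiDAR points (N x 3, sensor frame) into the map frame
    using a vehicle pose such as one estimated from RTK-GNSS/IMU.
    (Pose values here are illustrative assumptions.)"""
    R = yaw_rotation(yaw)
    return points_local @ R.T + position

scan = np.array([[1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
cloud = to_global(scan, position=np.array([10.0, 5.0, 0.0]), yaw=np.pi / 2)
```

A full pipeline would use the complete roll-pitch-yaw attitude from the IMU and apply one such transform per scan before accumulating the map.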
To inspect the inside of a gas pipe without excavation, we developed a pneumatically driven in-pipe robot that can travel deep into gas piping. When the robot tries to enter the deep part of a gas line, the tension of the air-supply tube increases and the robot can no longer advance. To solve this problem, we developed an elastic, flexible actuator with a strong traction force. We increased the traction force to 54[N] by using a reverse-wound mesh spring that suppresses the robot's rotational movement. In an evaluation experiment, we confirmed that the robot could travel to a depth of 50[m] in the gas piping.
An improved flexible electrostatic adhesive device (FEAD) that increases the adhesive force on cardboard was verified experimentally, aiming to realize the grasping of cardboard boxes used in logistics. By improving the double-layer electrode and the micro-embossed structure of the adhesive surface, the adhesive force became three times that of the previously developed FEAD, and the device succeeded in lifting a cardboard box weighing 4[kg]. Furthermore, the possibility of grasping a cardboard box weighing 10[kg] was confirmed by increasing the electrode area by a factor of 2.5. The experimental results showed the potential of FEAD for material-handling applications.
This paper addresses the reconstruction of visual scenes based on echolocation, aiming to develop auditory scene understanding for robots and systems. Although scene understanding with cameras and LIDAR has been studied extensively, it is sensitive to changes in lighting conditions and has difficulty detecting invisible materials. Ultrasonic sensors are widely used, but their use is limited to distance estimation, and there is an unavoidable risk of ultrasonic exposure since most of the ultrasonic power lies in inaudible frequency ranges. To solve these problems, we propose a framework for echolocation-based scene reconstruction (ELSR). ELSR reconstructs a visual scene from the transmitted/received audible sound, exploiting a Generative Adversarial Network (GAN) to learn the translation from input sound to a visual scene. As GANs were originally designed for image input, we carefully considered the differences between image and sound input and propose introducing cross-correlation and trigonometric-function-based features into the input audio features. The proposed framework is implemented on top of pix2pix, a kind of conditional GAN, and a dataset for ELSR consisting of 10,800 pairs of input sound and depth images, recorded at 28 indoor locations, was newly created. Experimental results on the dataset showed the effectiveness of the proposed ELSR framework and audio features.
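The kind of audio features described above, a cross-correlation between the emitted and recorded sound plus trigonometric encodings, can be sketched roughly as follows. The signal lengths, normalization, and feature layout here are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def echo_features(transmitted, received):
    """Cross-correlation between the emitted sound and the recorded echo,
    plus sin/cos (trigonometric) encodings of the lag axis, as an
    illustrative input representation for a pix2pix-style network."""
    # Full cross-correlation; the peak lag relates to the echo's time of flight.
    xcorr = np.correlate(received, transmitted, mode="full")
    xcorr = xcorr / (np.max(np.abs(xcorr)) + 1e-12)  # normalize to [-1, 1]
    lags = np.arange(len(xcorr)) / len(xcorr)        # normalized lag axis
    # Trigonometric positional features of the lag axis.
    return np.stack([xcorr,
                     np.sin(2 * np.pi * lags),
                     np.cos(2 * np.pi * lags)])

# Toy example: a 440 Hz tone at 16 kHz, with the "echo" delayed by 32 samples.
tx = np.sin(2 * np.pi * 440 * np.arange(0, 0.01, 1 / 16000))
rx = np.concatenate([np.zeros(32), 0.5 * tx])
features = echo_features(tx, rx)  # shape: (3, len(tx) + len(rx) - 1)
```

In the actual framework such features would be arranged as image-like channels before being fed to the conditional GAN.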
In this paper, a new force controller that connects admittance and impedance controllers in series is verified experimentally. The serially combined admittance-impedance controller has previously been proposed, but its effectiveness has been demonstrated only through numerical simulations; its practical use has not yet been shown. First, the control law of the proposed force controller is presented and compared with conventional force controllers. Next, to test the proposed method in the simplest case, a 1-DoF experimental setup is introduced, and experiments are conducted to compare the performance of the proposed method with existing methods under several conditions. Finally, its practical usefulness is demonstrated through these experimental results.
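The general idea of such a serial connection, not the paper's actual control law, can be sketched in 1 DoF: an admittance stage integrates the force error into a motion reference, and an impedance stage converts the tracking error into a command force. All gains, the unit-mass plant, and the spring environment below are assumptions for illustration.

```python
def simulate(f_des=1.0, steps=5000, dt=1e-3):
    """1-DoF serial admittance-impedance sketch (illustrative gains only)."""
    M_a, D_a = 1.0, 20.0   # admittance stage: virtual mass and damper
    K_i, D_i = 400.0, 40.0  # impedance stage: virtual stiffness and damper
    k_env = 100.0           # environment modeled as a linear spring
    x = v = x_ref = v_ref = 0.0
    for _ in range(steps):
        f_ext = k_env * x                           # measured contact force
        # Admittance stage: force error -> reference motion
        a_ref = (f_des - f_ext - D_a * v_ref) / M_a
        v_ref += a_ref * dt
        x_ref += v_ref * dt
        # Impedance stage: tracking error -> command force
        f_cmd = K_i * (x_ref - x) + D_i * (v_ref - v)
        # Unit-mass plant pushed by the command force against the spring
        v += (f_cmd - f_ext) * dt
        x += v * dt
    return k_env * x  # steady-state contact force

final_force = simulate()  # converges toward f_des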