Precise localization is important for rovers on small planetary bodies such as asteroids and comets. Conventional localization methods are inadequate on small-body surfaces because of such a body's irregular shape and small mass. Camera images obtained on board cannot be matched against the map made by the spacecraft, because their resolutions differ too widely. Celestial navigation offers little accuracy because the gravity direction on a small body cannot be used as a reference direction. Deploying multiple navigation spacecraft around the body, as in GPS, is not realistic. In this paper, we propose a practical localization method for a rover on a small planetary body. It uses two-way range measurements between the rover and a mother spacecraft, conducted repeatedly. The method covers the whole surface of the body while requiring only one spacecraft. Numerical simulations evaluated the localization accuracy on an Itokawa-sized body whose radius is less than 1 km. We also analyzed how uncertainties in the body's rotation parameters and in the spacecraft's position affect the localization accuracy.
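The repeated two-way ranging described above can be cast as a nonlinear least-squares problem: the rover's body-fixed position is the unknown, and each range ties it to a known spacecraft position through the body's rotation. A minimal sketch, with an entirely hypothetical spin state, spacecraft trajectory, and rover position (none of them from the paper), solves it by Gauss-Newton:

```python
import numpy as np

def rot_z(theta):
    """Rotation of the body-fixed frame about its spin axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical scenario (not the paper's numbers): Itokawa-like spin period,
# a rover fixed on the surface, and a spacecraft hovering a few km away.
omega = 2.0 * np.pi / (12.13 * 3600.0)           # spin rate [rad/s]
p_true = np.array([0.3, -0.1, 0.2])              # rover, body frame [km]
times = np.linspace(0.0, 6.0 * 3600.0, 40)       # measurement epochs [s]
sc = np.array([[7.0,
                2.0 * np.sin(1e-4 * t),
                1.0 + 1.5 * np.cos(8e-5 * t)] for t in times])  # inertial [km]

# Simulated two-way range measurements (noiseless here).
ranges = np.array([np.linalg.norm(rot_z(omega * t) @ p_true - s)
                   for t, s in zip(times, sc)])

# Gauss-Newton on the rover's body-frame position.
p = np.zeros(3)
for _ in range(20):
    J, r = [], []
    for t, s, rho in zip(times, sc, ranges):
        R = rot_z(omega * t)
        d = R @ p - s
        n = np.linalg.norm(d)
        r.append(n - rho)                 # range residual
        J.append((d / n) @ R)             # d(residual)/dp
    J, r = np.array(J), np.array(r)
    p = p - np.linalg.solve(J.T @ J, J.T @ r)
```

With noiseless ranges the estimate recovers the true position almost exactly; perturbing `omega` or the spacecraft positions before the fit would mimic the rotation- and ephemeris-uncertainty analysis mentioned in the abstract.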
This paper proposes a new strategy for tying knots with a high-speed multifingered robot hand equipped with tactile and visual sensors. The strategy is divided into three skills: loop production, rope permutation, and rope pulling. Through these three skills, a knot can be tied with a single multifingered robot hand. For loop production, a wrist-joint angle control is proposed that uses visual feedback from a high-speed visual sensor. In addition, the dynamics of rope permutation are analyzed, and an effective tactile feedback control method is proposed based on the analysis. Finally, experimental results are shown using one high-speed multifingered hand with tactile and visual sensors.
We propose a new supervised learning and synthesis framework for fast, complex motor tasks on redundant robots. A statics-based task-space controller acts not only as a full-body motion control module but also as a generator of synergistic joint motion patterns for redundant systems. Similar but faster motions are synthesized incrementally by superposing the task-space controller output and stiffness around the joint trajectories replayed at a modified speed, while iteratively learning the dynamics and joint stiffness according to the L2 norm of the task-space error. We demonstrate the proposed framework by simulating a balanced fast squat on a simple humanoid model.
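The idea of superposing a learned feedforward with stiffness around a trajectory while improving over repetitions can be illustrated on a toy plant. The sketch below (a hypothetical 1-DoF damped point mass and a squat-like task trajectory, not the paper's humanoid or controller) stores each trial's total command as the next trial's feedforward, so the PD "stiffness" term shrinks as the learned term takes over and the L2 task error falls across trials:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 1.0, dt)
x_des = 0.1 * (1.0 - np.cos(2.0 * np.pi * t))   # squat-like task trajectory
v_des = np.gradient(x_des, dt)

def trial(u_ff, kp=400.0, kd=40.0, m=1.0, b=2.0):
    """One repetition: learned feedforward plus PD stiffness around the path."""
    x, v = 0.0, 0.0
    u_total = np.zeros_like(t)
    err = np.zeros_like(t)
    for i in range(len(t)):
        u = u_ff[i] + kp * (x_des[i] - x) + kd * (v_des[i] - v)
        a = (u - b * v) / m                     # damped point-mass plant
        v += a * dt
        x += v * dt
        u_total[i] = u
        err[i] = x_des[i] - x
    return u_total, err

u_ff = np.zeros_like(t)
l2 = []
for k in range(10):
    u_total, err = trial(u_ff)
    l2.append(np.sqrt(dt) * np.linalg.norm(err))  # L2 norm of task error
    u_ff = u_total          # reuse the applied command as the new feedforward
```

The gains, plant, and learning rule are illustrative assumptions; the paper's framework operates in task space on a redundant humanoid.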
We discuss the unfolding of fabrics by robotic hands. During unfolding, humans usually allow their fingertips to slip on the fabric surface, a strategy we refer to as a pinching slip motion. We define this motion using differential-geometric coordinates. In this motion, the weight of the fabric generates relative movement between the moving fingertips and the fabric, so that the edges of the fingertips come into contact with the hem of the fabric. We confirm experimentally that the success rate depends on the fingertip shape and the grasping force of the robotic hand during the motion. The distance between the fingertips can be chosen over a wide range when fingertips with circular edges are used. Finally, we demonstrate fabric unfolding experimentally.
The demand for rehabilitation robots is increasing as society ages. Power-assisting devices are considered promising for enhancing the mobility of elderly and disabled people; other potential applications include muscle rehabilitation and sports training. The main focus of this research is to control the load on selected muscles with a power-assisting device, thus enabling “pinpointed” motion support, rehabilitation, and training in which the target muscles are specified explicitly. Taking into account the physical interaction between human muscle forces and actuator driving forces during power assistance, we analyze the feasibility of this pinpointed muscle force control as a constrained optimization problem. Using our power-assisting device driven by pneumatic rubber actuators, the validity of the method is confirmed by measuring surface electromyographic (EMG) signals of the target muscles.
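The feasibility question behind such pinpointed control — can an assist torque pin a chosen muscle's load at a target level while still satisfying the joint torque balance and the actuator limit? — can be seen in a toy 1-DoF case. All moment arms, torques, and limits below are hypothetical illustrations, not the paper's model:

```python
import numpy as np

# Toy 1-DoF elbow: one flexor and one extensor with fixed moment arms.
# Torque balance:  a_f * f_f - a_e * f_e + tau_assist = tau_task,
# with muscle forces f_f, f_e >= 0 and a bounded assist torque.
a_f, a_e = 0.05, 0.04          # moment arms [m] (hypothetical)
tau_task = 5.0                 # torque the task requires [N m]
assist_max = 4.0               # actuator torque limit [N m]

def pinpoint_assist(f_target):
    """Assist torque that pins the flexor load at f_target N.

    Assumes the antagonist (extensor) stays slack; returns the required
    assist torque and whether it satisfies the constraints.
    """
    tau_a = tau_task - a_f * f_target
    feasible = (f_target >= 0.0) and (abs(tau_a) <= assist_max)
    return tau_a, feasible
```

Unassisted, the flexor must supply `tau_task / a_f` = 100 N here; asking for a flexor load of 40 N yields a feasible assist torque, while extreme targets violate the actuator limit. The paper's full formulation handles many muscles and actuators simultaneously as a constrained optimization.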
Mimesis is the hypothesis that human intelligence originated in interactive communication, i.e., motion recognition and generation through imitation; it is an attractive hypothesis for artificial intelligence. We have developed a mimesis system using Hidden Markov Models (HMMs), whose parameter sets are defined as proto-symbols. In conventional systems, designers have to segment a motion pattern out of sequential motion data before embedding it in an HMM. However, the ability to segment motion patterns is necessary for the system to learn and develop autonomously through imitation. In this paper, we propose a motion segmentation method that consists of three phases. In the first phase, short sequences of motion are encoded. In the second phase, the correlation matrix of the encoded sequences is computed. In the third phase, motion patterns are segmented based on the error between the observed encoded sequences and those predicted from the correlation matrix. Moreover, we show that proto-symbols can be acquired by providing the mimesis system with the segmented motion patterns.
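The three-phase idea — encode short windows, fit a correlation-based predictor, segment where the prediction error peaks — can be sketched without the HMM machinery. Below, hypothetical 1-D motion data with two regimes is encoded as sliding windows, and a linear predictor fitted by least squares (i.e., from the windows' correlation matrices via the normal equations) flags the regime boundary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1-D motion data: two regimes (slow and fast oscillation)
# concatenated at index 300, plus a little sensor noise.
x = np.concatenate([np.sin(0.1 * np.arange(300)),
                    np.sin(0.5 * np.arange(300))])
x = x + 0.01 * rng.standard_normal(x.size)

w = 10                                        # short-window encoding length
X = np.array([x[i:i + w] for i in range(x.size - w)])

# Phase 2: linear predictor of the window w steps ahead, fitted by least
# squares over all encoded windows.
A, *_ = np.linalg.lstsq(X[:-w], X[w:], rcond=None)

# Phase 3: segment where the prediction error peaks; windows straddling the
# regime change cannot be predicted by the within-regime model.
err = np.linalg.norm(X[w:] - X[:-w] @ A, axis=1)
boundary = int(np.argmax(err)) + w            # estimated boundary index in x
```

This is only a linear stand-in for the paper's HMM-based encoding; the window length, data, and predictor are illustrative assumptions.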
In human-robot interaction, communication robots must consider interaction with groups of people in real environments such as stations and museums. To interact with a group, it is important to estimate whether the group's state is suitable for the robot's intended task. This paper presents a method that estimates the state of a group of people for interaction with a communication robot, focusing on the positional relationships among clusters of people and between those clusters and the robot. The proposed method extracts feature vectors from these positional relationships and then estimates the group state with a Support Vector Machine trained on the extracted feature vectors. We investigated the performance of the proposed method in a field experiment, in which it achieved an 81.4% correct estimation rate for the group state. We believe these results will allow us to develop interactive humanoid robots that can interact effectively with groups of people.
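As an illustration of extracting feature vectors from positional relationships, the function below computes a few plausible scalar features from 2-D positions of a cluster of people and the robot; the actual features and their number in the paper may differ. Such vectors would then be fed to an SVM classifier:

```python
import numpy as np

def group_features(people_xy, robot_xy):
    """Feature vector from 2-D positions of a cluster of people and the robot.

    The particular features here (distances, spread, group size) are
    hypothetical examples, not the paper's feature set.
    """
    people = np.asarray(people_xy, dtype=float)
    robot = np.asarray(robot_xy, dtype=float)
    centroid = people.mean(axis=0)
    spread = np.linalg.norm(people - centroid, axis=1)   # within-cluster radii
    to_robot = np.linalg.norm(people - robot, axis=1)    # person-robot ranges
    return np.array([
        np.linalg.norm(centroid - robot),   # cluster-to-robot distance
        spread.mean(), spread.max(),        # cluster extent
        to_robot.min(), to_robot.mean(),    # closest and average approach
        float(len(people)),                 # group size
    ])
```

For example, two people at (0, 0) and (2, 0) with the robot at (1, 0) yield a zero centroid-to-robot distance and unit spread and approach distances.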
With the development of humanoid robots, applications in human living environments are expected. For accurate and safe control of a robot, knowledge of its inertial parameters is important. However, these parameters are usually not provided by manufacturers, so identification is an essential step. Conventional identification methods require joint torque information, which is difficult to measure on humanoid robots. This paper proposes a method to estimate a humanoid robot's inertial parameters based on the dynamics of the base link. Only the generalized coordinates of the base link, the joint angles, and the external force information are required. This paper also presents a symbolic proof of the identifiability of the proposed method. The method has been tested on a small-size humanoid robot, and experimental results are given.
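To see why external-wrench and kinematic measurements alone can identify inertial parameters, consider the simplest case: a single rigid body held in several known static orientations while the reaction wrench at its base origin is measured. The gravity wrench is linear in the unknowns m and m·c (mass times center of mass), so least squares recovers both. The scenario and numbers are illustrative only; the paper's method handles the full dynamic, multi-link case via the base-link equations:

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.array([0.0, 0.0, -9.81])

def skew(v):
    """Cross-product matrix: skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rand_rot():
    """Random proper rotation via QR decomposition."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.linalg.det(q))

# Hypothetical ground truth to be identified.
m_true = 12.0
c_true = np.array([0.02, -0.01, 0.15])     # CoM in the base frame [m]

# Static postures: the measured reaction wrench at the base origin is
#   F = -m g,   M = (R c) x (-m g) = skew(g) @ R @ (m c),
# which is linear in the unknowns theta = [m, m*c].
A_rows, b_rows = [], []
for _ in range(8):
    R = rand_rot()
    F = -m_true * g
    M = skew(g) @ R @ (m_true * c_true)
    A_rows.append(np.hstack([-g[:, None], np.zeros((3, 3))]))   # force rows
    A_rows.append(np.hstack([np.zeros((3, 1)), skew(g) @ R]))   # moment rows
    b_rows.extend([F, M])

theta, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows),
                            rcond=None)
m_hat, c_hat = theta[0], theta[1:] / theta[0]
```

Each static posture contributes six linear equations; multiple orientations make the moment rows span all three components of m·c, which is the same identifiability logic the paper proves symbolically for its base-link formulation.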