This paper investigates the contribution of thumb sensation to the precision of human pinching force control when a human cooperatively pinches an object with an exoskeleton that augments the pinching force based on surface electromyography. The human hand should contact the grasped object directly while the exoskeleton pushes on the hand to support grasping, because tactile feedback from the hand is essential for stable and dexterous object handling. However, the exoskeleton may crush the human fingers with excessive force. The exoskeleton therefore distributes the assistive force to the pinched object along two routes: directly, and through the human finger. In our previous study, the accuracy of the pinching force improved when the distribution factor of the index finger assistance was kept constant. This paper reports the improvement in pinching force control when a fixed ratio of pinching force assistance is applied to both the index finger and the thumb. In addition, we reveal the contribution of thumb sensation.
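The two-route force distribution described above can be sketched as a simple split by a fixed ratio; the function name and parameter below are illustrative assumptions, not the paper's actual controller.

```python
def distribute_assist(f_assist, r_finger=0.5):
    """Split the exoskeleton's total assistive force f_assist between the
    route through the human finger (fixed ratio r_finger, 0..1) and the
    direct route to the pinched object.

    Note: r_finger is a hypothetical parameter standing in for the fixed
    distribution factor studied in the paper.
    """
    f_through_finger = r_finger * f_assist
    f_direct = f_assist - f_through_finger
    return f_through_finger, f_direct
```

Keeping `r_finger` constant means the force felt through the fingertip remains proportional to the total assistance, which is what preserves usable tactile feedback.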
Humanoid robots should be able to handle human bodies independently. In previous studies, robots manipulated human bodies using brute force, but this does not accurately reflect human action. With Yoshinori Kouno, who possesses several skills related to physical exertion, including martial arts and nursing-care motions, as the subject, we studied these motions and analyzed the techniques that were superior in terms of body dynamics. In this study, we focused on the human pushing action and aimed to understand the underlying body dynamics. The zero moment point (ZMP) trajectories of the actor varied around the center of gravity (COG), and the actor used falling momentum to push the recipient's body. Kouno reported that the position of his fingers enabled him to control his arm movements; hence, we analyzed the electromyogram signals of the whole body. Analysis of the recipient's resistance based on the ZMP and COG suggests that the recipient can predict the actor's attack and brace him/herself for the impact. Thus, the actor needs to confound the recipient's prediction to realize a human pushing motion with less effort. The falling momentum strategy and the change in the arm's force play important roles in confusing this prediction. We confirmed the effectiveness of these body dynamics using numerical simulation. Finally, we conclude that the "knack" of the pushing motion is due to a special pose of the fingers.
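The ZMP used in the analysis above is commonly computed from the COG state of a point-mass model; the following is a minimal sketch of that textbook relation (sagittal axis only), not the paper's whole-body analysis.

```python
def zmp_x(x_com, z_com, xddot, zddot, g=9.81):
    """Sagittal zero moment point of a point-mass model.

    x_com, z_com : horizontal and vertical COG position (m)
    xddot, zddot : COG accelerations (m/s^2)

    The ZMP is the ground point where the moments of gravity and the
    inertial force cancel: x_zmp = x_com - z_com * xddot / (zddot + g).
    """
    return x_com - (z_com * xddot) / (zddot + g)
```

A forward COG acceleration shifts the ZMP behind the COG projection, which is how "falling momentum" shows up in the actor's ZMP trajectory.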
Generating natural motion in robots is important for improving human-robot interaction. We aim to develop a tele-operation system in which the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we propose a lip motion generation method in which lip-height and lip-width degrees are estimated from formant information extracted from the speech signal. The method requires the calibration of only one parameter for speaker normalization, so there is no need for the prior construction of user-dependent acoustic models. Lip height control was evaluated on two types of humanoid robots (Geminoid-F and Telenoid-R2). Subjective evaluation indicated that the proposed audio-based method generates lip motion with naturalness superior to vision-based and motion-capture-based approaches. Partial lip width control was shown to further improve lip motion naturalness on Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding the synchronization of audio and motion streams and online real-time processing are also discussed.
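The idea of mapping formant information to a lip-height degree can be sketched as a linear normalization of the first formant F1 (higher F1 roughly corresponds to a more open mouth). The function and the F1 bounds below are assumptions for illustration, not the paper's actual mapping.

```python
def lip_height(f1_hz, f1_min=250.0, f1_max=850.0):
    """Map first formant F1 (Hz) to a normalized lip-opening degree in [0, 1].

    f1_max acts here as the single per-speaker calibration parameter
    (hypothetical stand-in for the paper's speaker normalization); values
    outside the calibrated range are clamped.
    """
    degree = (f1_hz - f1_min) / (f1_max - f1_min)
    return min(max(degree, 0.0), 1.0)
```

The robot's lip actuator command can then be obtained by scaling this degree to the actuator's movable range.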
This paper presents a user interface for manipulating structured furniture and electric equipment based on a view-based multi-touch gesture interface and a demonstrational mechanism for action candidates. The contributions of this paper are summarized as follows: 1) We define multi-touch gestures for pull and rotate manipulations by the robot. 2) We propose a demonstrational feedback mechanism for daily environment manipulation. 3) To obtain the 3D point that corresponds to the point the user touches on the interface, we present a method for estimating 3D points from screen points in the robot's view images. A prototype system has been implemented using the iPad browser; we evaluated it in our office and kitchen environments and report preliminary results of a usability assessment.
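Estimating a 3D point from a screen point in a robot view image is typically done by back-projecting the pixel through a pinhole camera model when depth is available; the sketch below shows that standard relation under assumed intrinsics, not the paper's specific estimation method.

```python
def screen_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project screen point (u, v) with known depth (m) into the
    camera frame, using pinhole intrinsics:
    focal lengths (fx, fy) and principal point (cx, cy) in pixels.
    Returns (x, y, z) in the camera coordinate frame.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

With the camera's pose known from the robot's kinematics, the returned camera-frame point can be transformed into the world frame to command the manipulation.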
It is preferable that robots have a soft outer cover and exert less impact force when in contact with humans or their surroundings. Although self-protection is one of the most important functions and is inevitably required for a robust robot architecture, simply introducing thick "flesh" to a robot tends to impede its movability and generality. In this paper, we realize a soft and thick outer cover through multilayered, distributed softness, and introduce joint protection by means of dislocation and an automatic recovery function. Finally, we realize various contact behaviors with a robot equipped with the distributed soft cover and joint dislocation ability.
This paper describes the development of a new expressive robotic head for a bipedal humanoid robot. To obtain representative facial expressions, the authors asked cartoonists to create computer graphics (CG) images. Realizing the most representative facial expressions from the CGs required a facial expression mechanism with 24 degrees of freedom (DoFs). Some of the CGs also required a wide movable range of the face parts, as well as the ability to control the color of the face. We designed compact mechanisms and mounted them in an outer case no bigger than the head of an average adult Japanese female. We conducted a questionnaire with pictures and videos to evaluate the facial expression ability. The results showed that the recognition rates for the six basic emotions significantly increased with the new design.
It has been clarified that underactuated dynamic bipedal walkers with free ankles can generate fast-convergent level gaits by achieving a constraint on the impact posture. This paper identifies the condition for achieving deadbeat gait generation through mathematical investigation of simple rimless spoked walkers and proposes a method for generating fast-convergent gaits based on the obtained results. First, we simplify the transition function for the state error of the stance phase of a passive rimless wheel by eliminating the steady step period. Second, we consider an active rimless wheel that generates ballistic gaits on level ground and show that the transition function of the stance phase can be specified in the same manner as in the passive case. Based on these theoretical results, we design a control law for achieving deadbeat gait generation and investigate its validity through numerical simulations. The physical meaning of the derived transition function is also discussed. Furthermore, we extend the method to an underactuated rimless wheel with a torso by applying discrete-time output deadbeat control. Through mathematical and numerical investigations, it is clarified that a driving effect that changes from acceleration to deceleration tends to achieve deadbeat gait generation.
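The stance-phase transition function of a passive rimless wheel can be illustrated by its classical step-to-step return map; the following is a minimal sketch under standard point-mass assumptions (spoke length `l`, spoke half-angle `alpha`, slope angle `gamma`), not the paper's derived transition function or deadbeat control law.

```python
import math

def step_map(omega_plus, l=1.0, alpha=math.pi/8, gamma=0.08, g=9.81):
    """Map the post-impact angular velocity of one step to the next.

    Stance phase: energy conservation while rotating down the slope adds
    (4g/l)*sin(alpha)*sin(gamma) to omega^2. Impact: an instantaneous
    inelastic collision scales angular velocity by cos(2*alpha).
    """
    omega_minus_sq = omega_plus**2 + (4 * g / l) * math.sin(alpha) * math.sin(gamma)
    return math.cos(2 * alpha) * math.sqrt(omega_minus_sq)
```

Because the map is affine in omega^2 with slope cos^2(2*alpha) < 1, iterating it converges geometrically to the steady gait; the paper's deadbeat approach instead shapes this transition so the state error vanishes in finite steps.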