As social robot research advances, the interaction distance between people and robots is decreasing. Indeed, although we were once required to maintain a certain physical distance from traditional industrial robots for safety, we can now interact with social robots at such a close distance that we can touch them. The physical existence of social robots will be essential to realizing natural and acceptable interactions with people in daily environments. Because social robots function in our daily environments, we must design scenarios in which robots interact closely with humans by considering various viewpoints. Interactions that involve touching a robot strongly influence a person's behavior. Therefore, robotics researchers and developers need to design such scenarios carefully. Based on these considerations, this special issue focuses on close human-robot interactions.
This special issue on “Human-Robot Interaction in Close Distance” includes a review paper and 11 other interesting papers covering various topics such as social touch interactions, non-verbal behavior design for touch interactions, child-robot interactions including physical contact, conversations with physical interactions, motion copying systems, and mobile human-robot interactions. We thank all the authors and reviewers of the papers and hope this special issue will help readers better understand human-robot interaction in close distance.
It is well known that, in human communication, physical contact, such as holding hands, has the effect of relieving stress and providing a sense of intimacy. In this study, we verified whether walking hand-in-hand has a positive effect on relationship building between children and robots. Specifically, an interaction experiment was performed with 37 children aged 5–6 years, in which each child played one-on-one with a robot for approximately 30 min. The robot was teleoperated by a nursery teacher. The children were divided into two groups: an experimental group, in which children walked hand-in-hand with the robot during their first encounter, and a control group, in which children had no physical contact with the robot. The change in the interaction was analyzed in terms of the distance between the child and the robot, the eye contact rate, and a questionnaire completed by the parents and children. The results reveal that the children in the experimental group interacted with the robot significantly more closely. Moreover, the parents of the children in the experimental group tended to feel that their children experienced intimacy with the robot. These results suggest that walking hand-in-hand has a positive effect on child-robot relationship building.
Based on the Little Big Five Inventory, we developed a technique to estimate children's personalities through their interaction with a teleoperated childcare robot. For personality estimation, our approach observed not only distance-based but also face-image-based features while the robot interacted with a child at close distance. We used only the robot's sensors to track the child's position, detect the child's eye contact, and estimate how much the child smiled. We collected data at a kindergarten, where each child individually interacted for 30 min with a robot controlled by the teachers. We used 29 datasets of child-robot interaction to investigate whether the face-image-based features improved the performance of personality estimation. The evaluation results demonstrated that the face-image-based features significantly improved the performance of personality estimation, and that the accuracy of our system was 70% on average across the personality scales.
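As a rough illustration of the kind of estimator such behavioral features could feed (a sketch under stated assumptions, not the paper's actual method), the following fits a closed-form ridge regression from three hypothetical features — mean child-robot distance, eye-contact rate, and mean smile intensity — to one personality scale, using synthetic data of the same size as the study (29 sessions):

```python
import numpy as np

# Hypothetical features per child-robot session (assumptions, not the
# paper's exact feature set): mean distance to the robot, eye-contact
# rate, and mean smile intensity estimated from face images.
rng = np.random.default_rng(0)
X = rng.uniform(size=(29, 3))                 # 29 sessions x 3 features
true_w = np.array([-1.0, 2.0, 1.5])           # synthetic ground-truth weights
y = X @ true_w + 0.05 * rng.normal(size=29)   # one noisy personality scale

def ridge_fit(X, y, lam=1e-3):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y)       # recovered weights, close to true_w
pred = X @ w              # estimated personality-scale values
```

With real data, the 29 feature vectors would come from the robot's sensors and the targets from the Little Big Five Inventory scores; cross-validation would replace this synthetic check.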
The purpose of this paper is to provide an overview of recent research on android robotic media, with a focus on its effects on older adults, and to discuss the implications of the experimental results. Social isolation of older adults is a leading issue in healthcare. Patients with dementia experience symptoms, such as agitation, that can increase the care burden. Android robotic media have been shown to provide a feeling of safety and communication support to older adults. In previous case studies, an increase in prosocial behaviors was observed in participants with dementia; however, the media effects needed to be measured with assessment scales. The current results indicate that robotic media affect dementia symptoms, especially by decreasing patients' anxiety. As demonstrated in another experiment, anxiety reduction can also be expected in healthy older adults; however, certain conditions may be required for both healthy older adults and those with dementia. Key factors for the media effect, namely dementia type, the user's personality, and personalized dialogue, are taken into consideration for the further development of robotic systems. Additionally, we discuss the significance of long-term data collection, especially from early life stages, because teleoperated and autonomous systems are expected to utilize information that can influence the effectiveness of robotic media.
This study addresses the effects of a robot’s awareness and its subtle reactions on the perceived feelings of people who touch the robot. When another person unexpectedly touches us, we subtly and involuntarily react. Because such reactions are involuntary, humans cannot suppress them. However, intentionally using them in robots might positively affect people’s perceptions, particularly when a robot has a human-like appearance that evokes human-like reactions. We investigated the relationship between subtle reactions and awareness of a human’s presence, i.e., whether a robot’s awareness and its subtle reactions influence people’s impressions of the robot when they touch it. Our experimental results with 20 participants and an android with a female-like appearance showed a significant interaction between awareness and subtle reactions. When the robot did not show awareness, its subtle reaction increased the perceived human-likeness. Moreover, when the robot did not show subtle reactions, showing awareness beforehand increased the perceived human-likeness.
This study investigates the effects of touch characteristics on the intimacy perceived by humans in touch interaction with an android robot having a human-like feminine appearance. Past studies on human-robot touch interaction focused on understanding which types of human touches are used to express emotions to robots. However, they focused less on how a robot’s touch characteristics can affect humans’ perceived intimacy. In this study, we first concentrated on two touch characteristics (type and place) and their effects on the perceived intimacy of an emotion commonly used in human-robot interaction, namely happiness. The results showed that the touch type is useful for changing the perceived intimacy, although the touched place did not exhibit any significant effects. Based on the results of our first experiment, we then investigated the effects of different touch characteristics (length and part). We concluded that the touch part is useful for changing the perceived intimacy, although the touch length did not exhibit any significant effects. Finally, the results suggested that a pat (type) by the fingers (part) is a better combination for expressing intimacy with our robot.
Owing to manpower shortages, robots are expected to be increasingly integrated into society in the future. Moreover, robots will be required to navigate through crowded environments. Thus, we propose a new method of autonomous movement compatible with the physical contact signaling used by humans. A previously investigated contact method used an arm with six degrees of freedom (DoF), which increases the cost of the robot. In this paper, we propose a novel method of navigating through a human crowd by using a conventional driving system for autonomous mobile robots and an involute-shaped hand with a one-DoF arm. Finally, the effectiveness of the method was confirmed experimentally.
This paper reports the effects of communication cues on robot-initiated touch interactions at close distance, focusing on two factors: gaze-height for making eye contact, and speech timing before and after touches. Although both factors are essential for achieving acceptable touches in human-human touch interaction, their effectiveness remains unknown in human-robot touch interaction contexts. To investigate the effects of these factors, we conducted an experiment whose results showed that being touched with before-touch speech timing is preferred to being touched with after-touch timing, although gaze-height did not significantly improve impressions of robot-initiated touch.
Although current communication media facilitate interactions among individuals, researchers have warned that human relationships built through these media tend to lack the level of intimacy acquired through face-to-face communication. In this study, we investigate how the long-term use of humanlike communication media affects the development of intimate relationships between human users. We examine changes in the relationship between individuals while they converse with each other through humanlike communication media or mobile phones for approximately a month. The intimacy of their relationship was evaluated using the amount of self-disclosed personal information. The results show that a significantly greater amount of self-disclosure is made through a communication medium with a humanlike appearance and soft material than through a typical mobile phone. The amount of self-disclosure showed cyclic variation over the experiment with the humanlike communication media. Furthermore, we discuss a possible underlying mechanism of this effect: a feeling of intimacy evoked by the close physical distance to the medium may be misattributed to the conversation partner.
This paper focuses on “play-biting” as a touch communication method for robots. We investigated an appropriate play-biting behavior and its effect on interaction. Touching actions have positive effects in human-robot interactions. However, as biting is a defenseless act, it may also cause negative effects. Therefore, we first examined the biting manner and the appearance of the robot using a virtual play-biting system in Experiment 1. Next, based on the results of Experiment 1, the play-biting system was implemented in a stuffed animal robot. We verified the impressions created by the robot and its effect on mitigating stress in Experiment 2. Consequently, play-biting communication gave a positive and lively impression and reduced a physiological index of stress, in comparison with only touching the robot.
When a robot works among people in a public space, its behavior can make some people feel uncomfortable. One reason is that it is difficult for people to understand the robot’s intended behavior from its appearance. This paper presents a new intention expression method using a three-dimensional computer graphics (3D CG) face model. The 3D CG face model is displayed on a flat panel screen and has two eyes and a head that can be rotated freely. When the mobile robot is about to change its traveling direction, it rotates its head and eyes in the direction it intends to go, so that an oncoming person can infer the robot’s intention from this advance announcement. Three main types of experiments were conducted to confirm the validity and effectiveness of the proposed advance announcement method using the face interface. First, an appropriate timing for the advance announcement was determined from impression evaluations in a preliminary experiment. Second, differences between two conditions, in which a pedestrian and the robot passed each other in a corridor with and without the advance announcement, were evaluated as the main experiments of this study. Finally, differences between the proposed face interface and a conventional robot head were analyzed in a reference experiment. The experimental results confirmed the validity and effectiveness of the proposed method.
With the remarkable development of related technologies, the number of robots has been gradually increasing, and their presence is becoming much more familiar in our daily lives. The motion copying system (MCS) is a method for having robots perform tasks. This system enables tasks to be reproduced when the environmental conditions are unchanged; however, task reproduction performance degrades when environmental variations occur, so human-like adaptable motion is expected to be developed in the MCS. This study reveals the dominant element of motion, and the control strategy is varied over time for each axis in consideration of task realization. The flexibility of motion is learned from both the operator and the task implementation. Task reproduction experiments using the MCS were conducted to verify the effectiveness of the proposal.
In human-human interaction, social touch provides several benefits, from both physical and mental perspectives. The physical existence of robots helps them reproduce human-like social touch during their interactions with people. Such social touch shows positive effects similar to those observed in human-human interaction. Therefore, social touch is a growing research topic in the field of human-robot interaction. This survey provides an overview of the work conducted so far on this topic.
Many wearable devices have been developed and are currently in use, owing to the miniaturization of computers and electronic devices and advancements in processing algorithms. They have various uses and forms: for example, power assist robots for reducing the burden of work, and wearable sensors for measuring the activity level and health condition of people and animals. In Japan, wearable devices have attracted attention as an important technology for a human-centered society (Society 5.0) that can help realize economic development and address social problems. A society that can benefit from a wide range of wearable devices is being realized. This special issue covers robotics and mechatronics technologies for next-generation wearable devices to realize such a society, including wearable systems and their elemental technologies, AI, IoT, and other related technologies.
We sincerely thank the authors for their fine contributions and the reviewers for their generous time and effort. We would also like to thank the Editorial Board of the Journal of Robotics and Mechatronics for their help with this special issue.
During overhead work, workers must keep raising weights of approximately 2 to 4 kg with the muscular strength of their upper limbs, and the burden of this work is high. Therefore, we developed an assistive device, named TasKi, that uses a self-weight compensation mechanism to reduce the burden on the upper limbs during overhead work. It can compensate for the weight of the upper limbs using the force of a spring in various upper-limb postures, without a battery. In this study, to provide effective assistance to many users, we clarified the required assistive force and the parameter adjustment ranges of the settings corresponding to physical differences among users. First, the assistive force of TasKi needed to reduce the work burden of each user was confirmed via a subjective evaluation experiment and myoelectric potential measurements. Next, we conducted a survey of TasKi users and investigated the relationship between physique and wearing comfort. According to the survey, 80% of the subjects gave favorable opinions on the assistive method used by TasKi. Finally, we had subjects of various physiques wear the device and investigated the relationship between physique and wearing comfort with respect to shoulder joint movements. Subjects with greater shoulder widths experienced difficulty moving in the direction of internal-external rotation because of the small size of TasKi. The influence on ease of motion and perceived size was smaller in the flexion-extension and adduction-abduction directions.
This paper describes the development of an angle-sensorless exoskeleton with a tap-water-driven artificial muscle actuator. The artificial muscle actuator consists of an elastic rubber tube reinforced by braided fiber. Such actuators are highly flexible, lightweight, and water-resistant, and thus are inherently safe even for operations in direct contact with humans. An estimation system for the displacement of the artificial muscle actuator, based on the water flow rates detected by flowmeters, was constructed for the water-hydraulic exoskeleton. In addition, estimators of the velocity and acceleration of the actuator, based on the estimated displacement and the measured flow rates, were derived and incorporated into the estimation system. With this system, our previous wearable upper-limb assistive exoskeleton prototype was converted into an angle-sensorless version with higher safety in wet conditions. Its assistive performance was evaluated through experiments with research participants. Experimental results demonstrated that muscle activity could be reduced, even though the assistive control strategy was executed using only the estimated variables, without force measurement.
This paper proposes a close-fitting assistive suit, called e.z.UP®, with a passive actuation mechanism composed of an adjustable structure. The suit can adequately assist the back and arm muscles of a user with the proposed layout of an arm assistive belt and a two-layer structure, respectively. With its light weight (only 0.75 kg), the proposed suit is portable and easy to wear without additional burden. Using averaged Japanese body data, a simulation was conducted with a human body model wearing the proposed suit to evaluate the layout of the arm assistive belt. The simulation results show that the proposed suit can adequately assist the user’s arm muscles according to the user’s motion. An experiment measuring muscle activities was also conducted with seven young subjects and seven middle-aged subjects to evaluate the arm assistive belt and the two-layer structure. The experimental results reveal that the proposed suit can successfully and appropriately assist both the arm and back muscles simultaneously.
In this paper, we propose a wearable robot arm designed with consideration of weight and usability. Based on the features of existing wearable robot arms, we focused on the issues of weight and usability. The behavior of human hands during physical work can be divided into two phases: in the first, the shoulder and elbow joints move before the task begins with the hands; in the second, the wrist joints move during the actual work. We found that these features can be applied to wearable robot arms. Consequently, we propose a hybrid actuation system (HAS) that combines two types of joints. In this study, the HAS was implemented in a prototype wearable robot arm, the assist-oriented arm (AOA). To verify the validity of the proposed system, we implemented three types of robot arms (PasAct, Act, and 6DOF) in simulation to compare weight, working efficiency, and usability. Furthermore, we compared these simulation models with the AOA for evaluation.
This study proposes a motion-assist arm that can accurately support the positioning of a human upper limb. The motion-assist arm is a three-degree-of-freedom (DOF) planar under-actuated robotic arm with a 1-DOF passive joint that can be driven by a human. The control method for the robot arm is as follows. First, when the human manually moves an output point of the arm, the passive joint rotates with the movement of the output point. Then, for accurate positioning of the output point on a target path, the actuated joints are controlled according to the displacement of the passive joint. With this method, the human can deliberately adjust the velocity of the output point while its position is accurately corrected by the actuated joints. To confirm its effectiveness, the authors conducted tests with the proposed robot arm prototype to assist the human’s upper limb movement along straight target paths, a square path, and free-curve paths such as italic letters. The test results confirmed that the proposed robot arm can accurately position the human’s upper limb on the target paths while the human intentionally moves the upper limb. The proposed arm is expected to be used for rehabilitation because it can help patients move their arms correctly. In addition, the proposed arm will enable any human to perform complex tasks easily.
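The control idea described above — let the human drive the passive joint while the actuated joints pull the output point back onto the target path — can be sketched as follows. The link lengths, the straight target path y = 0, and the damped least-squares correction step are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical link lengths for a 3-link planar arm; joint 3 is passive.
L1, L2, L3 = 0.3, 0.3, 0.2  # [m]

def forward(q):
    """Output-point position of the planar arm for joint angles q."""
    a1, a2, a3 = np.cumsum(q)  # absolute link angles
    x = L1 * np.cos(a1) + L2 * np.cos(a2) + L3 * np.cos(a3)
    y = L1 * np.sin(a1) + L2 * np.sin(a2) + L3 * np.sin(a3)
    return np.array([x, y])

def correct_actuated_joints(q, target_y=0.0, iters=20, gain=0.5):
    """Adjust the actuated joints (q[0], q[1]) so the output point returns
    to the target path y = target_y, leaving the passive joint q[2] exactly
    as the human moved it."""
    q = q.copy()
    eps = 1e-6
    for _ in range(iters):
        err = forward(q)[1] - target_y  # path error (normal direction)
        # numerical Jacobian of y with respect to the two actuated joints
        J = np.zeros(2)
        for i in range(2):
            dq = q.copy()
            dq[i] += eps
            J[i] = (forward(dq)[1] - forward(q)[1]) / eps
        # damped least-squares correction step on the actuated joints only
        q[:2] -= gain * err * J / (J @ J + 1e-9)
    return q

# Demo: the human has rotated the passive joint (q0[2]); the actuated
# joints are corrected so the output point lies back on y = 0.
q0 = np.array([0.3, -0.2, 0.4])
q_corr = correct_actuated_joints(q0)
```

Because only the path-normal error is corrected, the human retains control of the velocity along the path through the passive joint, which matches the division of roles described in the abstract.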
In this study, a device that feeds back force and temperature sensations to myoelectric prosthetic hand users was developed. When a user grasps an object with the myoelectric prosthetic hand, the stiffness and temperature of the object are measured by sensors attached to the hand, and force and temperature sensations are fed back to the user’s upper arm. Experimental evaluation of the feedback device confirmed that temperature changes influence the perception of force sensations. Therefore, to feed back the same force sensation to the user even when a temperature change occurs, compensation functions were derived using the maximum likelihood method. Verification experiments based on paired comparison demonstrated the effectiveness of the derived compensation functions.
Globally, lower back pain is a serious problem. For workers, it not only causes health problems but also has social and economic consequences. Lower back pain can be attributed to the burden on the waist when handling heavy objects. The Ministry of Health, Labour and Welfare in Japan recommends squat lifting, a method of lifting objects that places a smaller burden on the waist. However, squat lifting is not commonly used because it requires deep bending of the knees to lift an object, leading to a larger workload. Therefore, a leaf-spring-type power assist suit for the legs has been developed to assist squat lifting. However, in a preceding machine, enhancing the fixing performance meant that the leaf spring could impede the bending of the knee joints during gait. In the present study, we developed a power assist suit for the legs using a slide mechanism. A leaf spring was chosen to meet a target assist torque determined by a motion analysis of lifting motions. In addition, we built a prototype machine with the slide mechanism. EMG measurements of the thigh muscles during lifting actions using the prototype showed a decrease in muscle activity of up to 46%. It was also confirmed that a machine with the slide mechanism could realize a more natural gait than one without it.
The Community-centric System (CcS) research center was established in 2015 to address social problems such as elderly care, social rehabilitation, and information support in the event of a disaster. The research center combines various perspectives, including sensing technology, robotics, information and communication, health and welfare, and urban environments. This paper provides a brief overview of the history and activities of the research center.
In our daily conversations, we obtain considerable information from our interlocutor’s non-verbal behaviors, such as gaze and gestures. Several studies have shown that nonverbal messages are prominent factors in smoothing the process of human-robot interaction. Our previous studies have shown that not only a robot’s appearance but also its gestures, tone, and other nonverbal factors influence a person’s impression of it. This paper presents an analysis of the impressions made when human motions are implemented on a humanoid robot, with experiments conducted to evaluate the impressions made by the robot’s expressions. The results show the relation between robot expression patterns and human preferences. To further investigate biofeedback elicited by different styles of robot expression, a scenario-based experiment was conducted. The results revealed that people’s emotions can indeed be affected by robot behavior, and that the robot’s way of expressing itself most strongly influences whether it is perceived as friendly. The results show that it is potentially useful to incorporate our concept into a robot system to meet individual needs.
Recently, there has been an increase in the importance of community-centric systems as a new paradigm to enhance the quality of community (QOC). Social media plays an important role in creating, sharing, and exchanging information within a community. However, assistive technologies should be developed from human-centric and community-centric points of view to realize such information support. In this paper, we discuss the use of smart-device-interlocked robot partners for interactive information support similar to concierge services in hotels. The interactive information support system is composed of two main subsystems: robot partners and informationally structured space servers. A robot partner communicates and interacts with people through voice recognition and gesture recognition in addition to touch interfaces. The informationally structured space server receives measurement data on human motions and personal information containing human requests and preferences from the robot partner. The server then selects and recommends shops, restaurants, and sightseeing spots to guests and visitors through utterances and displays by the robot partner. First, we explain the concept of the informationally structured space to connect a person with sensory information and propose its overall architecture. Next, we explain how to provide information support using smart-device-interlocked robot partners based on the informationally structured space. In addition, we describe several social experiments on interactive information support at hotels. Finally, we discuss the effectiveness of the proposed system and the future direction of community-centric systems.
This paper proposes a method for the semi-automatic generation of a dataset for deep neural networks to perform end-to-end object detection and classification from images, which is expected to be applied to domestic service robots. In the proposed method, a background image of the floor or furniture is first captured. Subsequently, objects are captured from various viewpoints. Then, the background image and the object images are composited by the system (software) to generate images of the virtual scenes expected to be encountered by the robot. At this point, the annotation files, which will be used as teaching signals by the deep neural network, are generated automatically, as the region and category of the object composited with the background image are known. This reduces the human workload for dataset generation. Experimental results showed that the proposed method reduced the time taken to generate a data unit from 167 s, when performed manually, to 0.58 s, i.e., to approximately 1/287 of the manual time. The dataset generated using the proposed method was used to train a deep neural network, which was then applied to a domestic service robot for evaluation. The robot was entered into the World Robot Challenge, in which, out of ten trials, it succeeded in touching the target object eight times and grasping it four times.
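A minimal sketch of the compositing step described above: paste an alpha-masked object crop onto a background and emit the bounding box and category automatically, since the placement is chosen by the software. The function, the toy images, and the `[x, y, w, h]` annotation layout are illustrative assumptions; the paper's actual software and annotation format may differ:

```python
import numpy as np

def composite(background, obj_rgba, top_left, category):
    """Alpha-blend an RGBA object crop onto an RGB background and return
    the composited image plus an auto-generated annotation. Because the
    placement is chosen programmatically, the bounding box and class
    label are known without any manual labeling."""
    img = background.copy()
    h, w = obj_rgba.shape[:2]
    y, x = top_left
    rgb = obj_rgba[..., :3]
    alpha = obj_rgba[..., 3:4] / 255.0          # per-pixel opacity in [0, 1]
    roi = img[y:y + h, x:x + w]
    img[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * roi).astype(np.uint8)
    annotation = {"category": category, "bbox": [x, y, w, h]}
    return img, annotation

# Toy example: 100x100 gray background, 20x20 opaque red square "object".
bg = np.full((100, 100, 3), 128, np.uint8)
obj = np.zeros((20, 20, 4), np.uint8)
obj[..., 0] = 255    # red channel
obj[..., 3] = 255    # fully opaque
scene, ann = composite(bg, obj, (40, 30), "red_square")
```

Repeating this with varied object crops, viewpoints, and placements yields the training images and annotation files in one pass, which is where the reported manual-labeling time savings come from.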
In this research, we consider a mobile robot that can start to move by utilizing the rotation of its two arms. The robot consists of two rotating arms and a body. Additionally, it has a device that can fix the body to a platform constructed on a wall or floor. In our previous study, we investigated the performance of a robot that could move in a planar space without friction or gravity through several numerical simulations. In this study, we investigate the performance of the mobile robot in a gravity environment. While the body is fixed to a starting platform, the mobile robot can store kinetic energy by rotating its arms. When the body is released from the starting platform, the mobile robot hops to the next platform. We consider a scheme to control the hopping direction of the mobile robot and a scheme to reduce the collision impact against the next platform. We then verify the feasibility of the proposed schemes through numerical simulations.