In this paper, we present a learning-based method for estimating the state of cloth while an arm is put through the sleeve of a shirt. To capture the dynamic changes of the cloth during manipulation, we use optical flow extracted from image streams. We adopt a deep neural network for optical flow extraction and extend the network to also output a cloth state. To evaluate the accuracy of the state estimation, we conducted two experiments: (i) putting a cylindrical cloth through an L-shaped stand, and (ii) putting a long-sleeved shirt through an arm of a doll. The experimental results indicate that our method estimates the cloth state more accurately than conventional methods.
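The flow-based sensing idea above can be illustrated conceptually. The sketch below is not the authors' deep-network extractor; it is a minimal brute-force block-matching estimator in Python (the function name `block_flow` and all parameters are hypothetical) that recovers a coarse shift field between two frames, the kind of motion signal a learned flow network would provide in a single forward pass:

```python
import numpy as np

def block_flow(prev, curr, block=8, search=4):
    """Estimate per-block (dy, dx) motion between two grayscale frames
    by exhaustive block matching within a small search window."""
    H, W = prev.shape
    flow = np.zeros((H // block, W // block, 2))
    for bi in range(H // block):
        for bj in range(W // block):
            y, x = bi * block, bj * block
            ref = prev[y:y + block, x:x + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue  # candidate block falls outside the frame
                    err = np.sum((curr[yy:yy + block, xx:xx + block] - ref) ** 2)
                    if err < best_err:
                        best_err, best = err, (dy, dx)
            flow[bi, bj] = best
    return flow
```

For two frames related by a pure (2, 1)-pixel shift, the interior blocks recover exactly that displacement; a learned flow network replaces this exhaustive search and, as in the paper, can be extended with a head that maps the flow field to a cloth state.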
As the elderly population grows, assistive technology for walking is increasingly needed. Although some walking support devices have shown positive effects, they remain difficult for users to fit and adjust by themselves. This study aims to develop a new walking assist device that can be used for hemiplegia and whose assisting power can be adjusted without any actuators. As a first step, a motion transmission mechanism between the lower limbs has been proposed and implemented as a prototype. Its basic function, the transmission of motion from the unaffected side to the affected side without any control, was verified through experiments using the prototype on a simple biped leg model. In addition, we confirmed that varying the attachment positions of the transmission mechanism on the biped model allows the transmitted torque to be adjusted, and that attaching the proposed mechanism to the biped model has positive effects on the periodicity and stability of the gait motion. As future work, the motion transmission mechanism needs to be redesigned to suit the actual human leg and its motion, with attention to safety and to the load placed on the unaffected limb.
To address the labor shortage at construction sites, we developed an automatic construction material handling robot in previous work. However, for tasks such as cart latching, the target is not fixed at a specific location, so the performance of relative pose estimation may be poor. In this study, we propose a method for recognizing a cart in an arbitrary posture and a hybrid control method for cart latching. The presented approach has been evaluated at real construction sites.
In recent years, wearable interfaces have attracted research attention. As wearing masks has become common worldwide since the emergence of coronavirus disease 2019, we devised a system that measures facial information from a mask, focusing on facial expressions in this study. Specifically, the expansion and contraction of the mask were measured to classify facial expressions. We conducted classification experiments under various conditions and achieved high classification accuracy, especially for a non-woven fabric mask to which conductive thread was attached. We found that conductive threads are affordable and easy to sew into masks, and that they can be used to construct a wearable interface.
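As an illustration of the classification step only, and not the authors' actual pipeline, the following Python sketch assumes a hypothetical resistance time series from the conductive thread and labels windows by nearest centroid over two simple features (mean level and peak-to-peak stretch); the function names, feature choice, and expression labels are all assumptions:

```python
import numpy as np

def features(sig):
    # mean resistance level and peak-to-peak variation of one window
    return np.array([sig.mean(), np.ptp(sig)])

def train_centroids(windows, labels):
    # one feature centroid per expression label
    feats = np.array([features(w) for w in windows])
    labels = np.array(labels)
    return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels)}

def classify(window, centroids):
    # assign the window to the nearest centroid in feature space
    f = features(window)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))
```

For example, windows with large peak-to-peak stretch (the mask deforming during a smile) separate cleanly from near-constant neutral windows; a stronger classifier could be swapped in without changing this interface.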
Currently, as a measure against environmental destruction, an agricultural method called Synecoculture™ has received attention. However, because multiple types of plants grow densely under this method, conventional machines and robots cannot intervene, and work efficiency is low. To improve work efficiency, we developed a robot with a wheel mechanism and a linear mechanism. The wheel mechanism can move on uneven terrain, and the linear mechanism, with two orthogonal axes, can adjust the tool position during a task. In a field experiment, the robot moved across the field and succeeded in harvesting and weeding by operating the linear mechanism based on camera images.
This paper presents force feedback control of an encountered-type haptic interface using MR (magneto-rheological) fluid, based on biaxial forces measured by a surgical instrument with a single strain area. The haptic interface is developed for a surgical simulator and is therefore intended to display the forces of pressing or cutting biological tissue during a surgical operation. This paper focuses on retraction with a suction tube, a fundamental operation that provides the operator with a large area of visibility. The experiments show that the system can reproduce the reference retraction force regardless of the attitude of the instrument.
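The force-tracking idea can be sketched as a simple discrete PI loop. The plant model below, a first-order lag from command (e.g., coil current) to displayed force with time constant `tau`, and all gains are hypothetical stand-ins, not the paper's identified MR-fluid dynamics:

```python
import numpy as np

def pi_force_control(f_ref, steps=500, dt=0.001, kp=2.0, ki=50.0):
    """Track a reference retraction force with a PI controller acting on a
    hypothetical first-order force actuator (time constant tau)."""
    tau = 0.02        # assumed actuator lag [s]
    f, integ = 0.0, 0.0
    history = []
    for _ in range(steps):
        e = f_ref - f                 # force tracking error
        integ += e * dt               # integral of the error
        u = kp * e + ki * integ       # PI command to the MR-fluid brake
        f += dt / tau * (u - f)       # first-order lag toward the command
        history.append(f)
    return np.array(history)
```

With these gains the simulated force settles at the reference within roughly 0.25 s; in the real device the loop would act on the biaxial force measured at the strain area, which is what makes the displayed force independent of the instrument's attitude.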