Journal of Robotics and Mechatronics
Online ISSN : 1883-8049
Print ISSN : 0915-3942
ISSN-L : 0915-3942
Volume 29, Issue 2
Displaying 1-16 of 16 articles from this issue
Review on Current Status and Future Trends on Robot Vision Technology
  • Manabu Hashimoto, Yukiyasu Domae, Shun’ichi Kaneko
    Article type: Review
    2017 Volume 29 Issue 2 Pages 275-286
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    This paper reviews the current status and future trends in robot vision technology. Centering on the core technology of 3-dimensional (3D) object recognition, we describe 3D sensors used to acquire point cloud data and the representative data structures. From the viewpoint of practical robot vision, we review the performance requirements and research trends of important technologies in 3D local features and the reference frames for model-based object recognition developed to address these requirements. Regarding the latest examples of robot vision technology development, we introduce important technologies according to purpose, such as high accuracy or ease of use. Then, as an application example in a new area, we describe a study of general-object recognition based on the concept of affordance. In the area of practical factory applications, we present examples of system development in areas attracting recent attention, including the recognition of parts in cluttered piles and the classification of randomly stacked products. Finally, we offer our views on the future prospects of and trends in robot vision.

    Download PDF (5739K)
Special Issue on Innovative Technology for Nursing Care and Nosotrophy
  • Taketoshi Mori, Yo Kobayashi
    Article type: Editorial
    2017 Volume 29 Issue 2 Pages 287
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    As life expectancy has become longer, the number of people who have handicaps or diseases, as well as the families, caregivers, and medical staff who support them, has also been increasing. While modern medical science has advanced rapidly by adopting innovations in engineering technology, especially in the fields of mechanical and electrical engineering, support for nursing, care, and assistance through engineering technology has only just begun.

    This special issue on “Nursing Engineering,” a combination of nursing and engineering, covers a wide range of themes: measuring equipment characterized by non-invasive, unconstrained, and real-time operation for helping patients and healthcare professionals, and the development of related technology; equipment technology to support the recuperation, rehabilitation, or convalescent life of patients; and the active introduction of information technology and user-interface techniques into nursing studies, together with case studies. “Nursing Engineering” is expected to play an increasingly important role in supporting the medical treatment and everyday life of patients, alongside highly professional medical staff, by making practical use of robotics and mechatronics technology and by incorporating rehabilitation science, welfare engineering, and assistive technology.

    Download PDF (120K)
  • Hiroyuki Maeda, Miho Shogenji, Tetsuyou Watanabe
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 288-298
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    In this study, we present a method for evaluating the perturbation of walking balance caused by additional task loads. For the evaluation, we compared pose differences between normal walking and walking under multi-task conditions. We employed inertial measurement unit (IMU) sensors attached to the hips and shoulders of subjects, from which we determined the orientation of the shoulders with respect to the hips. We focused on the variation width of this orientation and used its rate of increase from normal walking to walking under multi-task conditions. By analyzing the correlation between this rate and an evaluation factor determined with a conventional fall-risk assessment method related to cognitive and motor functions, we defined a walking balance evaluation criterion. The results indicate that calculation and auditory tasks affect walking balance.
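    As a rough illustration of the evaluation criterion described above, the following Python sketch computes the variation width of the shoulder-versus-hip orientation signal and its rate of increase from normal to multi-task walking. The function names, the use of a simple peak-to-peak width, and the synthetic data are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def variation_width(orientation_deg):
    """Peak-to-peak width of the shoulder-vs-hip orientation signal (deg).

    `orientation_deg` is assumed to be a 1-D array of orientation angles
    estimated from the IMU sensors during one walking trial.
    """
    return float(np.max(orientation_deg) - np.min(orientation_deg))

def increase_rate(normal_walk, multitask_walk):
    """Relative increase of the variation width under a multi-task condition."""
    w_normal = variation_width(normal_walk)
    w_task = variation_width(multitask_walk)
    return (w_task - w_normal) / w_normal

# Purely illustrative synthetic trials: the multi-task trial sways more.
rng = np.random.default_rng(0)
normal = 5.0 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 0.5, 2000)
task = 7.0 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 0.5, 2000)
print(f"increase rate: {increase_rate(normal, task):.2f}")
```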

    Download PDF (1006K)
  • Hideki Toda, Takeshi Matsumoto, Hiroya Takeuchi
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 299-305
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    An ankle joint pushing system that absorbs the horizontal movement of the talus is proposed, and the effects of the horizontal-movement-absorbing mechanism in the system are examined. Bending of the ankle joint is an important medical treatment that physical therapists (PTs) use to help their patients recover the ability to walk and to prevent contracture. However, since the ankle treatment requires a large force (nearly equal to the subject’s weight) as well as precise angle and power control, manual treatment by PTs has not been replaced by mechanical treatment systems. To realize the mechanization of the ankle joint pushing treatment, the system developed here has two features. (1) The proposed device fixes the ankle joint position correctly by making contact with the Achilles tendon, the back of the calf, and the hip. (2) The bucket rotation center shifts its position horizontally with the movement of the talus when the foot is pushed back. This mechanism can push the toes or any part of the foot strongly and stably without causing the patient pain. It can also stretch the biceps femoris and gastrocnemius muscles simultaneously, which could previously be done only by a physical therapist. In 73 N stretching treatment tests on four subjects, horizontal (ventral) movements of the ankle joint of 22.2 mm (S.D. 6.4 mm) were observed due to the movement of the talus, and the proposed mechanism successfully suppressed this movement to 1.8 mm (S.D. 1.3 mm) under the same 73 N stretching treatment.

    Download PDF (1631K)
  • Shuhei Noyori, Gojiro Nakagami, Hiroshi Noguchi, Koichi Yabunaka, Take ...
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 306-316
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    The insertion of a peripheral intravenous catheter is often required during a patient’s hospital stay. However, approximately 30% of intravenous catheters fail. Using ultrasonography to observe the depth and diameter of a vein and thereby select the catheter site is a promising procedure for preventing catheter failure. Nevertheless, this procedure is difficult to perform, as it involves complicated maneuvering in which a nurse simultaneously manipulates an ultrasonographic probe, assesses veins, and inserts a catheter. In this study, a new image display system consisting of a camera, a head-mounted display, and software is proposed. The newly developed image-processing program detects the fingertip of the user, and the system displays a reconstructed ultrasonographic image at any cross-sectional plane indicated by the user’s finger. Additionally, veins are superimposed on the ultrasonographic image, and their depth and diameter are also displayed. The image-processing algorithm robustly detects the markers and the fingertip in the images captured by the head-mounted camera, which enables the new ultrasonographic image display system. In a volunteer study, the system was used to increase the success rate of vein detection by nurses.

    Download PDF (2143K)
  • Jörg Güttler, Muhammad Karim, Christos Georgoulas, Thomas Bock
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 317-326
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    In this paper, the authors describe a cuffless blood pressure meter prototype targeting long-term automated blood pressure screening. The proposed cuffless approach reduces mental stress, which increases the reliability of the measurement. By using a wireless communication medium to transmit data, care staff can store and access readings more easily. The proposed system was developed using low-cost off-the-shelf parts such as Arduino/Wattuino Uno boards and single-board computers, which enables the unobtrusive integration of such a compact system into furniture, for example. Its intuitive measurement enables care staff to devote more attention to the patient and less to the blood pressure measurement. The proposed system is described in terms of its hardware and software functionality. Furthermore, experimental results confirm the proposed system’s reliability.

    Download PDF (1384K)
  • Yutaka Matsuura, Hieyong Jeong, Kenji Yamada, Kenji Watabe, Kayo Yoshi ...
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 327-337
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    Background and purpose: Sleep-disordered breathing, such as sleep apnea syndrome (SAS), is considered to increase the risk of cardiovascular disease and traffic accidents, and thus early detection of SAS is important. It is also important for medical workers at clinical sites to have a simple method for quantitatively evaluating the respiratory condition of hospitalized patients while they sleep. A noncontact-type system was proposed to monitor the respiratory condition of sleeping patients while minimizing patient stress, so that medical workers could use the system for SAS screening and perform a preliminary check prior to definitive diagnosis.

    Method: The system consisted of a Microsoft Kinect™ for Windows® (Kinect), a tripod, and a PC. The Kinect depth sensor was used to measure thorax motion. The obtained periodic waveforms were divided into 1-min intervals, and the number of peaks was counted to obtain the respiratory rate. Additionally, a frequency analysis was performed to calculate the respiratory frequency as the frequency at which the maximum amplitude was observed. In Experiment 1, a METI-man® Patient Simulator (CAE Healthcare) was used to study the respiratory rate and frequency calculated from the Kinect data while gradually changing the designated respiratory rate. In Experiment 2, the respiratory condition of four sleeping subjects was monitored to calculate their respiratory rates and frequencies. Furthermore, a video camera was used to confirm the periodic waveforms and spectral features associated with body movements during sleep.
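    A minimal Python sketch of the two calculations described in the Method section (peak counting within a 1-min window and the maximum-amplitude frequency of the spectrum) is given below. The sampling rate, the use of scipy.signal.find_peaks, and the minimum peak spacing are illustrative assumptions; the authors' actual processing is not detailed in the abstract.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 30.0  # assumed depth-sensor frame rate [Hz]

def respiratory_rate_per_min(thorax_depth, fs=FS):
    """Count breathing peaks in a 1-min window of thorax displacement."""
    peaks, _ = find_peaks(thorax_depth, distance=fs * 1.5)  # assume >= 1.5 s between breaths
    return len(peaks)

def respiratory_frequency_hz(thorax_depth, fs=FS):
    """Frequency of the maximum-amplitude spectral component (excluding DC)."""
    spectrum = np.abs(np.fft.rfft(thorax_depth - np.mean(thorax_depth)))
    freqs = np.fft.rfftfreq(len(thorax_depth), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]

# Illustrative 1-min signal breathing at 0.25 Hz (15 breaths/min):
t = np.arange(0, 60, 1.0 / FS)
signal = np.sin(2 * np.pi * 0.25 * t)
print(respiratory_rate_per_min(signal), respiratory_frequency_hz(signal))
```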

    Results: In Experiment 1, both the respiratory rate and frequency corresponded to the designated respiratory rate in each time zone. In Experiment 2, the respiratory rates of examinees 1, 2, 3, and 4 were 12.79±2.44 breaths/min (average ± standard deviation), 16.46±4.33 breaths/min, 28.24±2.79 breaths/min, and 13.05±2.64 breaths/min, respectively. The respiratory frequencies of examinees 1, 2, 3, and 4 were 0.20±0.04 Hz, 0.26±0.06 Hz, 0.45±0.12 Hz, and 0.22±0.06 Hz, respectively. The periodic waveforms and amplitude spectra were affected by body movements, although regular waveform data were obtained again after the body movement had ended.

    Discussion: The results indicated that body movement and posture temporarily affected the system’s monitoring. However, the findings also revealed that it was possible to calculate the respiratory rate and frequency, and thus the system was considered useful for non-contact respiratory monitoring and SAS screening of patients at clinical sites.

    Download PDF (2636K)
  • Masaru Kawakami, Shogo Toba, Kohei Fukuda, Shinya Hori, Yuki Abe, Koic ...
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 338-345
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    Fall accident prevention is one of the most important issues in elderly care settings. To prevent an accident, it is necessary to notify caregivers when an elderly person is getting out of bed. We have previously developed a posture discrimination system based on body motions. Herein, we propose a discrimination method that uses machine learning to improve the performance of the system. The purpose of this study is to evaluate the proposed method. Elderly people in a nursing home were chosen as subjects. We analyzed the subjects’ body motion data during bed rest and bed exit using the proposed method. The results suggest that the proposed method is effective.

    Download PDF (1186K)
  • Yutaka Murakami, Yuko Ohno, Miki Nishimura, Michiko Kido, Kenji Yamada
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 346-352
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    Peripheral intravenous (IV) line placement is one of the most invasive and painful procedures performed by nurses. Although it is a common nursing procedure, sufficient and effective skill training is necessary before nurses, especially new nurses, work with patients. Vascular access imaging devices (VAIDs) have been developed and put into use in hospitals. Many studies have evaluated the effectiveness of the device in clinical settings such as neonatal care, pediatric care, and emergency care, but its effectiveness in training recently graduated nurses has rarely been reported, especially in Japan. In this paper, we report on a quasi-experimental study that evaluated the effectiveness of the VAID for training recent nursing school graduates to successfully perform IV line placement. Eleven newly registered nurses participated in this study. Their preparations were video recorded for analysis. Student’s t-tests were used to compare the time and success rates of IV placement with and without VAID assistance. Furthermore, the subjects reported their impressions and self-evaluations related to VAID use in a questionnaire, and their responses were analyzed. The results showed no significant change in either the time needed or the success rate of peripheral IV line placement when the VAID was used; however, the nurses indicated that the VAID helped them decide where the IV should be inserted. These results suggest that the use of the VAID could be clinically meaningful as an IV training tool and that it could reduce the time needed to select venipuncture sites.
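    A minimal sketch of the statistical comparison described above (Student's t-test on placement times with and without VAID assistance) might look as follows. The data values are placeholders, not the study's measurements, and the paired form of the test is an assumption based on the within-subject design.

```python
from scipy import stats

# Placeholder data (seconds to complete IV placement) for 11 nurses; illustrative only.
time_without_vaid = [95, 110, 88, 120, 102, 97, 115, 90, 108, 99, 105]
time_with_vaid = [92, 108, 85, 118, 100, 95, 112, 93, 104, 98, 103]

# Paired comparison, assuming each nurse performed the procedure under both conditions.
t_stat, p_value = stats.ttest_rel(time_with_vaid, time_without_vaid)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```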

    Download PDF (548K)
  • Yoshimi Ui, Yutaka Akiba, Shohei Sugano, Ryosuke Imai, Ken Tomiyama
    Article type: Development Report
    2017 Volume 29 Issue 2 Pages 353-363
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    In this study, we propose an excretion detection system, Lifi, which does not require sensors inside diapers, and we verify its capabilities. It consists of a sheet with strategically placed air intakes, a set of gas sensors, and a processing unit with a newly developed excretion detection algorithm. The gas sensors detect odorous chemicals in the excrement, such as hydrogen sulfide and urea. The time-series data from the gas sensors were used to detect not only excretion but also the presence or absence of the cared-for person on the bed. We examined two algorithms, one based on a simple threshold and another based on clustering of the sensor data with the k-means method. The results from both algorithms were satisfactory and similar once the algorithms were customized for each cared-for person. However, we adopted the clustering algorithm because it offers a higher level of flexibility that can be explored and exploited. Lifi was conceived from the strong and serious need of caregivers to detect the excretion of bed-ridden cared-for persons without opening their diapers. We believe that Lifi, together with the clustering algorithm, can help caregivers in this regard.
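    A minimal sketch of the clustering-based detection idea is shown below, assuming windowed gas-sensor features (mean level and slope) and scikit-learn's KMeans. The feature choice, window length, and number of states are illustrative assumptions rather than the authors' algorithm, and cluster-to-state assignment would still need per-person calibration, as the abstract notes.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_states(gas_readings, n_states=3, window=30):
    """Cluster windowed gas-sensor features into candidate states.

    Each window is summarized by its mean level and slope; k-means then groups
    windows into states such as "absent", "present / no excretion", and "excretion".
    """
    features = []
    for start in range(0, len(gas_readings) - window, window):
        seg = gas_readings[start:start + window]
        features.append([np.mean(seg), seg[-1] - seg[0]])
    return KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(features)

# Illustrative synthetic trace: constant baseline, then a rising odor level.
rng = np.random.default_rng(0)
trace = rng.normal(0.2, 0.02, 1800)
trace[900:] += np.linspace(0.0, 1.0, 900)
print(detect_states(trace)[-5:])
```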

    Download PDF (877K)
Regular Papers
  • Zheng Chai, Takafumi Matsumaru
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 365-380
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    This paper proposes ORB-SHOT SLAM, or OS-SLAM, a novel method of 3D loop closing for trajectory correction in RGB-D visual SLAM. We obtain point clouds from RGB-D sensors such as the Kinect or Xtion, and we use 3D SHOT descriptors to describe the ORB corners. We then train an offline 3D vocabulary containing more than 600,000 words from two million 3D descriptors extracted from a large number of images in a public dataset provided by TUM. New images are converted to bag-of-visual-words (BoVW) vectors and pushed into an incremental database. For each new image, we query the database to detect the corresponding 3D loop candidates and compute similarity scores between the new image and each candidate. After detecting 2D loop closures with the ORB-SLAM2 system, we accept only those loop closures that are also included among the 3D loop candidates, and we assign them weights according to the previously stored scores. In the final graph-based optimization, we create edges with different weights for the loop closures and correct the trajectory by solving a nonlinear least-squares optimization problem. We compare our results with those of several state-of-the-art systems, such as ORB-SLAM2 and RGB-D SLAM, on the TUM public RGB-D dataset, and we find that accurate loop closures and suitable weights reduce the trajectory estimation error more effectively than the other systems. The performance of ORB-SHOT SLAM is also demonstrated in a 3D reconstruction application.
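    The acceptance and weighting step described above (keeping only 2D loop closures that also appear among the 3D loop candidates and reusing the stored similarity scores as edge weights) can be sketched schematically as follows. The data structures and names are assumptions for illustration, not the actual ORB-SLAM2 or OS-SLAM code.

```python
def accept_loop_closures(loops_2d, candidates_3d_scores):
    """Keep only 2D loop closures that are also 3D loop candidates.

    loops_2d             : iterable of keyframe ids detected by the 2D (ORB) stage
    candidates_3d_scores : dict mapping keyframe id -> BoVW similarity score
                           from the 3D (SHOT) vocabulary query

    Returns (keyframe_id, weight) pairs; the stored similarity score is reused
    as the edge weight in the subsequent pose-graph optimization.
    """
    accepted = []
    for kf_id in loops_2d:
        if kf_id in candidates_3d_scores:
            accepted.append((kf_id, candidates_3d_scores[kf_id]))
    return accepted

# Illustrative call: only keyframes 17 and 42 survive the intersection.
print(accept_loop_closures([4, 17, 42], {17: 0.63, 42: 0.71, 80: 0.55}))
```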

    Download PDF (2811K)
  • Zdeněk Materna, Michal Španěl, Marcus Mast, Vítězslav Beran, Florian W ...
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 381-394
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    Despite the remarkable progress of service robotics in recent years, a fully autonomous robot able to solve everyday household tasks in a safe and reliable manner still seems unachievable. Under certain circumstances, a robot’s abilities might be supported by a remote operator. To allow such support, we present a user interface for a semi-autonomous assistive robot that allows a non-expert user to quickly assess the situation at a remote site and carry out subtasks that cannot be completed automatically. The user interface is based on a mixed-reality 3D environment and fused sensor data, which provide a high level of situational and spatial awareness for teleoperation as well as for telemanipulation. Robot control is based on low-cost commodity hardware, optionally including a 3D mouse and a stereoscopic display. The user interface was developed in a human-centered design process and continuously improved based on the results of five evaluations with a total of 81 novice users.

    Download PDF (3192K)
  • Kyo Kutsuzawa, Sho Sakaino, Toshiaki Tsuji
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 395-405
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    Robotic tool use is one of several approaches to realizing the versatility of robots and is thus the focus of many studies. However, how to select and design controllers for tool use remains an open question. This paper addresses the task of drawing a circle with a compass as an example of tool use. The task requires dealing with complex contact at multiple points and drawing out the compass’s function of producing an accurate circle. This paper presents a design method for a compass controller and its implementation. The design method entails decomposing the usage of the compass into semantic units, defining a coordinate system, and constructing the controller by mapping the semantic units to its axes. The implemented controller shows that the ability of the compass to draw an accurate circle is brought out through the mechanical constraints of the compass. We validated the implemented controller by drawing a circle and comparing the result with a circle drawn using a pencil.

    Download PDF (936K)
  • Hiroshi Takahashi
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 406-418
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    This paper reports on a study of an intelligent cooperative control system involving human operators. The remote operation of a robotic arm by a human operator is considered as a simplified resilient system. In the experiments, subjects operated a robotic arm to carry out a simple task while observing it through a monitor. The monitor display suddenly disappeared, and the subjects continued the task using only auditory information. By analyzing the relationship between task performance and the types of auditory information with a mathematical-statistical method, it was found that not only auditory information directly related to the position of the robotic arm but also auditory information that helped the operator imagine its position was effective for task completion.

    Download PDF (1480K)
  • Jorge David Figueroa Heredia, Jose Ildefonso U. Rubrico, Shouhei Shira ...
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 419-433
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    In this study, we present a novel framework for teaching manipulation tasks performed by a single human to a set of multiple small robots in a short period. First, we focus on classifying the manipulation style used during a human-performed task. An allocator process is proposed to determine the type and number of robots to be taught based on the capabilities of the available robots. Then, according to the detected task requirements, robot behaviors are generated to create robot programs by splitting the human demonstration data. Small robots were used to evaluate our approach on four defined tasks taught by a single human. The experiments demonstrated the efficiency of the method in classifying manipulation styles and judging whether the division of a task is necessary. Moreover, robot programs were generated for manipulating the selected objects either individually or cooperatively.

    Download PDF (4371K)
  • Tatsuya Fujii, Norihiro Koizumi, Atsushi Kayasuga, Dongjun Lee, Hiroyu ...
    Article type: Paper
    2017 Volume 29 Issue 2 Pages 434-446
    Published: April 20, 2017
    Released on J-STAGE: November 20, 2018
    JOURNAL OPEN ACCESS

    High-intensity focused ultrasound (HIFU) is potentially useful for treating stones and/or tumors. In HIFU therapy, it is difficult to keep the HIFU focused on the focal lesion due to respiratory organ motion, which increases the risk of damaging the healthy tissues surrounding the target lesion. This study therefore proposes a method that copes with this problem by tracking and following the respiratory organ motion via visual feedback and a prediction model of the respiratory organ motion, to realize highly accurate servoing for focal lesions. The prediction model is continuously updated based on the latest organ motion data. The results indicate that the respiratory kidney motion of two healthy subjects was successfully tracked and followed with an accuracy of 0.88 mm by the proposed method and the constructed system.
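    A minimal sketch of a continuously refitted respiratory-motion predictor is given below, assuming a sinusoid-plus-offset model refitted on the most recent window by least squares. The model form, sampling rate, and prediction horizon are illustrative assumptions, not the authors' prediction model.

```python
import numpy as np

def predict_next(positions, fs=50.0, horizon_s=0.1):
    """Predict the organ position `horizon_s` seconds ahead of the last sample.

    Fits offset + sinusoid at the dominant respiratory frequency to the most
    recent window of samples; calling it on every new window keeps the model
    updated with the latest organ motion data.
    """
    t = np.arange(len(positions)) / fs
    x = np.asarray(positions, dtype=float)
    # Dominant frequency from the amplitude spectrum (excluding DC).
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f0 = freqs[np.argmax(spec[1:]) + 1]
    # Linear least squares for offset and sin/cos amplitudes.
    A = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * f0 * t),
                         np.cos(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    t_next = t[-1] + horizon_s
    return (coef[0]
            + coef[1] * np.sin(2 * np.pi * f0 * t_next)
            + coef[2] * np.cos(2 * np.pi * f0 * t_next))

# Illustrative kidney-motion trace (mm) breathing at 0.25 Hz:
fs = 50.0
t = np.arange(0, 4, 1.0 / fs)
trace = 10 + 8 * np.sin(2 * np.pi * 0.25 * t)
print(predict_next(trace, fs=fs, horizon_s=0.1))
```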

    Download PDF (3994K)