-
Soichi KANO, Yoichi MASUDA, Masato ISHIKAWA
Session ID: 1A1-N02
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This study aims to generate leg motion with simple reflexes to avoid falling for a quadruped walking musculoskeletal robot. In this paper, we aim to visualize and understand the conditions that lead to falls through robot experiments. As a result of these experiments, we found that our robot falls over when it cannot kick the ground well. In the future, we will implement reflex rules in the robot to prevent it from reaching a situation that leads to a fall. The goal of our study is to develop a robot that can walk on poor ground conditions using only a simple control law.
-
Yoichi MASUDA, Takahiro GOTO, Keisuke NANIWA, Daisuke NAKANISHI, Daisu ...
Session ID: 1A1-N03
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper reports on the mechanical reproduction of the sensorimotor network in animals, which consists of muscles, receptors, and nerves. The network consists of pneumatic artificial muscles, receptor devices, and nerve devices, and the reflex path can be switched by changing the weights (regulator pressure in the channel) of the network. Experiments show that the sensorimotor network embedded in the robot leg, together with the body dynamics, can produce walking-like leg movements.
-
Daisuke NAKANISHI, Takahiro GOTO, Daisuke URA, Yasuhiro SUGIMOTO, Shur ...
Session ID: 1A1-N04
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Creating autonomous robots akin to animals is the ultimate goal in the field of biomimetics. However, the biggest challenge lies in embedding numerous controllers to achieve multiple feedback loops, as in the animal body. This study proposes a force-sensitive pneumatic valve, an easy-to-manufacture valve that serves as a simple control element for a pneumatic system. We fabricated the valve using commercially available urethane tubes and 3D-printed parts and demonstrated its efficacy. With the advancement of such valve manufacturing technology, constructing a distributed control system with 10^2 to 10^3 control elements has become effortless.
-
Michiru SOBUE, Soma KATO, Izumi MIZOGUCHI, Hiroyuki KAJIMOTO
Session ID: 1A1-N05
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In everyday tasks, we employ not just the finger pad but also the sides and hemispherical ends of the fingers in object manipulation. Hence, ensuring that tactile perception spans the entire fingertip is considered crucial for advancing teleoperation. To achieve this objective, understanding the spatial acuity distribution of the fingertip becomes paramount. Although it is commonly acknowledged that tactile acuity varies across the fingertip, from the end to the pad, the specifics of this resolution shift and the resolution on the finger's side remain unknown. Through meticulous measurements, we discovered that the spatial acuity at the index fingertip changes almost linearly, with a significant decline in tactile acuity on the side of the finger.
-
Yuma AKIBA, Shota NAKAYAMA, Keigo USHIYAMA, Izumi MIZOGUCHI, Hiroyuki ...
Session ID: 1A1-N06
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research introduces an approach for delivering tactile sensations to the forehead, specifically targeting the low-frequency band, using a compact linear resonant actuator. Exploiting the absence on the forehead of Pacinian corpuscles, which typically detect vibrations around 200 Hz, we employed amplitude modulation to compensate for the limited frequency range of linear resonant actuators. The amplitude modulation used a carrier wave of around 200 Hz, near the actuator’s resonant frequency. Through two experiments, we evaluated the effectiveness of this modulation in conveying low-frequency vibrations (Experiment 1) and assessed the perceptual quality of the vibrations experienced on the forehead (Experiment 2). Our findings indicate that participants could clearly feel the original low-frequency vibration on the forehead compared to other body locations.
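As an illustration of the amplitude-modulation scheme described in the abstract above, the following minimal sketch generates a drive signal in which a carrier near the actuator's resonant frequency (assumed here to be 200 Hz) is modulated by a low-frequency envelope; the sample rate, envelope frequency, and amplitudes are placeholder values, not parameters from the paper.

```python
import numpy as np

fs = 8000          # sample rate [Hz] (assumed)
f_carrier = 200.0  # carrier near the LRA resonant frequency [Hz]
f_env = 20.0       # low-frequency vibration to convey [Hz] (assumed)
t = np.arange(0, 1.0, 1.0 / fs)

# The envelope follows the target low-frequency vibration (kept non-negative),
# while the carrier shifts the signal energy into a band the actuator can reproduce.
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * f_env * t))
drive_signal = envelope * np.sin(2 * np.pi * f_carrier * t)
```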
-
Daiki ISHIDA, Kazuhiro SHIMONOMURA
Session ID: 1A1-N07
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A mechano-optical tactile sensor was fabricated using a camera and a strain-sensing polymer, a polymer material that changes its optical reflection characteristics depending on the magnitude of strain. The pressure distribution on the sensor surface was estimated with high spatial resolution by converting the hue of the output image into pressure. In addition, we evaluated the validity of the pressure distribution estimation method of the tactile image sensor. The estimated pressure distribution data were utilized for grasping control of objects of various shapes by a two-fingered robot hand.
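To illustrate the hue-to-pressure conversion mentioned above, here is a minimal sketch assuming a hypothetical linear calibration between hue and pressure; the real sensor would use a measured calibration curve, and the function name and numeric values below are illustrative only.

```python
import cv2
import numpy as np

def hue_to_pressure(bgr_image, hue_min=0.0, hue_max=120.0, p_max=50.0):
    """Map per-pixel hue to pressure [kPa] with a hypothetical linear calibration."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float32)          # OpenCV hue range: 0-179
    hue = np.clip(hue, hue_min, hue_max)
    # Linear mapping for illustration; a real sensor needs a measured calibration.
    return (hue - hue_min) / (hue_max - hue_min) * p_max

# pressure_map = hue_to_pressure(cv2.imread("tactile_frame.png"))  # placeholder file
```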
-
Satoshi TSUJI
Session ID: 1A1-N08
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Collaborative robots do not require safety fences and can work with humans in the same place. They are expected to save workspace and improve work efficiency. Proximity and tactile sensors on a robot are useful for ensuring the safety of robot operations. In this study, we propose a string-like proximity and contact sensor combining Time of Flight (ToF) and self-capacitance sensing. The string-like sensor is intended to wrap around the robot surface for easy mounting. The sensor consists of multiple ToF sensors and a self-capacitance sensor. The ToF sensors detect the distance to an object by measuring the reflection time of the irradiated light. The self-capacitance sensor detects an object at close range and also detects a human touch. By combining the ToF sensors with the self-capacitance sensor, the sensor can detect an object without contact with minimal blind spots and can also detect contact.
-
Syukin TEI, Junpei HAGIWARA, Riki IWADOU, Yuya OTAWARA, Junji SONE
Session ID: 1A1-N10
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
With the recent evolution of XR technology and the metaverse, flexible piezoelectric devices are required for virtual reality and communication robots. We are developing a pre-charge-type tactile device and have extended it to multi-point actuation. This device is very thin, and we developed a multi-point control circuit to enable flexible metaverse applications.
-
-Modeling of sensor sprayed on flexible substrate-
Keito ASHIYA, Takayuki TAKAHASHI
Session ID: 1A1-O02
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The authors are developing a spray-coated 2D tactile sensor, ScoTacS, for robots. The output signal of the 2D sensor contains undesirable vibration components due to the vibration of the substrate. In this paper, we try to remove these vibration components by using a sensor with a flexible substrate. The authors also investigate a highly accurate method for converting the distributed-parameter model into a lumped-parameter model.
-
-Comparison of Pressure Sensor and Strain Gauge-
Ryusei TSUJIMOTO, Koji SHIBUYA
Session ID: 1A1-O03
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
For legged robots, detecting the collision with the ground is crucial for stable locomotion. In this report, we propose a new flexible balloon-shaped sensor that can be attached to a legged robot's foot. The sensor consists of a balloon made from silicone rubber and a tube. A pressure sensor and a strain gauge are the candidate sensing elements. When the toe touches the ground, the balloon deforms and the air pressure inside the balloon increases, which can be measured by the pressure sensor. On the other hand, the deformation can be measured directly by the strain gauge. We conducted an experiment and compared the delay times of these sensing elements. As a result, we confirmed that the strain gauge is better in terms of delay time. However, we also consider that the pressure sensor can detect collisions from any direction.
-
Akito IMANISHI, Satoshi SAGA
Session ID: 1A1-O04
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Most virtual reality (VR) sports simulators reproduce visual and auditory sensations, and few of them reproduce haptic sensations. Therefore, we focused on the catching motion in baseball and tried to add haptic sensation to VR sports. To improve the realism of the sensation of catching a ball, we proposed a device that combines a brake mechanism to limit hand movement with electric or vibration stimulation. As a result of the experiment, we were able to reproduce an impulsive force comparable to that of actual ball catching. On the other hand, no significant differences were found in the similarity between each condition and actual ball catching.
-
Ohga NOMURA, Hidetoshi TAKAHASHI
Session ID: 1A1-O05
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper reports on a triaxial force plate using two inclined facets of a prism and the sampling moire method. The ground reaction force is one of the most important factors for evaluating biomechanics, and force plates are commonly used for its measurement. Conventional force plates utilize strain gauges, but there are challenges in developing small force plates. While force plates with non-contact sensors have been developed, these are not suitable for multiaxial measurement. On the other hand, the sampling moire (SM) method has garnered attention as a high-resolution in-plane measurement technique. The proposed force plate comprises a plate, a spring structure, a 2-D grating, a prism, and a camera. Displacements in three directions are measured by the SM method from images that contain the two inclined grating images before and after displacement. In this study, we designed and fabricated a 25 mm × 25 mm force plate element. The fabricated force plate enabled independent three-axis measurements.
-
Haruki KATO, Kazuya SASE, Hikaru NAGANO, Masashi KONYO
Session ID: 1A1-O06
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Humans use distributed tactile information, such as the pressure distribution on the finger pad, for grasping and manipulating objects and for identifying materials. This study aims to reproduce realistic tactile sensations by generating a strain energy density (SED) distribution between the epidermis and dermis using suction pressure control. The control method is based on a real-time finger deformation simulation, which requires simplifying both the deformation calculation and the finger model. To investigate the validity of the real-time simulation, we created a finger model using mesh generation software, compared the real-time simulation results with an analysis using finite element analysis software, and compared the finite element analysis results with experimental results. Differences were found between the experimental and finite element analysis results, and between the finite element analysis and real-time simulation results. Future work therefore includes modifying the finger model geometry and the finger model characteristics used in the finite element analysis and the real-time simulation, and identifying the causes of the differences between the finite element analysis and the real-time simulation SED calculations.
-
Shardul KULKARNI, Satoshi FUNABASHI, Alexander SCHMITZ, Tetsuya OGATA, ...
Session ID: 1A1-O07
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Performing dexterous tasks with a multi-fingered hand is still a challenge. Tactile sensors can provide touch states and object features during in-hand manipulation. However, even with such rich touch-state data, achieving dexterous multi-fingered tasks is further complicated by the underlying complexities. This paper presents a method for object property recognition: a Multi-Thread GCN (MT-GCN) architecture that processes tactile and edge features, i.e., multi-modal data, in a graph. The MT-GCN with tactile and edge features achieved a high recognition rate of 86.08% for 6 classes of object property combinations from 8 objects. We confirmed that the graph edge features acquired from the real robotic configuration and the MT-GCN architecture were effective for multi-fingered dexterous tasks.
-
Shota SHIMADA, Satoshi FUNABASHI, Alexander SCHMITZ, Tetsuya OGATA
Session ID: 1A1-O08
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Recognition of object characteristics and of the objects themselves is necessary. The objective of this research is the simultaneous recognition of multiple similar objects based on tactile and coordinate information obtained by grasping motions of a multi-fingered robot hand equipped with 3-axis tactile sensors. As the proposed method, we use PointNet++, which can capture information both locally and globally. The research objective was achieved by training the model of the proposed method. In addition, as comparative methods, we used an FNN and PointNet++. The results of multiple classification tasks with different test data showed that the proposed method was able to recognize multiple similar objects simultaneously with high accuracy and stability.
-
Michikuni EGUCHI, Takefumi HIRAKI
Session ID: 1A1-O10
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Pixel-level Visible Light Communication (PVLC) is a method that transmits invisible information by rapidly flickering the brightness of each pixel of projection images by a projector. It has various advantages in its application to mobile robot control cooperating with images. However, its application has been limited to flat-moving robots because the transmitted information is limited to planar information. To realize control of aerial robots cooperating with images, we propose a 3D localization method for aerial robots from the planarly transmitted position information through PVLC. This method combines the spatial relationship of the photosensors placed at the four corners of the aerial robot and the position information of the projection images captured by the photosensors to solve the Perspective-n-Point (PnP) problem. This allows estimating the robot’s 3D position with the projector’s position as the origin. This paper verifies the accuracy of positions obtained from the proposed method and computation time to confirm its effectiveness for aerial robot control.
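The abstract above describes solving a Perspective-n-Point problem from the four photosensors on the aerial robot and the projector-frame positions they decode via PVLC. A minimal sketch of that step using OpenCV's generic PnP solver is shown below; the sensor layout, decoded coordinates, and projector intrinsics are placeholder values, not those of the actual system.

```python
import cv2
import numpy as np

# 3-D positions of the four photosensors in the robot frame [m] (placeholder layout)
object_points = np.array([[-0.10, -0.10, 0.0],
                          [ 0.10, -0.10, 0.0],
                          [ 0.10,  0.10, 0.0],
                          [-0.10,  0.10, 0.0]], dtype=np.float64)

# 2-D positions decoded by each photosensor from the PVLC signal [projector pixels]
image_points = np.array([[410.0, 290.0],
                         [605.0, 288.0],
                         [608.0, 480.0],
                         [412.0, 483.0]], dtype=np.float64)

# Projector treated as a pinhole "camera" (placeholder intrinsics, no distortion)
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 400.0],
              [0.0,    0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
# tvec: position of the robot-frame origin expressed in the projector frame;
# rvec: its orientation as a rotation vector.
```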
-
Kei MIYAGAWA, Yoshitaka HARA, Sousuke NAKAMURA
Session ID: 1A1-P02
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a method to project semantic segmentation results from monocular camera images onto 3D lidar point clouds. Our method uses mmsegmentation trained models for semantic segmentation and image geometry for projection. In our experiments, we used NVIDIA Isaac Sim, and evaluated the semantic segmentation and the projection of the label information in multiple environments. We compared several models of semantic segmentation and confirmed that the label information was correctly projected onto the point clouds.
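A minimal sketch of the projection step described above: LiDAR points are transformed into the camera frame, projected with the camera intrinsics, and assigned the segmentation label of the pixel they fall on. The calibration matrices are assumed inputs; mmsegmentation itself is not invoked here.

```python
import numpy as np

def project_labels_to_points(points_lidar, label_image, K, T_cam_lidar):
    """Assign per-pixel semantic labels to 3-D LiDAR points (sketch, placeholder calibration).

    points_lidar : (N, 3) points in the LiDAR frame
    label_image  : (H, W) integer label map from semantic segmentation
    K            : (3, 3) camera intrinsic matrix
    T_cam_lidar  : (4, 4) homogeneous transform from the LiDAR to the camera frame
    """
    h, w = label_image.shape
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    labels = np.full(len(points_lidar), -1, dtype=np.int32)   # -1 = not visible
    in_front = pts_cam[:, 2] > 0.0
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)

    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.where(in_front)[0][valid]
    labels[idx] = label_image[uv[valid, 1], uv[valid, 0]]
    return labels
```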
-
Kazuki ADACHI, Yoshitaka HARA, Sousuke NAKAMURA
Session ID: 1A1-P03
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we investigate how scale correction is performed in monocular visual SLAM. Monocular visual SLAM has a problem of scale drift. We focus on ORB-SLAM3 that uses pose graph optimization to correct the scale. We investigate the behavior of scale correction before and after loop closing occurs. In our simulation experiments on several environments, we evaluated three types of data: camera trajectories, scales of essential graphs, and 3D reconstructed point clouds at locations where loop closing occurred. From these results, we showed the behavior of scale drift and scale correction.
-
Ryohei MATSUSHITA, Sota AKAMINE, Taku ITAMI, Jun YONEYAMA
Session ID: 1A1-P05
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This study proposes an algorithm to improve the accuracy of person recognition during sprinting motion by focusing on motion blur, which is a dynamic noise in point cloud data acquired by LiDAR. The effectiveness of the proposed method is demonstrated by comparing the recognition accuracy.
-
Ohara Natsuki, Miyahara Keizo
Session ID: 1A1-P06
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we detail a streamlined object detection framework intended for autonomous vehicles, primarily designed to initiate emergency braking. Our focus is predominantly on utilizing solely a digital camera as the sensory component under circumstances commonly affected by atmospheric conditions such as haze. The performance of optical sensors, digital cameras included, typically deteriorates due to such environmental factors. We evaluate various "dehazing" techniques pertinent to vehicular applications within this document and suggest an architecture that incorporates an effective dehazing algorithm to enhance safety measures. Experimental findings demonstrate the practicality of this architecture and its suitability for real-time operational needs.
-
Shingo IRIYAMA, Sarthak PATHAK, Kazunori UMEDA
Session ID: 1A1-P07
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we evaluated 3D measurement in real environments using a 3D measurement sensor consisting of a spherical camera and a ring laser proposed in a previous study. However, we could not obtain the results we expected. We consider that the cause of the failure in high-precision 3D measurement was that we could not calculate an accurate geometric relationship between the camera and the laser. Therefore, in this study, we propose a calibration method for this sensor. Using two spherical stereo cameras, we reconstructed the 3D point cloud of the laser plane and conducted plane estimation on the reconstructed point cloud. We propose a method to determine the geometric relationship between the camera and the laser based on this plane estimation.
-
Yusuke ONOZEKI, Shingo IRIYAMA, Ryota OGASA, Sarthak PATHAK, Kazunori ...
Session ID: 1A1-P08
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a system for localization using semantic information and the distance between the camera and each object obtained from a spherical stereo camera. Recently, demand for service robots has increased due to labor shortages and increasing industrial accidents in aging infrastructure. Our final goal is therefore to enable non-expert operators to control robots in response to workforce diversification. In this paper, we aim to localize a robot more accurately by using distance information. In our method, the robot is localized from a set of object center-of-gravity points on a 2D semantic map prepared in advance and the corresponding set of object center-of-gravity points obtained from the spherical stereo camera. In the experiments, when all matching between the point sets was successful, the robot's rough localization was accurate.
-
Hikaru CHIKUGO, Sarthak PATHAK, Kazunori UMEDA
Session ID: 1A1-P09
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we propose a method for road surface estimation and obstacle detection using a fisheye stereo camera. In obstacle detection using stereo cameras, obstacles are detected based on distance information obtained through stereo matching. However, due to the processing load, this cannot be done quickly. In addition, obstacle detection methods using deep learning cannot accommodate obstacles that are not included in the training data. Therefore, we first detect obstacles on the road surface using the relative depth obtained from monocular depth estimation. Then, by restricting distance measurement to the obstacle regions only, we aim to detect all obstacles quickly. Experiments demonstrate the ability to detect only obstacles and the high processing speed.
-
Saki NISHIMOTO, Tomoyuki YAMAGUCHI
Session ID: 1A1-Q01
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
It is important for pedestrians to pay attention to safety to prevent traffic accidents while walking. The purpose of this research is to reveal how pedestrian behavior can be changed by applying direct physical stimulation. We propose a shoe device that supports walking during normal walking, gradually reduces the support before the stop position, and then stops at the stop position. In this paper, we developed a shoe device using a mechanism with springs and cams. When the ankle joint is dorsiflexed, the device stores energy in an elastic member, and when the ankle plantarflexes, it releases that energy to assist push-off. The compressed state of the spring can be maintained by rotating a servo motor, which allows the support to be removed during walking. We conducted experiments to investigate the effect of wearing the device. The results showed that it was possible to change walking behavior intuitively.
-
Kota OGIKUBO, Jun INOUE
Session ID: 1A1-Q02
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Shoes are prescribed to diabetic patients in the medical field, but it is difficult to judge whether the shoes fit their feet. The aim of this study is to develop a system that can estimate shear forces from vibrations and evaluate shoe fit in real time. The vibrations occurring between the foot and the shoe during walking were reproduced and measured using a piezoelectric wire sensor, and shear forces were estimated from the measured vibrations using machine learning. In this study, a method of embedding the piezoelectric wire sensor in fabric was also investigated.
-
Junya KOBAYASHI, Nobuaki NAKAZAWA
Session ID: 1A1-Q03
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, Japan's declining birth rate and aging population have become serious problems. Under such circumstances, there is concern about the shortage of caregivers in elderly care facilities, especially for looking after residents. The introduction of camera-based monitoring systems in elderly care facilities could therefore help reduce the burden on caregivers. However, caregivers may feel averse to being captured by surveillance cameras, so there is an issue of privacy. Our proposed solution to this problem is a camera-based monitoring system that focuses only on the feet of pedestrians. In this study, the foot video is compressed into a single feature image, called a Flat Feet Image. Flat Feet Images and foot videos were used to extract gait features such as step length and gait velocity. Cross-validation with a support vector machine using the obtained gait features achieved an accuracy of 87%.
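The abstract above reports cross-validation with a support vector machine on gait features. A minimal sketch of that evaluation using scikit-learn follows; the feature matrix and identity labels are dummy placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder gait-feature matrix: one row per walking sequence,
# columns such as step length and gait velocity; y holds person IDs.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))          # [step_length, gait_velocity] (dummy data)
y = rng.integers(0, 3, size=60)       # dummy identities

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```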
-
Naojiro ITOH, Mikihiro HOSHI, Nobuaki NAKAZAWA
Session ID: 1A1-Q04
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Workers at construction sites during the summer work under the blazing sun, and the risk of heat stroke is extremely high. Therefore, it is essential to confirm that heat stroke countermeasures are in place and to ensure worker safety, which is one of the major challenges. In this study, we analyzed the characteristics of water-drinking motions to detect whether workers are hydrated or not. We used the coordinates of the joints and facial feature points obtained by skeletal detection to capture the characteristics of the drinking motion, and investigated the differences between the drinking motion and similar motions.
-
Keisuke SHOJI, Hikari KIKUCHI, Shinya KAJIKAWA
Session ID: 1A1-Q05
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We have developed a tongue-operated joystick device for tongue mobility training and rehabilitation. This device can sense operational distance and forces in both horizontal and vertical directions. Additionally, adjustable viscoelastic resistance allows the training intensity to be tailored to the individual. In this paper, we investigated the fundamental performance regarding the ability to sense forces and the traceability of fast tongue operation. The results show that the device can estimate the operation force accurately and trace movements up to 5 Hz. Moreover, the changes in tongue operation under viscoelastic resistance were also analyzed, confirming the device's effectiveness for tongue training.
-
Mao HOSODA, Wataru YOSHIDA, Shinya KAJIKAWA
Session ID: 1A1-Q06
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We have developed a forced head rotation device to assist visually impaired people with gait navigation. This device comprises a head control unit and an environmental sensing unit based on a depth camera. It aims to control the head direction to avoid obstacles and follow a desired path. In this paper, we estimate the direction of the trunk using the IMU sensor unit of the depth camera and attempt to control the head direction to follow the desired path. Through several experiments, we investigated the changes in gait trajectories due to the head rotation angle to determine a suitable rotation angle.
-
Keisuke GOTO, Kyo KUTSUZAWA, Dai OWAKI, Mitsuhiro HAYASHIBE
Session ID: 1A1-Q07
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The Hybrid Brain Computer Interface (hBCI) system, which combines two or more electroencephalography (EEG) modes, has flourished due to the development of instruments and methods. In particular, hBCI combining Steady-State Visually Evoked Potentials (SSVEP) and Motor Imagery (MI) has attracted attention for its detection stability and multi-class classification performance. However, it has been suggested that the accuracy of simultaneous detection decreases due to interference between EEG modes. In this study, we evaluate the effect of EEG interference in SSVEP × MI hBCI and investigate the optimal SSVEP frequency setting.
-
Asuka YOSHIDA, Katsuyoshi TSUJITA, Shintaro NAKATANI
Session ID: 1A1-Q08
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A typical symptom of stroke is motor dysfunction. This can be interpreted as a state in which there is a gap between "motor intention" and the "actual body". Therefore, to improve symptoms, it is important to consider the patient's "awareness of motor intention" and "awareness of the actual body". For these two types of awareness, the focus has so far been on movements made with the intention to initiate action: conscious movements. However, investigating unconscious movements as well, those made without the intention to initiate action, will provide important clues for a deeper understanding of these forms of awareness. In this study, we used a unique drawing robot to examine experimental conditions for unconscious movements and to investigate their presence. The results suggest the existence of unconscious movements.
-
Tatsuya KOMAGOME, Satoshi OKI, Jun INOUE
Session ID: 1A1-Q09
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
There are more than 80,000 upper-limb amputees in Japan, and many of them wish to use an electric prosthetic hand. Myoelectric prostheses are one type of electric prosthesis, but they are affected by skin impedance and cannot be used for long periods of time. In this study, we measured muscle vibration, which reflects both muscle tone and muscle bulging and is not easily affected by skin impedance, using piezoelectric wire sensors. The results of eight-motion classification using a neural network showed an average discrimination rate of 80% or higher for all subjects. When the classification was performed using only rock-paper-scissors motions, subjects A and B showed average discrimination rates of more than 95%. These results suggest the possibility of motion classification including complex finger actions.
-
- Comparison with the case using model predictive control -
Suzuka SEKI, Jun ISHIKAWA
Session ID: 1A1-Q10
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This article discusses the results of evaluating the performance of a recurrent neural network (RNN)-based human behavior model proposed for estimating human behavior for driver assistance. Parameters of the RNN-based model were identified so that the response of the closed-loop system reproduces the actual output acquired through a positioning experiment with human steering-wheel operation. The performance is evaluated in both the time and frequency domains, compared to a model based on model predictive control (MPC). As a result, it was confirmed that the RNN-based model reproduced human behavior with almost the same accuracy as the MPC-based model.
-
Ryuto YAMAZAKI, Shinichi HIRAI
Session ID: 1A1-R01
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Originally, robots were supposed to be isolated from humans while working. These days, however, the number of robots collaborating with humans has been increasing. As a result, robots are expected to work under various conditions, and it is necessary for them to perform reliable sensing in any environment.
In this research, we propose a sensor that can estimate the distance and moving direction of a nearby object in any environment by measuring magnetic permeability. Additionally, the sensor can detect a detectable object that is occluded from the sensor by another detectable object. We experimentally verified that the performance of our sensor is practical.
-
Takashi TSUCHIMOCHI, Noriaki KANAYAMA, Naofumi OTSURU, Masahito MIKI, ...
Session ID: 1A1-R02
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Biological information such as heart rate, respiratory rate, and blood pressure is often used in clinical settings to assess human health and physiological states. The sensory representation of these internal physiological states in human beings is referred to as interoception, which is closely linked to self-awareness and emotions. Hence, a better understanding of interoception is crucial in medical practice and has garnered significant attention. However, it is difficult and unintuitive for laypersons to accurately read and understand interoception, as well as physiological states, from the numerical biological information obtained by various sensors. In the present study, we conducted experiments using a virtual reality system. Specifically, we attempted to visualize and manipulate interoception by presenting heartbeats in a form in which the internal representation of an embodied avatar is visualized and peeked into.
-
Abhijeet RAVANKAR, Ankit RAVANKAR, Arpit RAWANKAR
Session ID: 1A1-R03
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Robot navigation is a critical component of mobile robots. In non-autonomous modes, robots are typically controlled using direct commands through a keyboard, joystick, or mouse. However, in certain cases, such as robotic wheelchairs, a disabled patient might not be able to use these methods. This paper presents an indirect method of robot control used for navigation. Our method analyzes EEG signals of the brain through a deep learning network, which is then used to control a robot. The system can detect four states of the person with good precision, and these states can be used to control a mobile robot. Results with actual devices show that the method can accurately predict the user's intentions. The robot control is demonstrated through simulation software.
-
Abhijeet RAVANKAR, Ankit RAVANKAR, Arpit RAWANKAR
Session ID: 1A1-R04
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Autonomous vehicles and self-driving cars are under development, and several test phases have been carried out successfully. Robust road detection in different environments is an essential component of self-driving vehicles. Compared to traditional image processing techniques, recent developments in deep learning have enabled robust road detection. This paper proposes road detection using a semantic segmentation method. The experiment was conducted with an actual sensor, and the results were analyzed using different metrics.
-
Masatomo Arai, Takeshi Hayakawa
Session ID: 1A1-R06
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Rotation of cells is required for three-dimensional observation or orientation control of cells in various fields such as cell biology or medicine. We have proposed a method of microobject rotation based on vibration-induced flow. Vibration-induced flow is a localized flow that is generated around a microstructure when the vibration is applied to the microstructure. In this paper, we evaluate the rising characteristics of the rotation of the particle and analyze the dynamics of the horizontal rotation manipulation by comparing the experimental values of the angular change with the theoretical values acquired from the equation of motion of the rotation.
-
Akito WATANABE, Kyoka NAKANO, Yoshiyuki YOKOYAMA, Takeshi HAYAKAWA
Session ID: 1A1-R07
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, microfluidic devices have attracted attention as a tool for single-cell analysis. Among the components of microfluidic devices, valves are important for fluid control within the devices. In our laboratory, we use thermoresponsive hydrogel microvalves that can be actuated by light irradiation. By using light irradiation, we can drive them independently and integrate a large number of microvalves without complex wiring. These integrated microvalves can be used to apply stimuli, such as drugs or cytokines, to single cells. However, there is a possibility of reagents permeating through the gel valves. Therefore, in this study, we evaluate the permeability of several molecules through the gel valves for the design of gel valves suitable for single-cell analysis.
-
Aoi HAYASHIBARA, Toshio TAKAYAMA
Session ID: 1A1-R08
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A sidewall-driven micromixer is a microfluidic device that can agitate the fluid in a main chamber by deforming the side walls of the main chamber with a driving chamber surrounding it. This has the advantage of simple construction, as there is no need to incorporate actuators directly in the flow path. It has been observed that when structures are placed in this chamber, vortices are generated at their edges. In this study, we developed a micromixer capable of gathering particles in one place by using this feature. Based on previous research on microfluidic devices capable of producing spheroids, teardrop-shaped structures were placed in the chamber in a circular pattern, and it was confirmed that particles with a diameter of 10 μm gathered at the center of the chamber.
-
Hayato MAKI, Toshio TAKAYAMA
Session ID: 1A1-R09
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A sidewall-driven micromixer is a mixer that performs mixing by applying pressure to the sidewalls around a chamber on a flow path within a microchip. This mixer is easy to fabricate and can trap cells, so it can contribute to improving the efficiency of chemical mixing and cell culture. Until now, this mixer has generated a concentration gradient by driving multiple chambers with a single actuator. In this study, we aim to mix different fluids using mixers with one actuator attached to each chamber. To prevent unintentional mixing, the neck width of the chamber was made narrower than before. We conducted an experiment in which colored water was mixed in each chamber and confirmed that the colored water could be mixed as intended.
-
Yuta Tanaka, Toshio TAKAYAMA
Session ID: 1A1-R10
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This study proposes a method for manipulating a particle in a microchannel using a sidewall-driven peristaltic micropump. The sidewall-driven peristaltic micropump is actuated by air pressure and can continuously push fluid in a microchannel by deforming the sidewall of the channel to generate peristaltic motion. The velocity of the flow produced by the micropump can be varied by varying the supply air pressure. By changing the direction of the peristaltic motion and the driving pressure, the micropump can control the flow rate with high precision. We constructed a visual feedback control system that uses images of the microchannel to control the position of a microbead, and confirmed that precise positioning of a particle could be performed automatically.
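The abstract above describes visual feedback control of a bead's position by varying the driving pressure and direction of the peristaltic micropump. Below is a minimal proportional-control sketch of that loop; measure_bead_position and set_pump_pressure are hypothetical interfaces standing in for the image processing and the pneumatic driver, and the gains and limits are assumed values.

```python
# Minimal proportional-control sketch for bead positioning (hypothetical I/O).
# measure_bead_position() and set_pump_pressure() stand in for the image-based
# measurement and the pneumatic driver, which are not specified in the abstract.

KP = 2.0           # proportional gain [kPa per pixel] (assumed)
P_MAX = 60.0       # pressure limit of the pump [kPa] (assumed)
TOLERANCE = 2.0    # acceptable positioning error [pixels] (assumed)

def position_bead(target_x, measure_bead_position, set_pump_pressure):
    while True:
        error = target_x - measure_bead_position()   # along the channel axis
        if abs(error) < TOLERANCE:
            set_pump_pressure(0.0)                   # stop pumping once on target
            break
        # The sign of the error selects the peristalsis direction and its
        # magnitude sets the driving pressure, clipped to the pump limit.
        pressure = max(-P_MAX, min(P_MAX, KP * error))
        set_pump_pressure(pressure)
```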
-
Makoto SAITO, Yoko YAMANISHI, Shinya SAKUMA
Session ID: 1A1-S03
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Perfusion cell culture is an important technique for long-term culture in which the medium is exchanged continuously. Remarkably, microfluidic perfusion provides advantages such as a stable environment, low sample consumption, and observability under a microscope, which allows the observation of cell-cell communication. However, it is challenging to introduce tiny samples into, or retrieve them from, perfusion systems due to the large chamber for culture medium placed outside the chip. Here, we propose an on-chip perfusion system utilizing a piezo-driven membrane pump. In this method, we employ asymmetric flow resistors that behave like diodes by generating different vortices depending on the flow direction, leading to direction-dependent flow resistance. This component can rectify the oscillatory flow of the membrane pump, allowing perfusion to be generated in a microfluidic circuit. Using this method, we introduced tiny samples of <100 μl and generated perfusion. These results indicate that our method can manipulate tiny samples, such as floating cells, to monitor cell-cell communication.
-
Kota TAIMA, Yoshiyuki YOKOYAMA, Takeshi HAYAKAWA
Session ID: 1A1-S04
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We evaluated nanoimprint lithography (NIL) for the fabrication of nanoscale gel robots. In recent years, microrobots have attracted attention because they can be used for micromanipulation. Our group has proposed microgel robots made of thermoresponsive hydrogel that can be assembled and disassembled. However, it is difficult to fabricate such microgel robots at the scale of several micrometers, which would enable cell manipulation, with the previous fabrication process using photolithography. In this paper, we fabricate gel patterns using NIL and investigate its potential for the fabrication of nanoscale gel robots. We succeeded in fabricating 80 μm circular patterns by NIL.
-
Kanon HAMA, Hinako SATO, Yoshiyuki YOKOYAMA, Takeshi HAYAKAWA
Session ID: 1A1-S05
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We designed and actuated a biomimetic micro-gel robot to understand the motion of microorganisms. We fabricated a micro-gel robot with two flagella to reproduce the motion of Chlamydomonas, which is known as a model microorganism. The micro-gel robot is made of temperature-responsive hydrogel (PNIPAAm) mixed with a light-absorbing material and can be driven by light irradiation. We determined the design values of the robot considering the geometric features of Chlamydomonas and introduced a difference in the concentration of the light absorber at the roots of the flagella to realize the breaststroke motion of Chlamydomonas. We successfully actuated the micro-gel robot and compared the motion of its flagella to that of Chlamydomonas. Moreover, we succeeded in observing and comparing the flow around the micro-gel robot and around Chlamydomonas.
-
Takumi KIYOTA, Taro TOYOTA, Kazuaki NAGAYAMA, Kaoru UESUGI
Session ID: 1A1-S08
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Recently, molecular robots that sense external information and move like cells have been studied. Among these, the amoeba-type molecular robot uses liposomes, which are composed of a lipid bilayer, as the chassis. The amoeba-type molecular robot encapsulates molecular motors (kinesin) and microtubules, and the actuation of the molecular motors pushes and deforms the chassis. If we can predict the push force required to deform the liposomes, we can evaluate the amount of molecular motors needed inside the liposomes. In this study, we evaluated the mechanical properties (elasticity and viscosity) of liposomes with different membrane compositions by the micropipette aspiration method. As a result, liposome elasticity increased when the liposome membrane contained more POPC. Furthermore, when the liposome membrane did not contain cholesterol, the elasticity of the liposomes became low. These results indicate that, in addition to cholesterol, the phospholipid composition of the membrane can control the elasticity of liposomes.
-
Atsushi EDA, Hiromasa OKU
Session ID: 1A1-S09
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Brain research is being conducted around the world. However, the mechanism by which simple neuronal activities are combined into a network to realize sophisticated brain functions remains largely unexplored. For this reason, research in the field of neuroscience is being conducted on C. elegans, which has an extremely simple brain and whose entire neural network has been completely elucidated. Although C. elegans is a relatively simple organism, it shares many essential biological characteristics with humans. Understanding the biology of the nematode C. elegans will develop into an understanding of humans. In the study of the nervous system, recent years have seen rapid progress in a field called optogenetics, in which light can be used to stimulate the nervous system through genetic manipulation. Therefore, in this study, we constructed a low-latency and high-speed microprojection system for manipulating neural activity in the nervous system of the nematode C. elegans.
-
Kei HARADA, Hirotaka SUGIURA, Fumihito ARAI
Session ID: 1A1-S10
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The development of a cell isolation tool that can retrieve a neutrophil from human blood vessels within a Human Liver Bud Organoid (HLBO) transplanted into the mouse brain is desired. The conditions for inserting a cell isolation tool into the HLBO through the mouse cranial window must be clarified. We assumed that the success or failure of inserting a micropipette into biological tissue is determined by multiple factors, such as the mechanical properties of the object to be inserted, the shape of the micropipette tip, and the impulse during insertion. To characterize the performance of micropipette insertion, we fabricated an artificial transparent gel tissue whose mechanical properties were similar to those of living organoid tissue. The mechanical interaction between the micropipette and the gel before and after insertion was then evaluated using a piezoelectric impact drive mechanism (IDM). The micropipette, fabricated from a thin-walled glass capillary, was fixed to and driven by the IDM. We used high-speed vision to analyze quantities such as the average impact force and speed during micropipette insertion. The results suggest that there may be a threshold of the impulsive force generated at the pipette tip that determines the success or failure of insertion, and that selecting an appropriate pipette tip shape is effective for insertion.
-
Kazusa Otani, Hirotaka Sugiura, Shiro Watanabe, Turan Bilal, Satoshi A ...
Session ID: 1A1-T01
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper presents a three-dimensional oocyte manipulation system for the two-electrode voltage clamp (TEVC) experiment. We focus on contact and penetration detection during the process of inserting a capillary into oocytes. Instead of conventional methods such as contact detection using backward-difference information of images and penetration detection using a QCR force sensor, we propose a method that tracks feature points in the cell region by optical flow. By focusing on the displacement of each feature point in the X-direction (the capillary insertion direction), we found that signals can be obtained at the timing of contact or penetration. We also demonstrated that the proposed method is superior to the QCR force sensor in terms of contact detection, low-noise signals, and multi-point information.
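The abstract above detects contact and penetration by tracking feature points in the cell region with optical flow and monitoring their X-displacement. A minimal sketch of that tracking step with OpenCV's Lucas-Kanade optical flow is given below; the video source, feature-detection parameters, and any detection threshold are placeholders.

```python
import cv2
import numpy as np

def mean_x_displacement(prev_gray, curr_gray, prev_pts):
    """Track feature points between frames and return their mean X-displacement."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    if not good.any():
        return 0.0, prev_pts
    dx = curr_pts[good, 0, 0] - prev_pts[good, 0, 0]   # X = insertion direction
    return float(dx.mean()), curr_pts[good].reshape(-1, 1, 2)

# Usage sketch (placeholder video file and parameters):
# cap = cv2.VideoCapture("oocyte.avi")
# prev_gray = cv2.cvtColor(cap.read()[1], cv2.COLOR_BGR2GRAY)
# prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
#                                    qualityLevel=0.01, minDistance=5)
# ... a step change in the mean X-displacement signals contact or penetration.
```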
-
Hinako YOSHIMURA, Chao-Shin HSU, Masakiyo TAKAHASHI, Yingzhe WANG, Tak ...
Session ID: 1A1-T02
Published: 2024
Released on J-STAGE: December 25, 2024
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, advances have been made in the fabrication of micro grippers for manipulating micro-objects, and research has also been conducted on selecting molecular artificial muscles as driving sources that can be easily integrated into complex microstructures. This study aims to take the first step towards extending the operation of micro grippers driven by molecular artificial muscles, which have previously been limited to planar motion, by prototyping and evaluating a micro gripper that operates in the z-axis direction. A new experimental setup was constructed to observe the movement of the micro gripper in the z-axis direction, and displacement by the molecular artificial muscles of the micro gripper fabricated using photolithography, one of the microfabrication techniques, was confirmed.