-
Ryohei Ueda, Takashi OGURA, Shigeru KANZAKI, Kimitoshi YAMAZAKI, Masay ...
Article type: Article
Session ID: 2A1-D17
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Humanoid robots need to deal with a variety of objects, including deformable ones, to assist human activities in daily life. In this paper, we present a method to evaluate the stability of a mat-holding motion using deformable-object simulation. The key of our method is to run simulation steps within motion planning in order to deform the flexible object.
View full abstract
-
Toshiaki MAKI, Shunichi NOZAWA, Shigeru KANZAKI, Kei OKADA, Masayuki I ...
Article type: Article
Session ID: 2A1-D18
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Humanoids are expected to expand their playing field outside the laboratory. To widen their appeal, environment-contact tasks such as door opening and object transport must be accomplished. Previous research on environment-contact motions has often been premised on known physical parameters of the target objects. However, to manipulate unknown objects in unknown environments, humanoids will need to acquire a physical model of the target themselves. In this paper, we first categorize the model estimations required for unknown-object manipulation. We then estimate the physical model of a door offline, and using this model, we realize a door-opening motion with HRP2-JSK.
View full abstract
-
Mitsuharu KOJIMA, Kei OKADA, Masayuki INABA
Article type: Article
Session ID: 2A1-D19
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Methods for a daily assistive humanoid robot to manipulate and recognize objects incorporating joints are presented. To provide daily assistance, humanoid robots must use objects incorporating joints, such as some furniture and tools. We have been developing an integrated humanoid recognition and manipulation system for objects and tools in the real world, and here we extend it to objects incorporating joints. We present three key techniques to recognize and manipulate objects with rotational and linear joints: 1) knowledge description for manipulation and recognition of these objects, 2) a motion planning method to manipulate them, and 3) a recognition method closely tied to the manipulation knowledge. Finally, a daily assistive task experiment in the real world using these elements is shown.
View full abstract
-
Satoru TOKUTSU, Kei OKADA, Masayuki INABA
Article type: Article
Session ID: 2A1-D20
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
For daily assistive robots sharing household space with human beings, it is important to recognize daily life sounds and to select appropriate behaviors autonomously. Using auditory information has benefits, for example, in reasoning about unseen situations. In this paper, we examine an algorithm for daily life sound recognition for use in several scenes. Through this examination, we propose a framework for using daily life sound recognition in a daily assistive robot system.
View full abstract
-
Kei OKADA, Mitsuharu KOJIMA, Yuto MORI, [in Japanese], Masayuki INABA
Article type: Article
Session ID: 2A1-D21
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In order to grasp unknown objects, recognizing an arbitrary object with an abstracted shape model seems to be an efficient approach. This paper describes visual recognition of a cylinder primitive model using an adaptive shape model and 3D feature points.
View full abstract
-
Ryusuke UEKI, Mitsuharu KOJIMA, Hiroaki YAGUCHI, Kei OKADA, Masayuki I ...
Article type: Article
Session ID: 2A1-D22
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Methods for a humanoid robot to recognize the translational and rotary motion of a 3D object and chase it are presented. By using high-speed visual processing and filtering techniques such as the Kalman filter and the particle filter, we can track more features of a 3D object effectively and estimate its position and posture clearly. The tracking system will enable a humanoid robot to act quickly in real time.
View full abstract
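As a rough illustration of the filtering step mentioned above, here is a minimal linear Kalman filter tracking one image coordinate under a constant-velocity model. This is a generic textbook sketch, not the authors' implementation; the measurement sequence and noise settings are invented for the example.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: covariance, z: measurement."""
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model for one image coordinate: state = [position, velocity]
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])           # only position is observed
Q = 1e-3 * np.eye(2)                 # process noise
R = np.array([[0.5]])                # measurement noise

x = np.array([0.0, 0.0])
P = np.eye(2)
# Noisy observations of a feature moving at roughly 1 px/frame
for z in [1.1, 1.9, 3.2, 4.0, 5.1]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)  # estimated [position, velocity]
```

The same predict/update structure carries over to tracking many features at once; a particle filter replaces the Gaussian assumption with a weighted sample set when the motion or observation model is nonlinear.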
-
Shigeki SUGANO, Hiroyasu IWATA, Taisuke SUGAIWA
Article type: Article
Session ID: 2A1-D23
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The appearance of a human-symbiotic robot has to be designed from the point of view of human-friendliness. In this paper, the appearance design of the human-symbiotic robot TWENDY-ONE, which realizes human-friendliness together with design requirements for safety and workability, is introduced. The cover of TWENDY-ONE is rounded overall and coated with special coatings that feel soft to humans, so that a human can touch TWENDY-ONE without being conscious of a feeling of machinery. On the other hand, part of the inner body is exposed to preserve the movable range of the joints and is colored metallic red, so the appearance of TWENDY-ONE still retains a mechanical look. This design method conveys a neutral feeling between a mechanical and a human-like appearance and satisfies the design requirements for safety and workability.
View full abstract
-
Ee Sian NEO, Takeshi SAKAGUCHI, Kazuhito YOKOI
Article type: Article
Session ID: 2A1-D24
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Our research group has been developing on-line behavioral operation technologies that enable humanoid robots to perform tasks in human environments, integrating speech recognition, object recognition using 3D vision, and online whole-body motion generation. This paper tackles this integration problem by addressing the representation of action knowledge that facilitates natural language instruction for tasks in indoor human environments. We propose a lexicon of basic actions and behaviors in this preliminary attempt to construct a reliable and flexible natural language instruction system. We describe the implementation of the proposed online behavioral operation system on our humanoid robot HRP-2, which can detect the direction of a speaker from within 2 meters and receive natural language instructions from the user through microphone arrays connected to a speech recognition embedded system on board the robot.
View full abstract
-
Kazuki UCHIDA, Akira TORIGE, Takeshi KATURA
Article type: Article
Session ID: 2A1-E02
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A camera is mounted on an autonomous outdoor mobile robot to take pictures of the road, and the robot's heading is estimated from the direction of the road blocks in the images. The self-position estimated by odometry is then corrected using this estimated heading.
View full abstract
-
Shin'ya OKAZAKI, Takayuki TANAKA, Shun'ichi KANEKO, Akihiko ...
Article type: Article
Session ID: 2A1-E03
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This study aims to obtain accurate visual information on irregular ground. We use visual information obtained by stereo measurement, and the disparity error distribution can be approximated by a normal distribution. First, we examine the relationship between the disparity error parameters and the camera vibration parameters. Next, we define an existence probability using the disparity error parameters and use it for position estimation. Specifically, we obtain visual information on irregular ground close to that obtained without vibration through the relationship between the camera vibration parameters and the error parameters.
View full abstract
-
Tomohiro UCHIMOTO, Sho'ji SUZUKI, Hitoshi MATSUBARA
Article type: Article
Session ID: 2A1-E04
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The purpose of this research is the development of localization for legged mobile robots. We focus on localization from images captured by a camera on the robot. However, images captured by a legged robot are unstable, and matching them is difficult, so we propose a robust matching method using a Support Vector Machine (SVM). We chose color histograms and edge histograms as image features, and the robot's position in the environment is estimated from the SVMs. In this paper, we evaluate the proposed localization method on an image sequence taken in a real environment by a four-legged robot.
View full abstract
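To make the histogram-feature idea above concrete, here is a minimal sketch: color histograms computed from synthetic images of two "places", classified by a tiny linear SVM trained with subgradient descent on the hinge loss. The data, hyperparameters, and the simple SVM trainer are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def color_histogram(img, bins=4):
    """Per-channel intensity histogram, concatenated and normalized (a simple image feature)."""
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(img.shape[-1])]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Minimal linear SVM via subgradient descent on the hinge loss; labels y in {-1, +1}."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # margin violated: push toward the sample
                w += lr * (yi * xi - lam * w); b += lr * yi
            else:                            # only regularize
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(0)
# Two synthetic "places": bluish scenes vs reddish scenes (8x8 RGB images)
place_a = [rng.integers(0, 100, (8, 8, 3)) + np.array([0, 0, 150]) for _ in range(10)]
place_b = [rng.integers(0, 100, (8, 8, 3)) + np.array([150, 0, 0]) for _ in range(10)]
X = np.array([color_histogram(im) for im in place_a + place_b])
y = np.array([1] * 10 + [-1] * 10)
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
print((preds == y).mean())  # accuracy on this clearly separable data
```

In practice a robot would train one such classifier per place and pick the position whose classifier responds most strongly to the current view.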
-
Koshiro YAMAMOTO, Jun MIURA
Article type: Article
Session ID: 2A1-E05
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper presents a view-based localization method for vehicles and mobile robots in large-scale outdoor environments. The method works in two phases. In the first, learning phase, we acquire an image sequence while moving along a route by car. In the second, localization phase, we perform localization by comparing the input image with the learned images. An important problem in view-based outdoor localization is the change of object views due to changes of weather and seasons. Our method copes with this problem by first recognizing objects while considering such view changes and then comparing the recognition results of the learned and input images. We use an SVM (support vector machine) for object recognition and localization. We then develop a probabilistic localization method that considers the history of past movement and the uncertainty of recognition. Using a state transition model and a probabilistic model, we estimate the probability distribution of the car's position.
View full abstract
-
Hiroaki Masuzawa, Jun Miura
Article type: Article
Session ID: 2A1-E06
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a method of acquiring a geometric structure of the environment and the positions of important objects (called environment information summarization) in indoor environments. For efficient sharing of environment information between a robot and a user, it is effective that the robot extracts and provides only an important part of the information to the user. It is also important to make a plan for the robot to effectively collect necessary information. We use a SLAM method to reconstruct the geometric structure of the environment. We use a color histogram comparison and an edge pattern matching to recognize objects. For efficient object recognition, we predict the data to be obtained by the observation of each object candidate at a nearer position, and determine whether to observe the candidate again or not. The environment information summarization using this observation planning is faster than a simple method which tries to recognize every object candidate.
View full abstract
-
Tomoya ONISHI, Kazunori UMEDA
Article type: Article
Session ID: 2A1-E07
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a method for speeding up an indoor self-localization system. Infrared LEDs are used as landmarks; they are invisible and thus do not irritate humans. They are set at known positions, a CCD camera on a mobile robot observes them, and self-localization is carried out. A method to obtain the position in the two-dimensional plane and the orientation from three or more LEDs is formulated, applying the nonlinear least squares method. Experiments verify stable self-localization when the robot moves at a speed of 50 centimeters per second.
View full abstract
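The "position and orientation from three or more landmarks via nonlinear least squares" step can be sketched as a Gauss-Newton solver over bearing residuals. This is a generic formulation under assumed bearing-only observations, not the paper's exact formulation; the landmark layout and poses are invented.

```python
import numpy as np

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def localize(landmarks, bearings, guess, iters=20):
    """Estimate (x, y, theta) from bearings to known landmarks
    by Gauss-Newton nonlinear least squares."""
    p = np.array(guess, float)
    for _ in range(iters):
        u = landmarks[:, 0] - p[0]
        v = landmarks[:, 1] - p[1]
        d2 = u**2 + v**2
        r = wrap(np.arctan2(v, u) - p[2] - bearings)          # bearing residuals
        # Jacobian of each residual w.r.t. (x, y, theta)
        J = np.column_stack([v / d2, -u / d2, -np.ones(len(r))])
        p += np.linalg.solve(J.T @ J, -J.T @ r)               # Gauss-Newton step
    return p

# Three LED landmarks at known 2D positions (meters)
L = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
true = np.array([1.0, 1.0, 0.3])                              # ground-truth pose
z = wrap(np.arctan2(L[:, 1] - true[1], L[:, 0] - true[0]) - true[2])
est = localize(L, z, guess=[0.5, 0.5, 0.0])
print(np.round(est, 3))  # should recover the true pose from noiseless bearings
```

With three landmarks the system is exactly determined; additional LEDs overdetermine it and the same solver averages out measurement noise.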
-
Masayuki NAKATSUKA, Kazuhiro SHIMONOMURA, Kunihiro MORI, Kazuo ISHII, ...
Article type: Article
Session ID: 2A1-E08
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We designed a low-power, compact binocular robotic vision system. The system consists of two silicon retinas and FPGA circuits, and it can calculate depth and velocity in real time. The computation algorithm we developed is inspired by the hierarchical architecture of the neuronal network of the primary visual cortex. We applied the system to visual navigation of a mobile robot in a real environment.
View full abstract
-
Ryuugo MOCHIZUKI, Kazuo ISHII
Article type: Article
Session ID: 2A1-E09
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Creatures usually perceive their environment through their nervous systems and decide their motions accordingly; for example, they detect obstacles to avoid them and detect landmarks to recognize their positions. Visual information is among the most important sources of information for creatures, and it is also important for robots. In this research we use visual information to operate an autonomous mobile robot. Two different kinds of image processing are executed: road detection, for recognizing the places the robot can pass, and landmark detection, for knowing the position on the desired route. We conducted experiments using these methods for robot control.
View full abstract
-
Masahiko SUZUKI, Masashi OGASAWARA, Taro SUZUKI, Yasuharu KUNII
Article type: Article
Session ID: 2A1-E10
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Environment data is the most important information for the movement of a mobile robot, and it has much uncertainty because of measurement and odometry errors. A mobile robot therefore has to acquire environmental information frequently during its travel, and landmarks and their positions provide some of the most important information. However, such systems have high computational cost and are not robust enough against natural objects. In this paper, the proposed measurement method, visual tracking and measurement of landmarks using the Mean-shift algorithm, is introduced, applied to our rover testbed, and evaluated with experimental results of the proposed Command Path Compensation method, which adapts the operator's path command to the actual environment.
View full abstract
-
Atsushi SANADA, Kazuo ISHII, Tetsuya YAGI
Article type: Article
Session ID: 2A1-E11
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Research on autonomous systems for robots is advancing, and various sensors are used to acquire surrounding information. Vision is especially important, as much information about an object, including its color, size, and motion, can be acquired from it; however, vision systems tend to become complicated. In this work, line tracing was performed using a silicon retina camera.
View full abstract
-
Kengo ARAI, Yoshinobu ANDO, Makoto MIZUKAWA
Article type: Article
Session ID: 2A1-E12
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes the recognition of LED-type pedestrian signals. In recent years, LED-type signals have begun to spread, and being able to recognize only conventional signals is not enough for outdoor use. We therefore aim to realize a system that can judge the signals needed by an autonomous robot outdoors, and to recognize LED-type signals.
View full abstract
-
Ryohei Ozawa, Mamoru Minami
Article type: Article
Session ID: 2A1-E13
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The paper proposes a method of allocating cognitive resources for real-time multiple-object recognition in robot vision. For human beings, and even for high-performance computers, the number of objects recognizable at a time is limited, since cognitive and computational resources are limited. For robot vision, real-time distribution of computational resources is effective for recognizing multiple objects in a dynamically changing environment. In the proposed method, recognition is performed by model-based matching using a Genetic Algorithm (GA): positions of object models are represented by the genes, and the individuals of the GA are allocated for recognizing multiple landmarks efficiently depending on the situation and the priority of each object. The proposed method improves the recognition rate for multiple objects.
View full abstract
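The GA-based model matching described above can be illustrated with a toy version: individuals encode a candidate model position, fitness is the matching score at that position, and selection, crossover, and mutation concentrate the population on the object. The fitness function, target position, and GA parameters are all invented for this sketch.

```python
import random

def fitness(x, target=73):
    """Toy matching score: higher when the candidate position x is closer to the object."""
    return -abs(x - target)

def ga_search(pop_size=30, span=(0, 255), gens=40, seed=1):
    """Minimal genetic algorithm over integer positions (model-based matching sketch)."""
    rng = random.Random(seed)
    pop = [rng.randint(*span) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]                  # selection: keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) // 2                      # crossover: midpoint of two parents
            if rng.random() < 0.3:                    # mutation: small random jitter
                child = min(span[1], max(span[0], child + rng.randint(-8, 8)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print(ga_search())  # the population converges near the target position
```

Allocating resources among multiple objects, as in the paper, would amount to splitting the population into sub-populations whose sizes depend on each object's priority.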
-
Hiroaki YAGUCHI, Zaoputra NIKOLAUS, Naotaka HATAO, Kimitoshi YAMAZAKI, ...
Article type: Article
Session ID: 2A1-E14
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In view-based navigation, view sequences are conventionally constructed considering only the appearance of images. This approach works only in limited situations because it takes no account of the structure of the environment or of camera poses under 3D camera motion. In this paper, we construct a multi-sensor system using an omnidirectional camera, a motion sensor, and a laser range finder, and propose a method of constructing view sequences that considers the 3D environment and camera poses.
View full abstract
-
Isaku NAGAI
Article type: Article
Session ID: 2A1-E15
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Visual odometry by tracking floor images can be used for localizing mobile robots that run on slippery ground or have no wheels. In this method, the cumulative error increases when measuring motion with a larger rotational radius. To improve the measurement precision of translation and rotation, it is important to build an error model of the method. In this paper, an error model is proposed and its appropriateness is assessed by comparison with experimental results.
View full abstract
-
Yoshiteru Matusita, Jun Miura
Article type: Article
Session ID: 2A1-E16
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a method of simultaneously estimating the road region and the ego-motion for outdoor mobile robots. Temporal integration of sensor data is effective for robust estimation of the road region. To integrate sensor data obtained at multiple places, the robot's ego-motion has to be estimated simultaneously. It is also necessary to use multiple sensors for reliable estimation, because road boundary features from any single sensor are not always available. In addition, to cope with changes of the road type, we prepare multiple road models for estimation. We implement this multi-sensor-based, simultaneous estimation of road region and ego-motion using a particle filter, and we devise a technique for generating new particles to cope with gradual road type changes. The proposed method has been successfully applied to autonomous navigation in various road scenes.
View full abstract
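The particle filter at the core of the estimation above follows the standard bootstrap cycle: predict each particle with a motion model, weight it by the observation likelihood, and resample. The sketch below runs that cycle on a simple 1D state; the state, noise levels, and observation sequence are invented stand-ins for the paper's road/ego-motion state.

```python
import numpy as np

def particle_filter(observations, n=500, motion=1.0, sigma_q=0.3, sigma_r=0.5, seed=0):
    """Bootstrap particle filter for a 1D state:
    predict with a motion model, weight by observation likelihood, resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 2.0, n)   # broad prior over the state
    estimates = []
    for z in observations:
        # Predict: apply the motion model plus process noise
        particles = particles + motion + rng.normal(0, sigma_q, n)
        # Weight: Gaussian likelihood of the observation given each particle
        w = np.exp(-0.5 * ((z - particles) / sigma_r) ** 2)
        w /= w.sum()
        # Resample: draw particles in proportion to their weights
        particles = rng.choice(particles, size=n, p=w)
        estimates.append(particles.mean())
    return estimates

obs = [1.1, 2.1, 3.1, 4.1, 5.1]           # noisy readings of a state advancing by 1 per step
est = particle_filter(obs)
print([round(e, 1) for e in est])          # posterior mean tracks the state
```

The paper's extra twist, injecting fresh particles under alternative road models, corresponds to replacing a fraction of the resampled set with samples drawn from the other models' priors.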
-
Toyomi FUJITA
Article type: Article
Session ID: 2A1-E17
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a visual function for a robot that engages in an assembly task cooperatively with partner robots. In this study, we suppose a typical case in cooperative work: one robot observes the hand movement of another robot that is going to manipulate an object, in order to recognize how it will grasp the target. We present a basic method by which the observing robot detects feature points on the hand of the manipulating robot from image sequences taken by a camera, calculates the 3D position of the hand from those feature points based on the stereo vision principle, and then detects the movement of the hand. The Harris operator is used for detecting feature points on the hand, and corresponding points are determined under a disparity smoothness constraint.
View full abstract
-
Tohru MORIYAMA
Article type: Article
Session ID: 2A1-E18
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The antennae of pill bugs were extended by fitting Teflon tubes over their tips. Each individual in the test group (n=10) was placed at the top of a staircase of five steps and required to climb down. The drop from each step to the next was 5, 10, 15, 17, and 25 mm, in order from the first to the fifth step. The median of the maximum reachable step was 4. Two further groups were tested in the same apparatus. In the free-walk group (n=14), the antennae were also fitted with Teflon tubes, but the animals were allowed to move freely in an arena for ten minutes before being placed on the stairs. In the control group (n=14), the antennae were not covered with tubes. The medians of the maximum reachable step were 2 and 2.5, respectively. Statistical tests showed that the value for the test group was significantly larger than those of the free-walk and control groups. This result suggests that pill bugs perceive distance with reference to the length of their antennae and can adjust this perception even when the antennae are artificially extended.
View full abstract
-
Wataru TAKANO, Dana KULIC, Yoshihiko NAKAMURA
Article type: Article
Session ID: 2A1-E19
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose a hierarchical model incorporating motion time series data, motion symbols and words. This paper describes a linguistic space which represents a network of the words linked to full body motions. The linguistic space is constructed by using the dissimilarity among words, which can be computed from the association probability of the words and motion symbols. In the linguistic space, words that are semantically similar are located close to one another and included in the same cluster. We validate our approach by constructing a motion symbol space based on the dissimilarity between words, which improves discrimination ability when compared to a motion space constructed based on dissimilarities between motions alone.
View full abstract
-
Atsushi FUKUDA, Koh HOSODA, Takeshi ANMA
Article type: Article
Session ID: 2A1-E20
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Haptic object recognition by a robot hand is essential for adapting to human environment because of its role in multimodal sensing and detecting affordance. An intrinsic difficulty of such recognition is due to its locality. Tactile information is local compared to vision, and slight difference in contact condition dramatically changes the sense. We hypothesize that the structure of the human hand is the key for avoiding the difficulty. The Bionic Hand with the adaptive design of the human hand, covered with soft skin with multiple tactile receptors, is developed. The hand performs robust haptic recognition through repetitive grasping by virtue of its adaptive design.
View full abstract
-
Hidenobu SUMIOKA, Yuichiro YOSHIKAWA, Minoru ASADA
Article type: Article
Session ID: 2A1-E21
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The development of joint-attention-related actions, such as gaze following and gaze alternation, is one of the mysteries of infant development. Previous synthetic studies have proposed learning methods for gaze following without explicit instruction as a first step toward understanding such development. However, the robot was given a priori knowledge about which pairs of sensory information and action should be associated. In this paper, we propose a learning mechanism that automatically and iteratively acquires social behavior by detecting and reproducing the causality inherent in interaction with a caregiver, without such knowledge. A measure of causality based on transfer entropy [1] is used to detect appropriate pairs of variables for acquiring social actions, and the reproduction of detected causality promotes further causality. In a computer simulation of human-robot interaction, we examine what kinds of joint-attention-related behavior can be acquired sequentially by changing the behaviors of the caregiver agent. The result indicates that the actions are acquired in an order similar to infant development.
View full abstract
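Transfer entropy, the causality measure cited above, quantifies how much knowing the past of X improves prediction of Y beyond Y's own past. A plug-in estimator for binary time series is short enough to sketch; the driven/independent test signals below are invented for illustration, and this is the standard definition rather than the paper's specific implementation.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for discrete sequences,
    with history length 1: sum p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))     # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))           # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))            # (y_{t+1}, y_t)
    singles_y = Counter(y[:-1])                       # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]              # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y_driven = np.roll(x, 1)               # y copies x with a one-step delay
y_indep = rng.integers(0, 2, 5000)     # y unrelated to x
print(round(transfer_entropy(x, y_driven), 2))  # near 1 bit: x fully predicts y
print(round(transfer_entropy(x, y_indep), 2))   # near 0 bits: no influence
```

Scanning TE over all sensor/action variable pairs, as the paper does, picks out the pairs whose coupling is strong enough to be worth reproducing as a social action.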
-
Hirotoshi KUNORI, Dongheui LEE, Yoshihiko NAKAMURA
Article type: Article
Session ID: 2A1-E22
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Since humanoid robots have body structures similar to humans, they are expected to perform tasks in place of humans. In order to operate successfully in daily life, they need to perform a variety of tool-use manipulation tasks, and understanding the relation between tool usage and whole-body motion is significant for such manipulation. In this paper, we design a tool-use motion model that contains a tool manipulation model and a body motion model, and we propose a method that enables a robot to associate whole-body motion with knowledge of tools by adopting the mimesis method from partial observation.
View full abstract
-
Mai HIKITA, Sawa FUKE, Masaki OGINO, Minoru ASADA
Article type: Article
Session ID: 2A1-E23
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Body representation is one of the most fundamental issues for physical agents (humans, primates, and robots) to perform various kinds of tasks. This paper proposes a method that constructs a cross-modal body representation from vision, touch, and proprioception. Tactile sensation, when the robot touches something, triggers the construction of the visual receptive field for body parts, which can be found by visual attention based on a saliency map. Simultaneously, proprioceptive information is associated with this visual receptive field to realize the cross-modal body representation. A computer simulation result comparable to the activities of parietal neurons found in monkeys is given, and future issues are discussed.
View full abstract
-
Ayako WATANABE, Masaki OGINO, Minoru ASADA
Article type: Article
Session ID: 2A1-E24
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a system that models the interaction in Applied Behavior Analysis (ABA). ABA is an educational method, akin to reinforcement learning, for helping autistic children acquire communication abilities: when an autistic child looks at the caregiver's face, he or she receives a reward from the caregiver. By modeling this interaction and implementing the model on a robot, the robot acquires the ability to look at a human face without initial knowledge about faces.
View full abstract
-
Shinya OHKUBO, Katsuya YASUMOTO
Article type: Article
Session ID: 2A1-F01
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this research, we attempt to measure the local birefringence of a sample quantitatively. In this paper, a scanning near-field ellipsometer system (SNEM) with an apertureless probe has been built. The method of observing birefringence distributions and measurement results for samples (an ethylene vinyl acetate film and a polyimide rubbing film) are described. These results suggest that the apparatus developed in this study will be valuable for anisotropic samples with microstructure and for applications in many fields.
View full abstract
-
Katsuya YASUMOTO, Shinya OHKUBO
Article type: Article
Session ID: 2A1-F02
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
As nanotechnology develops, evaluation techniques for various minute areas are being developed. In particular, the scanning near-field optical microscope (SNOM) can measure a sample at a resolution beyond the diffraction limit of light, but its resolution is limited by the aperture of the probe tip. In this research, we therefore aim to develop a scanning near-field ellipsometry microscope (SNEM) that is not affected by the probe aperture. In this paper, the control software for the SNEM was created; as a result, positioning of the probe and angle control of the rotary analyzer were achieved.
View full abstract
-
Akio HIGO, Kazuhiro Takahashi, Muneki Nakada, Yoshiaki Nakano, Hiroyuk ...
Article type: Article
Session ID: 2A1-F03
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have already reported the design and fabrication process of photonic MEMS actuators for optical attenuators integrated with a silicon photonic wire waveguide. This paper presents the tolerance of parylene to HF vapor and a BPM TE-mode simulation of a silicon wire waveguide with a parylene upper cladding, in order to combine the process with conventional photonic IC technology.
View full abstract
-
Hiroya SHIMOHATA, Nobutaka TANAKA, Takuto MAEDA, Satoshi KONISHI
Article type: Article
Session ID: 2A1-F04
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper demonstrates a parallel linear actuator system providing large stepping motion based on the concept of an electrostatically controlled linear actuator system. The developed system employs a commercial precision positioning stage as its driving oscillator, whereas the previous system used a precision piezoactuator. The actuator system provides a large step by exploiting the large stroke of the precision positioning system. Electrodes for the electrostatic clutch mechanism were micromachined on the stage, and the silicon slider achieves 50 μm - 5 mm step motion by the principle of the electrostatically controlled linear actuator system.
View full abstract
-
Makoto INADA, Daisuke HIRATSUKA, Junichi TATAMI, Shoji MARUO
Article type: Article
Session ID: 2A1-F05
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A three-dimensional (3-D) molding process based on microstereolithography is proposed and developed. To make 3-D ceramic microstructures, a 3-D mold made from a photopolymer, covered with ceramic slurry, is thermally decomposed. In our method, a high-concentration slurry (67.9 vol.%) containing submicron SiO2 particles is used to make high-density (99.7%) SiO2 glass. The pyrolysis process of the photopolymer was examined by TG-DTA measurement, and the heating condition was optimized using master decomposition curve theory. The optimized pyrolysis process made it possible to produce 3-D ceramic microstructures such as a rotor model and a fullerene model. Further optimization of the pyrolysis process is needed to make crack-free ceramic 3-D microstructures with high fidelity.
View full abstract
-
Takuya HASEGAWA, Shoji MARUO
Article type: Article
Session ID: 2A1-F06
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A replication technique for three-dimensional (3-D) microstructures produced by two-photon microstereolithography has been developed. In this technique, a 3-D master model produced by two-photon microstereolithography is replicated using a PDMS mold. By adding thin membranes to a master model with closed loops, complex 3-D microstructures such as a table model and a spiral model can be reproduced. In addition, to replicate microstructures smaller than 10 micrometers, a supercritical drying process was introduced; since the surface tension during drying can be eliminated, fragile 3-D master models with thin membranes were successfully fabricated. The membrane-assisted 3-D soft-molding technique makes it possible to reproduce even freely movable microstructures. This 3-D microtransfer molding technique will be applied to mass production of micromachines for biochip technology.
View full abstract
-
Yui SAKUMA, Yoshitake AKIYAMA, Kikuo IWABUCHI, Yoshikatsu AKIYAMA, Mas ...
Article type: Article
Session ID: 2A1-F07
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose a hybrid (biotic-abiotic) robotic system using a living component as a micro driving source. In our previous studies, rat primary cardiomyocytes were employed as a mechanical component that beats autonomously by consuming chemical energy. Here, we utilized insect cells as an alternative cell source to develop an environmentally robust bioactuator, since they tolerate a wider range of culture conditions (e.g. temperature and pH) than mammalian cells. To utilize cardiomyocyte beating effectively as the driving force of a bioactuator, mammalian cell sheets fabricated on temperature-responsive culture surfaces were used. These surfaces are cell-adhesive at 37℃ but reversibly become non-cell-adhesive when the temperature is reduced below 32℃. Since insect cells are routinely cultured at 25℃, we developed novel temperature-responsive culture dishes and propose a novel hybrid robotic system using insect cell sheets.
-
Hiroshi HORIGUCHI, Yoshitake AKIYAMA, Keisuke MORISHIMA
Article type: Article
Session ID: 2A1-F08
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Living cells have a variety of high-performance functions and contract using chemical energy, such as oxygen and nutrients, without electrical energy. We have proposed the novel use of pulsating living cells as a driving source for a micro bioactuator. However, the contractile force of a single biological cell is not enough to actuate a micro robot or mechanical system, so the reconstruction of biological cells has been researched to generate higher contractile force. Here we propose muscle-cell gel structures as an integration of muscle cells: muscle cells cultured in three-dimensional gel structures that serve as scaffolds. In this paper, we fabricated a prototype cardiomyocyte gel structure and confirmed that it is possible to culture cardiomyocytes in the micro gel structures.
-
Masao KABUTO, Takefumi KANDA, Koichi SUZUMORI
Article type: Article
Session ID: 2A1-F09
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this study, a monolithic piezoelectric thin-film actuator was fabricated by a hydrothermal method using an ultrasonic vibration mixing device. Compared with the conventional rotation mixing device used in hydrothermal methods, the ultrasonic vibration mixing device was effective for generating uniform piezoelectric crystallization. In addition, a uniform, micrometer-order-thick piezoelectric film was deposited on a titanium substrate that had a comparatively large, high-aspect-ratio structure. The dimensions of the monolithic piezoelectric actuator were 34 mm in length, 43.6 mm in width, and 2 mm in depth. The maximum measured displacement was 0.1 μm in the X direction at an applied voltage of 0.25 Vp-p.
-
Takayuki HOSHINO, Yuichi HORI, Tomohiro KONNO, Kazuhiko ISHIHARA, Keis ...
Article type: Article
Session ID: 2A1-F10
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this study, we propose using the force generated by cell migration as a new driving force for self-assembly, and we devised a conveying mechanism driven by cells. We evaluated this assumption. First, micro movements of a micro bead on cells cultured on a substrate divided into cell-adhesive and non-adhesive areas showed the existence of a driving force on the border line. Next, we manufactured a line-and-space patterned device to drive a minute structure with patterned cells. As a result, this device successfully drove a minute structure. The observed force generated by patterned cells could serve as a driving force for realizing self-assembly using cells.
-
Shutaro SAITO, Yuichi KATOH, Hisashi KOKUBO, Masayoshi WATANABE, Shoji ...
Article type: Article
Session ID: 2A1-F11
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We developed a photosensitive ionic gel for making polymer microactuators driven at low voltage in atmosphere. A prototype microactuator using the photosensitive ionic gel was produced and driven by applying voltages below ±1.5 V in atmosphere. The driving performance of the microactuator was examined experimentally. The displacement of the actuator was proportional to the square of the input voltage, the amplitude of the displacement decreased exponentially as the frequency increased, and the generative force of the actuator was proportional to the input voltage. We also found that the integrated value of the current was useful for precise position control. Finally, a microgripper using the ionic gel was developed.
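The reported scaling laws (displacement proportional to voltage squared, amplitude decaying exponentially with frequency, force proportional to voltage) can be sketched as a toy response model; the constants `k_disp`, `tau`, and `k_force` below are illustrative assumptions, not measured values from the paper.

```python
import math

def displacement_um(v_in, freq_hz, k_disp=1.0, tau=0.5):
    """Hypothetical static model of the ionic-gel actuator response:
    displacement grows with the square of the input voltage and its
    amplitude falls off exponentially with drive frequency."""
    return k_disp * v_in ** 2 * math.exp(-tau * freq_hz)

def force_mn(v_in, k_force=1.0):
    """Generative force modeled as proportional to the input voltage."""
    return k_force * v_in

# Doubling the voltage should quadruple the displacement at a fixed frequency.
d1 = displacement_um(0.5, 1.0)
d2 = displacement_um(1.0, 1.0)
print(d2 / d1)  # → 4.0
```

A model like this is what makes the integrated-current position control mentioned in the abstract plausible: a monotonic, memoryless voltage-to-displacement map can be inverted for feedforward control.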
-
Pou LIU, Zhan YANG, Masahiro NAKAJIMA, Toshio FUKUDA, Fumihito ARAI
Article type: Article
Session ID: 2A1-F12
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We report a novel technique for growing nanofilms induced by electric current. A single carbon nanotube (CNT), acting as a conductive nanowire, was suspended between two electrodes and supplied with direct current. As the current flowed through the CNT, a tungsten nanofilm was deposited on the surface of the CNT using tungsten hexacarbonyl as a precursor. The nanofilms were observed with transmission electron microscopy, and their composition was analyzed with energy-dispersive X-ray spectrometry (EDS). The growth mechanism is considered to be that the tungsten hexacarbonyl is broken down by resistance-induced heat, and the tungsten is then deposited on the carbon nanotube.
-
Yuki TOMOE, Yoshitake AKIYAMA, Takayuki HOSHINO, Leiko TERADA, Keisuke ...
Article type: Article
Pages: 2A1-F13_1-2A1-F13_4
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this study, we developed a novel micro electrical stimulation device with three-dimensional (3D) carbon microstructures on a microchip and applied stimulation to skeletal muscle cells cultured on the device. First, we determined the optimum conditions for fabricating 3D carbon microstructures on Pt wirings. Then, 3D carbon microstructures were fabricated on a microchip with a cell-culturing microchamber made of polydimethylsiloxane (PDMS). Primary skeletal muscle cells harvested from newborn rats were cultured on the micro stimulation device, and electrical stimulation was applied to the matured cells using the device.
-
Takeshi SASAKI, Hideki HASHIMOTO
Article type: Article
Session ID: 2A1-F14
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we present a component-based implementation of the observation function of smart environments, which are spaces with distributed and networked sensors and actuators. First, we discuss the component design of the information acquisition function and the information integration function. The information acquisition part consists of sensor components and data processing components, whereas the information integration part is composed of fusion components and database components. The components for sensors such as CCD cameras, laser range finders, and a 3D ultrasonic positioning system, and for position and map servers, are then implemented using RT (Robot Technology) middleware. The observation system is developed based on these modules.
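The sensor → processing → fusion → database data flow described above can be sketched as a minimal component pipeline. This is an illustrative stand-in, not the authors' RT-middleware implementation; the class names and the averaging fusion rule are assumptions.

```python
class SensorComponent:
    """Stands in for a camera / laser range finder / ultrasonic component."""
    def __init__(self, name, readings):
        self.name = name
        self._readings = readings
    def read(self):
        return {"sensor": self.name, "data": self._readings.pop(0)}

class ProcessingComponent:
    """Data-processing stage: here, a trivial 2-D position extractor."""
    def process(self, packet):
        x, y = packet["data"]
        return {"sensor": packet["sensor"], "pos": (x, y)}

class FusionComponent:
    """Integrates per-sensor estimates by averaging positions."""
    def fuse(self, estimates):
        xs = [e["pos"][0] for e in estimates]
        ys = [e["pos"][1] for e in estimates]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

class DatabaseComponent:
    """Position server: stores fused estimates for later queries."""
    def __init__(self):
        self.store = []
    def put(self, pos):
        self.store.append(pos)

# Wire the modules together, mimicking a middleware data flow.
cam = SensorComponent("camera", [(1.0, 2.0)])
lrf = SensorComponent("lrf", [(1.2, 1.8)])
proc, fusion, db = ProcessingComponent(), FusionComponent(), DatabaseComponent()
db.put(fusion.fuse([proc.process(cam.read()), proc.process(lrf.read())]))
print(db.store[0])  # → (1.1, 1.9)
```

The point of the component decomposition is that each stage exposes only a data port, so sensors and fusion strategies can be swapped without touching the rest of the system.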
-
Yasushi Mae, Kotaro Morikawa, Tomohito Takubo, Tatsuo Arai
Article type: Article
Session ID: 2A1-F15
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a real-time action indication system for humans in emergency situations. Immediately after an emergency occurs, people may panic and be unable to take appropriate actions to avoid danger. The proposed system gives instructions for appropriate actions based on recognition of human actions and situations by multiple sensors. The paper presents an experimental prototype system and experiments that simulate the situation immediately after an Earthquake Early Warning. In the experiments, a person is instructed to hide under a table based on recognition of human actions by a vision sensor.
-
Kazuhiro SHIMONOMURA, Tetsuya YAGI
Article type: Article
Session ID: 2A1-F16
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We describe an intelligent vision sensor module for distributed visual computation. The vision sensor module, which consists of a silicon retina and an FPGA circuit, extracts visual features, such as edges and motion direction, from input images in real time. These outputs are collected by a host computer through Ethernet and integrated to extract global information. We consider applying an intelligent vision sensor network consisting of multiple modules to human tracking.
-
Ryuhei SAKURAI, Seong-Oh LEE, Tatsuya OKAMURA, Tatsuya NISHIZAWA, Joo- ...
Article type: Article
Session ID: 2A1-F17
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a method for tracking multiple people in Intelligent Space (iSpace). It is difficult to obtain one complete trajectory from human tracking with computer vision; in many cases, a trajectory splits into fragments because of errors. To solve this problem, we integrated human tracking and face recognition. First, a simple human tracking algorithm for DINDs is proposed. Then, face recognition is applied to provide a probability of identity for each fragment produced by the tracking result. This probability allows us to reconstruct one true trajectory from the fragments.
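The fragment-linking idea above can be sketched as follows. Each fragment carries a per-person identity probability from face recognition; this toy version assigns each fragment to its most probable identity and concatenates fragments in temporal order. The data layout and the hard argmax assignment are illustrative assumptions, not the paper's algorithm.

```python
def reconstruct_trajectories(fragments):
    """Stitch tracking fragments into per-person trajectories.

    Each fragment is (start_time, points, id_probs), where id_probs maps
    a person ID to the probability assigned by face recognition.
    """
    by_person = {}
    for start, points, id_probs in sorted(fragments):
        person = max(id_probs, key=id_probs.get)  # most probable identity
        by_person.setdefault(person, []).extend(points)
    return by_person

frags = [
    (0.0, [(0, 0), (1, 0)], {"alice": 0.9, "bob": 0.1}),
    (2.0, [(2, 0), (3, 0)], {"alice": 0.8, "bob": 0.2}),
    (1.0, [(5, 5)], {"alice": 0.2, "bob": 0.8}),
]
print(reconstruct_trajectories(frags)["alice"])  # → [(0, 0), (1, 0), (2, 0), (3, 0)]
```

A soft version would keep the full probability distribution per fragment and pick the globally most likely assignment, which is closer in spirit to reconstructing "one true trajectory" from ambiguous fragments.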
-
Kota IRIE, Tatsuya OKUNO, Kazunori UMEDA
Article type: Article
Session ID: 2A1-F18
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes an improvement of a method to recognize waving hands using FFT. We constructed an intelligent room that uses gesture recognition as a human-machine interface. Recognition of waving hands from images is a key technology of the system for detecting an operator in the room. The current method cannot recognize slowly waving hands; therefore, we increase the number of sampled frames and add features to solve this problem. The proposed method is verified by simulations and experiments, and it was confirmed that the method can recognize even slowly waving hands.
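The core idea, detecting a periodic intensity change in the waving-frequency band via a Fourier transform of a per-pixel time series, can be sketched as below. The frequency band limits and the use of a single pixel's intensity are illustrative assumptions; increasing the number of sampled frames, as the paper proposes, improves the frequency resolution `fps / n` and thus the reach into slow waving frequencies.

```python
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform magnitudes (no NumPy needed)."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def is_waving(intensity, fps, f_lo=0.5, f_hi=5.0):
    """Flag a pixel as 'waving' if its dominant nonzero-frequency
    component falls inside an assumed hand-waving band."""
    mags = dft_magnitudes(intensity)
    k = max(range(1, len(mags)), key=lambda i: mags[i])  # skip the DC bin
    freq = k * fps / len(intensity)
    return f_lo <= freq <= f_hi

# A 2 Hz intensity oscillation sampled at 30 fps over 60 frames (2 s).
fps, n = 30, 60
signal = [128 + 50 * math.sin(2 * math.pi * 2.0 * t / fps) for t in range(n)]
print(is_waving(signal, fps))  # → True
```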
-
T. Mori, S. Odashima, H. Noguchi, M. Shimosaka, T. Sato
Article type: Article
Session ID: 2A1-F20
Published: June 06, 2008
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a household object recognition system based on computer vision. In our system, detailed images of objects are acquired with four pan-tilt cameras fixed to the ceiling of the room. The object is identified from these images, and the object's position is managed using the recognition result. The system also presents the positions of household objects on the user's request. This paper presents an object recognition method that is robust to changes in object size, angle, and direction in images by using SIFT features and object template images captured from various directions. Our experiment with 41 objects showed a precision of 0.94, a recall of 0.66, and an F-measure of 0.75.
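SIFT-based recognition of this kind typically matches query descriptors against template descriptors with a nearest-neighbor search filtered by Lowe's ratio test. The sketch below shows that filtering step on toy 2-D vectors; real SIFT descriptors are 128-dimensional, and the 0.8 threshold is a common convention, not a value from this paper.

```python
def euclid(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_test_matches(query_desc, template_desc, ratio=0.8):
    """Nearest-neighbor descriptor matching with Lowe's ratio test.

    A query descriptor matches only if its nearest template neighbor is
    clearly closer than the second nearest, which rejects ambiguous
    matches between similar-looking features.
    """
    matches = []
    for qi, q in enumerate(query_desc):
        dists = sorted((euclid(q, t), ti) for ti, t in enumerate(template_desc))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

query = [(0.0, 0.1), (5.0, 5.0)]
templates = [(0.0, 0.0), (3.0, 3.0), (10.0, 10.0)]
print(ratio_test_matches(query, templates))  # → [(0, 0), (1, 1)]
```

Counting such matches against template images taken from many viewpoints is what gives the method its robustness to size, angle, and direction changes.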