-
Noriyuki Kawarazaki, Takashi Shimizu, Tadashi Yoshidome
Article type: Article
Session ID: 1P1-G03
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper presents a study of an automated teller machine designed with color universal design in mind. Color universal design is an approach to designing products and environments so that they are usable by people with visual disabilities, including color blindness. We determine the color arrangement of the automated teller machine's display based on the results of a questionnaire.
-
Shogo NAMATAME, Nobuto MATSUHIRA
Article type: Article
Session ID: 1P1-G04
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
For a service robot to coexist with people, it is desirable that the robot estimate its position robustly without requiring major modifications to the environment. We have developed a localization method that uses standard design elements in the environment, such as floor design patterns and characteristic structures. To increase the robustness of localization, the proposed algorithm selects the most suitable measurement method from multiple measurement methods according to the environment. An experiment in a daily-life environment was carried out, and the validity of the method was demonstrated.
-
Tadashi YOSHIDOME, Noriyuki KAWARAZAKI
Article type: Article
Session ID: 1P1-G05
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have previously presented methods to guide and control a mobile robot in an environment augmented with IC tags. At ROBOMEC2011, we presented a method that moves a mobile robot to a destination by having it descend a potential field written in the IC tags. This paper describes a method to guide a robot using a composite potential field made up of a fixed potential determined by the structure of the environment and a modifiable potential determined by the application. For example, the potential field can limit the territory of a robot at a large event site. Experiments show the effectiveness of this method.
-
Shinji KAWATSUMA, Ryuji MIMURA, Fumihiko KANAYAMA, Koji NAKAI, Hajime ...
Article type: Article
Session ID: 1P1-G06
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
JAEA reconstructed robots for the accident at the Fukushima Daiichi Nuclear Power Plant in order to cope with the confused post-accident situation. Rubble was scattered and temporary cables and hoses were laid in the reactor buildings, so small robots such as reconnaissance robots had to be carried in by operators. JAEA modularized its small robot systems so that operators could carry and reassemble them easily and thereby reduce their exposure dose.
-
Yu KAMIJI, Naoki AKAI, Koichi OZAKI, Chikara ITO, Ryutaro HINO
Article type: Article
Session ID: 1P1-G07
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A preliminary examination of image processing was conducted using existing images of a reactor interior captured through a fiberscope, in order to confirm the applicability of 3D mapping inside a reactor as part of the development of 3D measurement technology. Upright SURF (Speeded Up Robust Features) was used to find corresponding points between two captured images. For images with many similar textures, or with a lack of texture, it was difficult to find corresponding points using SURF alone. By coupling SURF with a Canny edge detector applied to the internal structures, it was found that the 3D structure of rectilinear objects could be measured.
-
Hiroka KANEI, Yusuke CHIBA, Yosuke SAITO, Hironao OKADA, Toshihiro KAM ...
Article type: Article
Session ID: 1P1-G08
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
ITF-1 "YUI" is the 1U size CubeSAT (100mm cube size satellite) developed by University of Tsukuba as the first challenge. It will be launched as a piggyback satellite of GPM mission (planned in early 2014). Even small satellites require self-controlling ability and fail-safe concept in order to work properly in the outer space severe environment. Learning from failure reports of other CubeSATs, we include various idea to gain more robustness toward the successful mission achievement. In this paper, among many design efforts, the mutual computer watching architecture against the failure due to cosmic-lay irradiation, and the newly developed antenna system for secure communication are reported.
-
Yutaka KOMETANI, Kenichi OOTANI, Naoya EZAWA, Taizou KINOSHITA, Syuuzo ...
Article type: Article
Session ID: 1P1-G09
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We are developing a monitoring system for large-scale disaster scenes that uses remotely controlled robots, with multiple robots relaying wireless communication to one another. Each robot is equipped with a laser rangefinder, which allows it to estimate its own position at the disaster scene.
-
Yutaka KOMETANI, Hiroshi ENDO, Takashi SEKIGAMI, Akira MIZUOCHI, Hisas ...
Article type: Article
Session ID: 1P1-G10
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have developed a dual-arm heavy-duty robot system (ASTACO-SoRa) to remove debris and damaged structures. ASTACO-SoRa is equipped with six cameras and various sensors for measuring radiation levels, enabling remotely operated restoration work in radioactive environments at large-scale disaster scenes.
-
Taro SUZUKI, Nobuaki KUBO
Article type: Article
Session ID: 1P1-H01
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes an evaluation of real-time kinematic (RTK) GPS positioning using a single-frequency global navigation satellite system (GNSS) receiver for outdoor mobile robots. Conventional RTK-GPS techniques require a dual-frequency GNSS receiver to resolve the ambiguity of the GNSS carrier-phase observations. In this paper, we use a single-frequency GNSS receiver to compute precise positions for mobile robot control, resolving the integer ambiguities of the carrier-phase observations by using multiple GNSS observations of the L1 signal. From the evaluation test, we conclude that the proposed technique estimates the precise position of the mobile robot more effectively than the conventional technique.
-
Ken WATANABE, Teppei OTA, Mitsunori KITAMURA, Yoshiharu AMANO, Takumi ...
Article type: Article
Session ID: 1P1-H02
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes an attitude measurement method for a small unmanned aerial vehicle (UAV) that combines a GPS gyro with an inertial measurement unit (IMU) through an EKF. Small UAVs are attracting attention as an effective means of collecting aerial information, but conventional attitude measurement methods cannot easily measure the absolute attitude angle with respect to the Earth in every location. GPS observables are interpolated with cubic splines to synchronize time among the three GPS receivers (Master, Slave1, and Slave2). Using the attitude angle estimated by the EKF, the correct solution is searched for among the integer-ambiguity candidates at every epoch. A field experiment shows that the proposed method is effective for estimating the attitude of a small UAV.
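As a sketch of the time-synchronization step described above (the sample data and sampling offsets are hypothetical), each receiver's carrier-phase observable can be resampled onto a common epoch grid with a cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sync_observables(t_common, receiver_logs):
    """Resample each receiver's observable onto common epochs.

    receiver_logs: dict name -> (timestamps [s], observable values),
    where the timestamps may be slightly offset between receivers.
    """
    synced = {}
    for name, (t, phase) in receiver_logs.items():
        spline = CubicSpline(t, phase)      # fit a per-receiver spline
        synced[name] = spline(t_common)     # evaluate at the common epochs
    return synced

# Hypothetical example: three receivers sampled with small clock offsets.
t_master = np.arange(0.0, 10.0, 0.2)
logs = {
    "Master": (t_master,        np.sin(0.3 * t_master)),
    "Slave1": (t_master + 0.03, np.sin(0.3 * (t_master + 0.03))),
    "Slave2": (t_master - 0.02, np.sin(0.3 * (t_master - 0.02))),
}
t_common = np.arange(0.5, 9.5, 0.2)
print({k: v[:3] for k, v in sync_observables(t_common, logs).items()})
```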
-
Takayuki YOKOTA, Yoji KURODA
Article type: Article
Session ID: 1P1-H03
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a method for travelable-area detection in urban environments using a laser range finder (LRF). Non-travelable regions in urban environments can be roughly divided into changes in height and differences in surface material. Our method uses the LRF to obtain both the reflection intensity and the height of obstacles, and applies a method appropriate to each type of observation: edge points are detected from the height information by a second-order derivative, and surface materials are classified by machine learning using the reflection intensity. A local environment map of the robot is then constructed from the detection results. The effectiveness of the proposed method is demonstrated through experiments in outdoor environments.
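As an illustration of the second-order-derivative edge detection mentioned above (not the authors' exact implementation; the threshold and data are made up), a minimal sketch that flags scan points where the discrete second derivative of the height profile exceeds a threshold:

```python
import numpy as np

def detect_height_edges(height, threshold=0.05):
    """Return indices where the discrete second derivative of a height
    profile (one value per scan point) exceeds a threshold."""
    d2 = np.abs(np.diff(height, n=2))       # second-order difference
    return np.where(d2 > threshold)[0] + 1  # shift back toward original indices

# Hypothetical profile: flat road with a 10 cm curb starting at index 50.
profile = np.zeros(100)
profile[50:] = 0.10
print(detect_height_edges(profile))         # -> indices around the curb edge
```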
-
Mitsunori Kitamura, Akira Watanabe, Yoshiharu Amano, Takumi Hashizume
Article type: Article
Session ID: 1P1-H04
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a positioning method based on GNSS/IMU integration using a tightly coupled Kalman filter and the LEX signal. A tightly coupled Kalman filter uses pseudorange and Doppler frequency for its observation update, so even when a vehicle is in an environment with a narrow sky view, the filter can still update its states from the available observations. The LEX signal is a performance-enhancement signal for GPS and QZSS transmitted from the QZSS satellite, so improved positioning accuracy is expected when it is used. In this paper, the proposed method is evaluated in post-processing on the course of the Real World Robot Challenge.
-
Mitsunori KITAMURA, Tomohiro TAKESHITA, Masamitsu ONISHI, Yoshiharu AM ...
Article type: Article
Session ID: 1P1-H05
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The need for high-precision location information has been increasing, which makes correction of multipath errors necessary. Multipath caused by trees, in particular, is said to be difficult to correct. In this paper, we propose a new index called the foliage parameter. This index is obtained by measuring a tree with a 3D laser scanner and represents the density distribution of the tree in terms of the amount of leaves. By comparing the multipath error with the foliage parameter, we study the correlation between them and plan to use it to correct multipath errors caused by trees.
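A minimal sketch of the planned correlation study, using entirely hypothetical paired samples of the foliage parameter and the observed multipath error:

```python
import numpy as np

# Hypothetical paired samples: foliage parameter (leaf-density index for a
# sky direction) and the multipath error [m] observed in that direction.
foliage = np.array([0.05, 0.10, 0.22, 0.35, 0.41, 0.58, 0.70, 0.82])
multipath_error = np.array([0.3, 0.5, 0.9, 1.4, 1.6, 2.3, 2.9, 3.4])

r = np.corrcoef(foliage, multipath_error)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(foliage, multipath_error, 1)  # linear fit
print(f"r = {r:.3f}, error ~ {slope:.2f} * foliage + {intercept:.2f}")
```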
-
Takahiro KIMURA, Takafumi KATSUYAMA, Kentaro TAKEMURA, Jun TAKAMATSU, ...
Article type: Article
Session ID: 1P1-H06
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a method for annotating the types of feasible human activities over a wide area from data observed by a robot. To realize human-robot symbiosis, it is essential to combine semantic information with an environmental map. Unlike prior methods, the proposed method uses an RGB-D sensor to observe human activity directly and a range sensor to obtain environmental shape information. The K-means algorithm is used to categorize the observed human motions. We show the result of mapping the activities onto a 2D occupancy grid map and discuss the effectiveness of the proposed method.
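A sketch of the motion-categorization step as a generic K-means use; the feature set below is an illustrative assumption, not the authors' representation:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical motion features per observation:
# [mean torso height (m), mean speed (m/s), hand height relative to torso (m)]
features = np.array([
    [1.4, 0.0,  0.3],   # standing, hands raised (e.g. reaching a shelf)
    [1.4, 1.1, -0.2],   # walking
    [0.9, 0.0,  0.0],   # sitting
    [0.9, 0.0,  0.1],   # sitting, hands on a table
    [1.4, 1.3, -0.2],   # walking
    [1.4, 0.0,  0.4],   # reaching
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)           # activity category assigned to each observation
print(kmeans.cluster_centers_)  # prototype motion of each category
```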
-
Junji EGUCHI, Koichi OZAKI
Article type: Article
Session ID: 1P1-H07
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a study of an evaluation method for scan-matching-based localization. In outdoor environments such as Tsukuba Challenge 2012, high-accuracy localization has been achieved by matching laser scans against occupancy grid maps. However, mis-matching sometimes occurs and the robot loses its position. To prevent position errors caused by mis-matching, the authors evaluate the scan matching with two values: a static matching rate, which indicates the suitability of the environment (for example, the number of landmarks), and a dynamic matching rate, which indicates the likelihood of mis-matching in a crowd. In this paper, the evaluation method is examined in public areas of Tsukuba Challenge 2012 and its usefulness is shown.
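One plausible reading of a matching rate, shown as an illustrative sketch rather than the authors' definition: the fraction of scan points that land on occupied cells of the occupancy grid after applying the estimated pose:

```python
import numpy as np

def matching_rate(scan_xy, pose, grid, origin, resolution):
    """Fraction of scan points falling on occupied cells of an occupancy grid.

    scan_xy: (N, 2) points in the sensor frame; pose: (x, y, yaw);
    grid: 2D bool array (True = occupied); origin: world coords of grid[0, 0].
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    world = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    idx = np.floor((world - origin) / resolution).astype(int)
    inside = (idx[:, 0] >= 0) & (idx[:, 0] < grid.shape[1]) & \
             (idx[:, 1] >= 0) & (idx[:, 1] < grid.shape[0])
    hits = grid[idx[inside, 1], idx[inside, 0]]
    return hits.mean() if len(hits) else 0.0

# Hypothetical map with one wall at x = 3 m, and a scan observing that wall.
grid = np.zeros((100, 100), dtype=bool)
grid[:, 60] = True                              # wall column (3 m / 0.05 m cells)
scan = np.column_stack([np.full(50, 3.0), np.linspace(-2, 2, 50)])
print(matching_rate(scan, (0.0, 2.5, 0.0), grid, np.array([0.0, 0.0]), 0.05))
```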
-
Satoshi ASHIZAWA, Ryunosuke IWATA, Michio YAMASHITA, Tomoya OOWAKI, Ta ...
Article type: Article
Session ID: 1P1-H08
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In order to drive a mobile robot outdoors, the driving environment was first investigated. The proposed landmark-laying rules were then evaluated through analysis of their adaptation to a real environment. Simulation showed that, by placing landmarks in accordance with the laying rules, the robot is capable of following the intended trajectory.
-
Akihiro ICHIMURA, Ikuo MIZUUCHI
Article type: Article
Session ID: 1P1-H09
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We aim at a robot that searches for and memorizes the locations of domestic objects and can find them again. To reach objects such as those hidden in a drawer, the robot must manipulate the environment, for example by opening the drawer. We consider that the robot's location should be represented not in Cartesian coordinates but in terms of visual features, because the action needed to find a hidden object varies with the state of each environment. When the robot has found an object, it can later reach the memorized object's location through actions based on an integrated memory of movement and manipulation. This paper describes the method for this integrated memory of the robot's movement and manipulation, and the actions for reaching a hidden object based on that memory.
-
Yuki SETO, Akira FUJIWARA, Takeshi IKEDA, Motoji YAMAMOTO
Article type: Article
Session ID: 1P1-H10
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A 3D map is useful for the planning and traveling of mobile robots, and a laser range finder (LRF) is often used for mapping. However, when the laser emitted from the LRF hits a plane at too shallow an angle, the sensor cannot output distance information because too little of the laser light is reflected back. Consequently, some scan lines suffer data loss, which creates "data loss areas" in the 3D map. If such an area lies on the floor, it may disturb the mid-range navigation of a mobile robot. This paper proposes a 3D mapping method that uses an LRF together with an RGB-D sensor. The method estimates whether a data loss area corresponds to floor by using the RGB-D sensor's color information, and the range data of the estimated areas are interpolated as straight lines. The result is a 3D map without data loss areas on the floor.
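An illustrative sketch of the straight-line interpolation step, assuming (as in the abstract) that the gap has already been classified as floor from the RGB-D color information; the data and gap labels are hypothetical:

```python
import numpy as np

def fill_floor_gaps(ranges, is_floor_gap):
    """Linearly interpolate missing LRF ranges (NaN) across gaps that an
    external check (e.g. RGB-D color classification) has labeled as floor."""
    filled = ranges.copy()
    idx = np.arange(len(ranges))
    missing = np.isnan(ranges) & is_floor_gap
    valid = ~np.isnan(ranges)
    filled[missing] = np.interp(idx[missing], idx[valid], ranges[valid])
    return filled

# Hypothetical scan line: ranges in meters with a data-loss gap over the floor.
scan = np.array([2.0, 2.1, np.nan, np.nan, np.nan, 2.5, 2.6])
floor_gap = np.array([False, False, True, True, True, False, False])
print(fill_floor_gaps(scan, floor_gap))   # gap filled along a straight line
```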
-
Shogo ARAKAWA, Eijiro TAKEUCHI, Kazunori OHNO, Satoshi TADOKORO
Article type: Article
Session ID: 1P1-I01
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a GPS-based localization method for ground vehicles in urban environments. To reject GPS multipath errors, the method determines the visibility of each GPS satellite and removes the signals from invisible satellites. Invisible satellites are identified using aerial-survey 3D maps, and the GPS antenna position on the 3D maps is estimated with a particle filter. To apply this method to a ground vehicle, we developed a measurement vehicle.
-
Kazuki MATSUMOTO, Yuichi TADUKE, Chisato KANAMORI
Article type: Article
Session ID: 1P1-I02
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a navigation method using the Indoor MEssaging System (IMES) for service robots that move indoors. IMES is expected to be a promising indoor positioning system. It was developed for applications on personal digital assistants such as cellular phones, smartphones, and tablets, and consists of transmitters that broadcast position information as GPS-compatible signals, together with GPS receivers. The contents of the message are latitude, longitude, altitude, and floor number, and existing GPS receivers can receive IMES signals after a firmware change. The purpose of this research is to develop the component technologies needed to apply IMES to mobile robots. This report describes experimental results on the basic signal-propagation characteristics of IMES (directivity and received intensity), the development of the IMES Guided mobile Robot (IGR), and indoor navigation experiments.
-
Nobuya OKADA, Satoshi SUZUKI, Takahiro ISHII, Yohei FUJISAWA, Kojiro I ...
Article type: Article
Session ID: 1P1-I03
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper consists of two parts. The first part is an environmental map construction algorithm using an RGB-D camera: a 3D indoor environmental map is generated by feature-based alignment, with RANdom SAmple Consensus (RANSAC) used to estimate the alignment between point clouds. An experiment with the RGB-D camera is then performed, and the computational cost of the map construction algorithm is evaluated. The second part concerns the selection of a shape feature: two shape descriptors are evaluated in terms of computational cost and error, and one of them is selected for recognizing landmarks.
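A compact, generic sketch of RANSAC-based rigid alignment between two point clouds, assuming putative feature correspondences are already given; this is not the authors' exact pipeline, and the data are synthetic:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch / SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_align(src, dst, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
    """RANSAC over putative correspondences src[i] <-> dst[i]."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(src), 3, replace=False)
        R, t = rigid_transform(src[sample], dst[sample])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_transform(src[best_inliers], dst[best_inliers])

# Hypothetical correspondences: a rotated/translated cloud with a few outliers.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, (50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 0.1])
dst[:5] += rng.uniform(1, 2, (5, 3))            # corrupted (outlier) matches
R, t = ransac_align(src, dst)
print(np.round(t, 3))                            # translation recovered despite outliers
```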
-
Shuichi MAKI, Kohsei MATSUMOTO, Ryoso MASAKI
Article type: Article
Session ID: 1P1-I04
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We describe a hardware component implementation of the position and orientation estimation function that is indispensable for autonomous mobile robots, and we evaluate the accuracy of its position estimates. Designing the control system of an autonomous mobile robot requires evaluating the accuracy of the position and orientation estimates. This paper describes an evaluation method for the accuracy of position estimation based on trilateration, and shows that the error from the reference point is less than ±20 mm at rest.
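A minimal sketch of 2D trilateration by linear least squares, with hypothetical beacon positions and ranges; the paper's hardware implementation is not reproduced here:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Estimate a 2D position from >= 3 beacon positions and range measurements.

    Subtracting the first circle equation from the others yields a linear
    system A p = b, which is solved by least squares.
    """
    b0, d0 = beacons[0], distances[0]
    A = 2 * (beacons[1:] - b0)
    b = (d0**2 - distances[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical setup: three beacons and noisy ranges to a point at (1.2, 0.8).
beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.2, 0.8])
dists = np.linalg.norm(beacons - true_pos, axis=1)
dists += np.random.default_rng(0).normal(0, 0.01, 3)   # measurement noise
print(np.round(trilaterate(beacons, dists), 3))         # ~ [1.2, 0.8]
```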
-
Ayumu YAMAKAWA, Akira AOKI, Satoru SASAKI, Susumu TARAO
Article type: Article
Session ID: 1P1-I05
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Self-localization is one of the most important processes for autonomous mobile robots, and map-based self-localization is a promising approach to achieving it precisely. This approach requires the robot to recognize the surrounding geometry accurately and to obtain the correct position and attitude from that geometric data. To realize this appropriately, we propose a self-localization method that combines a 2D map and a 3D map while taking computational cost into account. Our method mainly consists of our own 3D range-finding system, the Iterative Closest Point algorithm, and the Monte Carlo Localization algorithm. This paper presents the process of creating the 3D map from a previously created 2D map, the process of self-localization using both the 2D and 3D maps, and preliminary experiments applying the whole series of processes.
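A bare-bones 2D ICP loop (nearest-neighbor association plus a closed-form rigid update), included only to illustrate the Iterative Closest Point algorithm named above; the authors' 3D range-finding system and the MCL integration are not reproduced:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=30):
    """Align 2D point set src to dst; returns rotation R and translation t."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, nn = tree.query(moved)                 # nearest-neighbor matches
        matched = dst[nn]
        cs, cm = moved.mean(0), matched.mean(0)
        H = (moved - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ t + (cm - dR @ cs)    # compose the incremental update
    return R, t

# Hypothetical scans: an L-shaped wall seen from two slightly different poses.
wall = np.vstack([np.column_stack([np.linspace(0, 2, 40), np.zeros(40)]),
                  np.column_stack([np.zeros(40), np.linspace(0, 1.5, 40)])])
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
scan = wall @ R_true.T + np.array([0.05, -0.03])
R, t = icp_2d(wall, scan)
print(np.round(R, 3), np.round(t, 3))             # should approach R_true and the offset
```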
-
Kohe Nagamatsu, Xiaolin ZHANG
Article type: Article
Session ID: 1P1-I06
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
When a continuously moving autonomous mobile robot cannot observe multiple landmarks at the same time, triangulation is inappropriate for self-localization, because by the time the robot has measured the distance and direction to one landmark, its positional relation to the other landmarks has already changed. This study proposes a real-time self-localization method for such a robot using a stereo pan-tilt camera and verifies the efficacy of the method.
-
Ankit A. Ravankar, Yohei Hoshino, Takanori Emaru, Yukinori Kobayashi
Article type: Article
Session ID: 1P1-I07
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a new algorithm based on the Hough Transform for building 2D maps of indoor environments. The proposed method works in two stages: the first stage applies clustering to the laser range sensor data, and the second applies the Hough Transform to the clustered data. We show that the proposed method works efficiently in noisy environments and generates accurate maps.
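An illustrative sketch of the second stage, assuming the points of one cluster are given: a simple (rho, theta) Hough accumulator votes for the dominant line through the 2D laser points (the resolutions and data are arbitrary choices, not the authors'):

```python
import numpy as np

def hough_dominant_line(points, rho_res=0.02, theta_res=np.deg2rad(1.0)):
    """Return (rho, theta) of the strongest line x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.arange(0, np.pi, theta_res)
    rho_max = np.linalg.norm(points, axis=1).max()
    rhos = np.arange(-rho_max, rho_max + rho_res, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)    # rho for every theta
        ri = np.round((r + rho_max) / rho_res).astype(int)
        acc[ri, np.arange(len(thetas))] += 1           # cast one vote per theta
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[i], thetas[j]

# Hypothetical cluster: noisy points along the wall y = 0.5.
rng = np.random.default_rng(0)
pts = np.column_stack([np.linspace(0, 3, 120),
                       0.5 + rng.normal(0, 0.01, 120)])
rho, theta = hough_dominant_line(pts)
print(round(rho, 2), round(np.rad2deg(theta), 1))      # ~ 0.5, 90 deg
```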
-
Yousuke FUJIUCHI, Hiroyuki KOBAYASHI
Article type: Article
Session ID: 1P1-I08
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Self-localization is one of the most important problems for autonomous mobile robots. The authors propose a novel self-localization method for mobile robots that uses two-dimensional codes. In this method, the robot first takes a picture of a two-dimensional code and then estimates its global location from the skew of the code's shape and the information embedded in the code. The proposed method has the advantage of "self-containedness": it does not require any internal or external a priori databases. The authors implement the proposed method on a mobile robot and perform preliminary experiments to confirm its validity.
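A hedged sketch of how pose relative to a planar 2D code could be recovered with off-the-shelf tools: decode the code with OpenCV and feed its four corners plus an assumed physical size to solvePnP. The code size, the camera intrinsics, and the idea of reading the code's map position from its payload are illustrative assumptions, not the authors' implementation:

```python
import cv2
import numpy as np

CODE_SIZE = 0.10  # assumed physical edge length of the printed code [m]
# Assumed pinhole camera intrinsics; replace with real calibration values.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0,   0.0,   1.0]])

def code_pose(image):
    """Decode a QR code and estimate its pose relative to the camera."""
    data, corners, _ = cv2.QRCodeDetector().detectAndDecode(image)
    if corners is None or not data:
        return None
    img_pts = corners.reshape(-1, 2).astype(np.float32)
    # One consistent assignment of the code corners in its own plane (z = 0).
    s = CODE_SIZE / 2.0
    obj_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                        [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    if not ok:
        return None
    return data, rvec, tvec   # payload (e.g. the code's map position) + relative pose

frame = cv2.imread("frame.png")        # hypothetical input image
if frame is not None:
    print(code_pose(frame))
```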
-
Takato SAITO, Yuya NAGATA, Kentaro KIUCHI, Masanobu SAITO, Takayuki YO ...
Article type: Article
Session ID: 1P1-J01
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose an autonomous navigation system that does not need detailed prior information. To achieve robust traversal, our system adopts the following technologies. Localization is performed with GPS and dead reckoning, integrated by a Divided Difference Filter (DDF). The system also estimates the traversability of road regions using a laser range finder (LRF), and the local map takes pedestrians into account through moving-obstacle detection. The effectiveness of the proposed system is demonstrated through experiments.
-
Naoaki KONDO, Shuji OISHI, Yumi IWASHITA, Ryo KURAZUME, Tsutomu HASEGA ...
Article type: Article
Session ID: 1P1-J02
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a novel object recognition technique using a laser scanner. A laser scanner precisely measures the three-dimensional shape of a target, so object recognition can be conducted by comparing object shapes. However, it is difficult to distinguish objects with similar shapes, such as a remote control and a cellphone, from three-dimensional data alone. On the other hand, a laser scanner can obtain the intensity of the laser pulse as a by-product of the range data, and the reflectance image, a collection of this intensity data, carries appearance information about the target object. We developed a novel object recognition technique that uses the range and reflectance images simultaneously, and realized a road traffic census by applying the proposed technique.
-
Kimitoshi YAMAZAKI, Kiyohiro Sogen, Takashi Yamamoto, Masayuki INABA
Article type: Article
Session ID: 1P1-J03
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a modeling method that generates 3D shape models with articulated links. Furniture such as refrigerators, shelves, and cabinets is targeted in this research. The method is based on tracking 3D surfaces extracted from depth images captured by a range camera. Both the exterior and interior sides of an articulated part can be modeled as a 3D mesh, and the position of the articulated link can also be estimated.
-
Shinta NOZAKI, Yuki UCHIDA, Kazunori UMEDA
Article type: Article
Session ID: 1P1-J04
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a method for expanding the measurement range of a range image sensor that uses multi-spot lights. The sensor can obtain range and color images at 200 Hz, and the measurement range of the original sensor is 850 to 2500 mm. Expanding the measurement range requires solving the correspondence problem. Since the area of each spot image changes with distance, we use the spot area to solve the correspondence problem. The effectiveness of the proposed method is verified by experiments.
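A sketch of how spot area can disambiguate distance, under a simple inverse-square area model with a made-up calibration constant; the actual sensor model is not given in the abstract:

```python
import numpy as np

# Assumed model: projected spot area [px] ~ k / distance^2, calibrated offline.
K_AREA = 4.0e8   # hypothetical calibration constant [px * mm^2]

def expected_area(distance_mm):
    return K_AREA / distance_mm**2

def resolve_distance(candidates_mm, measured_area_px):
    """Pick the candidate distance whose predicted spot area best matches
    the measured blob area (the correspondence disambiguation step)."""
    errors = [abs(expected_area(d) - measured_area_px) for d in candidates_mm]
    return candidates_mm[int(np.argmin(errors))]

# Ambiguous correspondence: the same spot index could mean 1200 mm or 3100 mm.
candidates = [1200.0, 3100.0]
measured = 45.0      # px; closer to the area expected at about 3 m
print(resolve_distance(candidates, measured))   # -> 3100.0
```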
-
Yoshiaki OTA, Takuma KASAKI, Gentiane VENTURE
Article type: Article
Session ID: 1P1-J05
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In the field of biology, behavior analysis is a valuable tool for observing changes and development of the brain and neural mechanisms non-invasively. However, behavior analysis involves enormous data processing because of the lack of automatic tools for measuring behavioral data. The purpose of this study is therefore to develop a new motion capture system that improves the processing of the vast amount of tracking data on chick behavior. Specifically, we propose an automatic motion capture system that obtains the position and direction of chicks, before and after fluttering, using video from a single web camera.
-
Shota MATSUZAKI, Hiromasa OKU, Masatoshi ISHIKAWA
Article type: Article
Session ID: 1P1-J06
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A number of studies have addressed methods for computing three-dimensional information from images taken with different focal settings. However, due to bottlenecks in the speed of both the optical system and the algorithms, these methods are rarely applied to dynamic scenes. This paper describes a new, computationally inexpensive method for estimating the three-dimensional motion of feature points. The method computes optical flow between two sets of images in which the focal plane of each image in a set is different. Its validity in a real-world case is demonstrated on an image sequence captured at 240 fps using a high-speed liquid lens, called the Dynamorph Lens, which allows each image to be captured at a different focus.
-
Kohei FUJIWARA, Ryohei Chino, Teppei OTA, Kiichiro ISHIKAWA, Yoshiharu ...
Article type: Article
Session ID: 1P1-J07
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We developed a portable 3D measurement device that can be used where cars cannot go and GPS is unavailable. The device consists of a camera, a laser scanner, and a PC for data acquisition. Using this device, we propose a method that combines camera data and scan data: the method first estimates the device's position with Structure from Motion (SFM) and projects the laser scanner data into three-dimensional space, and then corrects the unevenness of the point cloud with Principal Component Analysis (PCA). An evaluation test verified that rough indoor and outdoor shapes can be reconstructed with the 3D measurement device.
-
Takehiro KAWASHITA, Masatoshi SHIBATA, Toru UBUKATA, Makoto ARIE, Kenj ...
Article type: Article
Session ID: 1P1-J08
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, a tracking system based on a particle filter is presented. Color and distance information are used; the distance information is obtained by "Subtraction Stereo", which restricts stereo matching to foreground regions extracted by background subtraction. The effectiveness of the proposed system is verified by experiments.
-
Naruyuki HISATSUKA, Ippei Samejima, Hiroshi TAKEMURA, Makiko KOUCHI, S ...
Article type: Article
Session ID: 1P1-K01
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
It is important for robots to acquire information about the objects a person is grasping when they assist or communicate with that person. The Kinect can extract an image of a person together with his or her grasped objects, so separating out the image of the grasped objects is the problem to be solved. Methods using image feature extraction and machine learning have been proposed for this kind of extraction, but they require a huge amount of training image data. This study therefore proposes a Kinect-based method that uses a body dimension database. To obtain the wrist position, multiple regression analysis is used to estimate arm lengths, which cannot be measured adequately with the Kinect, from body measurements that can be measured adequately with the Kinect. The information about a person's grasped objects is then acquired by extracting the image of the objects held in the hand.
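A sketch of the regression step, with entirely hypothetical body-dimension data standing in for a body dimension database: arm length is regressed on measurements that a depth sensor can capture reliably:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training rows from a body-dimension database:
# [stature (m), shoulder width (m), torso length (m)] -> forearm length (m)
X = np.array([
    [1.60, 0.38, 0.52],
    [1.65, 0.40, 0.54],
    [1.70, 0.41, 0.55],
    [1.75, 0.43, 0.57],
    [1.80, 0.44, 0.58],
    [1.85, 0.46, 0.60],
])
y = np.array([0.235, 0.242, 0.249, 0.257, 0.263, 0.271])

model = LinearRegression().fit(X, y)     # multiple regression

# Measurements of a new person taken with the depth sensor.
person = np.array([[1.72, 0.42, 0.56]])
print(f"estimated forearm length: {model.predict(person)[0]:.3f} m")
```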
-
Kazuki SAKAMOTO, Alessandro MORO, Takaaki SATO, Toru KANEKO, Atsushi Y ...
Article type: Article
Session ID: 1P1-K02
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Sensing in aquatic environments is important for maintaining underwater structures and for studying aquatic life and resources. This paper proposes a method for measuring the 3D coordinates and shape of objects in water using a stereo fisheye camera and a projector. By projecting light onto an object from the projector, a lattice pattern is created on the surface of the object in the water. Corresponding grid points are detected in the images of the two fisheye cameras, and their 3D coordinates are calculated using the principle of triangulation. Experimental results show the effectiveness of the proposed method.
-
Yuya Yumoto, Ikuo Mizuuchi
Article type: Article
Session ID: 1P1-K03
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes an identification method for fruits to support cultivation management at the level of individual fruits. Identifying fruits makes it possible to gather information for each fruit, and gathering such information during the growth period enhances its value: quality can be guaranteed, customers can select a fruit according to their preferences, and producers can improve their cultivation management. We propose a method that identifies fruits based on recognition of the 3-dimensional branch structure, and describe how to obtain that branch structure from the point cloud of a tree.
-
Morihiko YOSHIDA, Atsushi WATANABE, Akihisa OHYA
Article type: Article
Session ID: 1P1-K04
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a new scanning method for 3D LIDAR (Light Detection and Ranging) that achieves high-speed measurement with a small blind area by using a continuously rotating 2D LIDAR. In 3D LIDAR systems based on a rotating 2D LIDAR, the wiring for power supply and signal transmission has been a significant problem. The authors propose a 3D LIDAR that uses a continuously rotating 2D LIDAR by supplying power through the rotating shaft and transmitting a modulated carrier signal over the power supply line. The detailed design of a prototype of the proposed 3D LIDAR and experimental results are also shown.
-
Hiroki YAMASHITA, Hiroshi KOBAYASHI
Article type: Article
Session ID: 1P1-K05
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have been developing lightweight, low-cost three-dimensional archiving equipment and its software for measuring the volume of steel scrap. The equipment consists of a laser range sensor, a pan-tilt stand, and a web camera; by controlling these three elements from a notebook PC, it can scan three-dimensional geometry and calculate volume from the scanned data. With this equipment, dangerous steel-scrap measurement work at steelworks can be carried out safely, simply, and at low cost. In addition, it becomes possible to save data that visualizes the measurement results.
-
Naoto Noguchi, Ken Nakanishi, Guenho Lee, Nak Young Chong
Article type: Article
Session ID: 1P1-K06
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we present the design and realization of a single-motor-driven 3D sensor/positioner system that allows 2D range sensors to measure the distance to objects, or cameras to take pictures of their surroundings, in all directions of 3D space. Specifically, based on a simple screw-link mechanism, the proposed positioner provides extended visibility along the axis of the screw as well as hemispherical scanning. The implementation details are explained, and the operation of the positioner is verified through experiments.
-
Shinnosuke IDETA, Masahiro TANAKA, Ryuji UEDA, Kimihiro NISHIO
Article type: Article
Session ID: 1P1-L01
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We proposed and fabricated circuits and a system for tracking a target based on biological vision and auditory systems. Using the circuit based on the auditory system, the proposed system can capture the target; using the circuit based on the vision system, it can track the moving object. The proposed circuits have a simple structure. Test circuits were fabricated with discrete metal-oxide-semiconductor (MOS) transistors on a breadboard. Measurements of the test circuits showed that the vision circuit can detect the movement of an object and that the auditory circuit can detect the position of sound sources, and measurements of the system built from these test circuits showed that the proposed system operates correctly.
-
Yuki OKADA, Jun MIURA, Igi Ardiyanto, Junji SATAKE
Article type: Article
Session ID: 1P1-L02
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a map generation method for recognizing an indoor scene using multiple laser range finders (LRFs), to prevent the robot from colliding with obstacles or falling. Many existing methods use only horizontally scanning LRFs for mapping, but such maps cannot capture small bumps or descending stairs. We therefore propose a method that generates a map using multiple 2D LRFs pointing in different directions, which provides enough information for safe robot navigation. One horizontally scanning LRF measures the distance to the surrounding walls, while two downward-looking LRFs detect bumps, using k-curvature-based corner detection in the LRF scan data, and descending stairs, based on the deviation from the predicted floor height. Detected bumps and stairs are added to the usual 2D map generated from the horizontally scanning LRF. We validated the proposed method both in a robotics environment simulator and in an actual indoor environment.
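A sketch of k-curvature-based corner detection on an ordered 2D scan (k, the threshold, and the data are illustrative choices): for each point, the angle between the vectors to the points k steps behind and ahead is computed, and angles far from 180 degrees are flagged as corners/bumps:

```python
import numpy as np

def k_curvature_corners(points, k=5, angle_thresh_deg=150.0):
    """Indices of scan points whose k-curvature angle is sharper than the
    threshold. points: (N, 2) array ordered along the scan."""
    corners = []
    for i in range(k, len(points) - k):
        v1 = points[i - k] - points[i]
        v2 = points[i + k] - points[i]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_thresh_deg:      # far from 180 deg => sharp bend
            corners.append(i)
    return corners

# Hypothetical downward-looking scan profile: flat floor with a small bump.
xs = np.linspace(0.0, 2.0, 100)
zs = np.zeros_like(xs)
zs[48:52] = 0.04                          # 4 cm bump
print(k_curvature_corners(np.column_stack([xs, zs])))   # indices on the bump
```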
-
Kenji KOIDE, Jun MIURA, Junji SATAKE
Article type: Article
Session ID: 1P1-L03
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes an attendant service robot built using RT Middleware. We use laser range finders (LRFs) for human detection and tracking: people's legs are first detected in the LRF data and then temporally integrated with a linear motion model to estimate and predict their positions. For each person, the predicted position is compared with the detected leg positions, and the closest detection is chosen as that person's legs and used to update the position. We set three target positions, at both sides and behind the predicted position of the target person; the robot normally uses one of the side positions and switches to the back position when necessary to avoid collisions, thereby realizing safe following behavior. We also developed a user interface consisting of a web-browser-based program and a server program running on the robot, with which the user can control the robot manually or switch control modes from a smartphone.
-
Takuro EGAWA, Ippei SAMEJIMA, Yuma NIHEI, Satoshi KAGAMI, Hiroshi MIZO ...
Article type: Article
Session ID: 1P1-L04
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
It is important for service robots to recognize human activity. A table is at the center of human life, and the objects on it are changed by people, so service robots need to find objects on the table and detect changes. This paper describes an observation system for tabletop objects. Our system includes a perceptual pipeline for RGB-D point clouds and planning of the location from which to observe a table. In addition, the locations and identities of objects are recorded in a database. The system is useful for any application that involves dealing with objects, including grasping, change detection, and object search. We demonstrate it on a robot equipped with a Microsoft Kinect RGB-D sensor and a Velodyne HDL-32E LIDAR sensor.
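A common first step in such a tabletop pipeline is segmenting the dominant plane; below is a generic RANSAC plane-fit sketch on a synthetic point cloud (the authors' actual pipeline is not detailed in the abstract):

```python
import numpy as np

def ransac_plane(points, iters=300, thresh=0.01, rng=np.random.default_rng(0)):
    """Fit the dominant plane n.p + d = 0 to a point cloud with RANSAC.
    Returns (normal n, offset d, boolean inlier mask)."""
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                         # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Synthetic scene: a table plane at z = 0.7 m with a small box on top.
rng = np.random.default_rng(1)
table = np.column_stack([rng.uniform(0, 1, 800), rng.uniform(0, 1, 800),
                         0.7 + rng.normal(0, 0.003, 800)])
box = np.column_stack([rng.uniform(0.4, 0.5, 200), rng.uniform(0.4, 0.5, 200),
                       rng.uniform(0.7, 0.8, 200)])
cloud = np.vstack([table, box])
n, d, inliers = ransac_plane(cloud)
objects = cloud[~inliers]                    # points off the table plane
print(np.round(n, 2), round(float(d), 2), objects.shape)
```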
-
Tomu AKETO, Yasuhito NAKATANI, Shohei SAKAI, Xiaolin ZHANG
Article type: Article
Session ID: 1P1-L05
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Stereo cameras are standard for robot vision because they give robots both visual information and the ability to calculate the distance to objects; however, conventional systems cannot obtain clear images when the robot or the target object moves, because of blurring. Humans, on the other hand, obtain stabilized visual information even though the head and body move dynamically in daily life. Our group has focused on this stabilized retinal information processing and has been developing a binocular vision control system. In previous studies, our vision system realized the fundamental eye movements; however, the region these movements can cover is narrow because the system relies on the range of eye movement alone. In this paper, we propose a control model with neck torsion that gives a wide region of binocular movement.
-
Kazuaki YAMADA
Article type: Article
Session ID: 1P1-M01
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a new predictive control method using recurrent neural networks (RNNs) and fuzzy rules, and applies it to a fly-ball catching problem. The robot predicts the trajectory of the ball with the RNN and catches the ball using fuzzy rules. The paper examines how prediction performance differs with the combination of the RNN's network configuration and node function.
-
Ali ALALWAN, Hisham ALSUBEHEEN, Daisuke URAGAMI, Akinori SEKIGUCHI, Yo ...
Article type: Article
Session ID: 1P1-M02
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Symmetric cognitive biases are human inferences that induce "q given p" from "p given q". The LS model represents these human tendencies in human-like decision making and has been shown to perform well on the n-armed bandit problem. In a previous study, we showed by simulation that it is also effective for robot motion learning. In this study, we verify its effectiveness using a real giant-swing robot that we built.
-
Takuya NAKAYAMA, Akihiko YAMAGUCHI, Jun TAKAMATSU, Tsukasa OGASAWARA
Article type: Article
Session ID: 1P1-M03
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Shortening the time required to learn motions improves a robot's applicability. Our aim is to improve learning performance by using tactile information. In this paper, we evaluate the effect of tactile information in reinforcement learning by comparing learning with and without tactile information on a seating task.
-
Manabu GOUKO, Yuichi KOBAYASHI, Chyon Hae KIM
Article type: Article
Session ID: 1P1-M04
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, the active perception model previously proposed by the authors is applied to feature extraction from a single object. In simulation, a mobile robot acquired feature classes corresponding to the shapes of the parts of the object.
-
Yukifumi NARUSE, Michiko WATANABE, Ikuo SUZUKI, Kenji IWADATE
Article type: Article
Session ID: 1P1-M05
Published: May 22, 2013
Released on J-STAGE: June 19, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Many studies have addressed autonomous locomotion robots based on physics modeling. However, most of them concern walking or swimming locomotion, and there are few studies on the whole walking sequence of a large creature. In this study, we focus on the walking sequence of a dinosaur model (T-Rex). We applied an ANN, a GA, and CPGs to acquire walking behavior based on physics modeling, and verified through numerical experiments that a suitable behavior can be acquired.