ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Vol. 38, No. 9
Displaying 1-34 of 34 articles from this issue
  • Article type: Cover
    Pages Cover1-
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (10K)
  • Article type: Index
    Pages Toc1-
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (90K)
  • Article type: Bibliography
    Pages Misc1-
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (121K)
  • Sota SUZUKI, Haruto SUZUKI, Mie SATO
    Article type: Article
    Session ID: ME2014-28
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this study, we focused on grabbing, one of the basic movements in the interaction between a hand and a virtual object in AR. Using a depth camera rather than markers or a glove, we obtained the three-dimensional coordinates of the fingertips, which are important for grabbing a virtual object naturally. Furthermore, based on the three-dimensional coordinates of the hand, we enabled hidden surface removal, which expresses the positional relationship between the hand and the virtual object. As a result, our AR system realized grabbing a virtual object with a sense of reality. In addition, we conducted an impression evaluation experiment to evaluate our AR system.
    Download PDF (8406K)
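The grab interaction summarized in the abstract above can be sketched as a simple test on fingertip coordinates obtained from a depth camera. The pinch threshold, function name, and sphere-shaped object are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def is_grabbing(thumb_tip, index_tip, obj_center, obj_radius,
                pinch_threshold=0.03):
    """Heuristic grab test: the thumb and index fingertips must pinch
    (come closer than pinch_threshold metres) near the virtual object.
    All coordinates are 3-D points in camera space (metres)."""
    thumb_tip = np.asarray(thumb_tip, dtype=float)
    index_tip = np.asarray(index_tip, dtype=float)
    obj_center = np.asarray(obj_center, dtype=float)
    pinch = np.linalg.norm(thumb_tip - index_tip) < pinch_threshold
    midpoint = (thumb_tip + index_tip) / 2.0
    near_object = np.linalg.norm(midpoint - obj_center) < obj_radius
    return bool(pinch and near_object)
```

A full system would additionally use the hand's depth values for the hidden-surface removal the abstract mentions; this sketch covers only the grab decision itself.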
  • Atsushi TAKAHASHI, Mie SATO
    Article type: Article
    Session ID: ME2014-29
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, many studies have been carried out to construct pseudo-3D displays using 2D displays. In addition, the Perceptual User Interface (PUI), whose inputs are the user's voice and/or gestures, has attracted attention as a new user interface succeeding the Graphical User Interface (GUI) and the Character User Interface (CUI). In this study, we construct a 3D display using motion parallax, and with this 3D display we perform interactive image projection that provides users with more intuitive operations. The usefulness of our 3D display with a PUI is examined by applying it to a game.
    Download PDF (644K)
  • Hayato IIDA, Mie SATO
    Article type: Article
    Session ID: ME2014-30
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Immersive image projection, which gives a viewer an immersive feeling by projecting images around them, has been studied. In a preceding study, a surrounding image projection environment was built with two projectors fitted with fisheye lenses. However, that study restricted the projection environment and used an approximate method to correct the distortion from the fisheye lenses. In this paper, we relax the restrictions on the projection environment and propose a correction method based on the properties of the fisheye lens.
    Download PDF (10534K)
  • Misae YAGI, Mie SATO
    Article type: Article
    Session ID: ME2014-31
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, studies on displaying 3D images that include depth have been carried out. One of them concerns the parallax barrier method. It provides an autostereoscopic display, but the viewable position is limited. In addition, stereoscopic displays do not consider individual differences in the distance between the eyes, which may cause visual fatigue by giving the observer excessive three-dimensionality. Therefore, we detect the position of the observer using a depth sensor and display 3D images matched to that position. In addition, we propose an autostereoscopic display that adds an interaction to change the degree of three-dimensionality.
    Download PDF (7041K)
  • Yuichiro OSATO, Takuya TAKANOHASHI, Naoki HASHIMOTO, Mie SATO
    Article type: Article
    Session ID: ME2014-32
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In recent years, object recognition from camera images has been used for various purposes. Conventional object recognition based on intensity information has the problem that the recognition rate is lowered by illumination variation. This study conducts object recognition based on depth information. We apply a template matching method using a depth sensor: the depth values of a 3D object serving as a template are searched for in the depth values of a target area. The usefulness of object recognition using depth information is examined by comparison with conventional object recognition.
    Download PDF (4189K)
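The template search over depth values described above can be sketched as a sliding-window sum-of-squared-differences match, with invalid (zero) depth readings masked out. This is a generic depth-based matcher under assumed conventions, not the paper's exact method:

```python
import numpy as np

def depth_template_match(depth_map, template, invalid=0.0):
    """Slide a depth template over a depth map and return the (row, col)
    of the best match under mean squared difference.  Pixels where
    either map holds the `invalid` value (no depth reading) are ignored."""
    H, W = depth_map.shape
    h, w = template.shape
    best, best_pos = np.inf, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = depth_map[r:r + h, c:c + w]
            mask = (patch != invalid) & (template != invalid)
            if not mask.any():
                continue
            score = np.mean((patch[mask] - template[mask]) ** 2)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Unlike intensity-based matching, this score is unaffected by illumination changes, which is the advantage the abstract highlights.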
  • Takumi SOTOME, Michimi INOUE, Mie SATO, Miyoshi AYAMA, Naoki HASHIMOTO
    Article type: Article
    Session ID: ME2014-33
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this study, we examined relationships between the number of gradation levels and impressions based on impression assessments. Results showed that "like," "smooth" and "rough" were the assessment words with a strong influence on the impression assessments. Moreover, we found that some impressions improved with an increase in the number of gradation levels in areas where the gradation changed gradually.
    Download PDF (4254K)
  • Daisuke KOBAYASHI, Naoki HASHIMOTO
    Article type: Article
    Session ID: ME2014-34
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, spatial augmented reality (SAR) has been used as a method of editing the appearance of real objects with a projector. General SAR approaches often use stationary objects measured beforehand, and pose estimation is required for moving objects. However, attaching motion sensors introduces artifacts into the objects' appearance, and embedding sensors restricts the target objects. Moreover, pose estimation with an RGB camera does not work because of the projected light. Therefore, in this research, we propose a SAR system for dynamic objects that uses a previously measured 3D shape together with dynamic point cloud data from a depth sensor. This method achieves real-time, high-precision position tracking and provides SAR effects even on moving objects.
    Download PDF (13887K)
  • Akane TASHIRO, Hisanori SAITO, Satoshi OGAWA, Naoki HASHIMOTO
    Article type: Article
    Session ID: ME2014-35
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, in virtual reality and projection mapping, images are often projected on arbitrary surfaces or complexly shaped objects. However, projected images are blurred when projected on non-flat objects at a slant. Moreover, projection targets must be static because a projector's focal length is generally fixed during projection. Therefore, in this research, we propose a method of deblurring images projected on dynamic objects. We first build an out-of-focus blur model as a function of projection distance, and then use it for dynamic deblurring on moving objects with a depth sensor.
    Download PDF (12498K)
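The distance-dependent blur model mentioned above can be sketched as a per-pixel blur width driven by each pixel's offset from the projector's focal plane. The linear form and the slope k are assumed simplifications; the paper's calibrated model may differ:

```python
import numpy as np

def blur_sigma_map(depth_map, focus_distance, k=2.0):
    """Per-pixel out-of-focus blur width: grows linearly with each
    pixel's offset from the projector's focal plane.  The slope k would
    come from a calibration like the one the abstract describes; the
    linear relation itself is an assumption of this sketch."""
    return k * np.abs(np.asarray(depth_map, dtype=float) - focus_distance)
```

Given such a map from a live depth sensor, each region of the projected image could then be pre-sharpened in proportion to its predicted blur, which is the dynamic-deblurring step the abstract outlines.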
  • Yusuke ORIMO, Yuji TAMAKUNI, Daisuke TAKAHASHI, Noriyoshi OKAMOTO
    Article type: Article
    Session ID: ME2014-36
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, a method of finger-spelling recognition in sign language using Kinect is proposed. Previous research on finger-spelling recognition had difficulty correctly judging occlusion and accurately extracting depth values. By applying the proposed method on the inexpensive and compact Kinect, a small-scale finger-spelling recognition system can be expected.
    Download PDF (4250K)
  • Takafumi WATANUKI, Masahide KANEKO
    Article type: Article
    Session ID: ME2014-37
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a hand-joining-potential-based control method for an autonomous mobile robot that accompanies a user. The hand joining potential is designed by adapting the artificial potential method to accompanying movement. The robot moves side by side with the user in normal situations and changes to a cascade position to avoid obstacles. Attractive potentials are set on both the left and right sides of the accompanying person, and their locations are changed according to the person's position and the surrounding conditions. Using the proposed method, the mobile robot can autonomously switch between parallel and cascade positions with the user, without resetting parameters, in complex environments. Simulation results are shown to prove the efficiency of the proposed method, in which the robot naturally avoids collisions with oncoming or passing persons.
    Download PDF (866K)
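The artificial potential method underlying the abstract above combines an attractive well at the goal with repulsive terms near obstacles, and the robot descends the resulting field. The gains, cutoff distance, and step size below are illustrative, and this is the generic potential method rather than the authors' hand joining potential:

```python
import numpy as np

def potential_step(robot, goal, obstacles, k_att=1.0, k_rep=0.5,
                   d0=1.0, step=0.1):
    """One gradient-descent step on an artificial potential field:
    a quadratic attractive well at `goal` plus repulsive forces from
    each obstacle that is active within distance d0."""
    robot = np.asarray(robot, dtype=float)
    goal = np.asarray(goal, dtype=float)
    force = k_att * (goal - robot)              # attractive force
    for obs in obstacles:
        diff = robot - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                       # repulsion only when close
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return robot + step * force
```

The paper's contribution is to place the attractive wells beside the accompanied person and move them with the surroundings; that relocation would replace the fixed `goal` here.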
  • Tsubasa KUROSAKA, Masahide KANEKO
    Article type: Article
    Session ID: ME2014-38
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a speed control method to move an autonomous mobile robot safely in environments where blind areas exist. Sensors such as an LRF or a camera cannot detect areas behind walls, pillars, or objects. When a person suddenly appears from one of these blind areas, the mobile robot is very likely to collide with him or her if its speed reduction performance is limited. In this paper, a safe robot speed is computed using the distance information between the robot and the blind areas as well as obstacles. Meanwhile, the optimum path is generated to minimize the moving-time cost, considering the additional time introduced by the speed control process. The effectiveness of the proposed method is confirmed both by simulation and by an experiment using a real robot.
    Download PDF (1147K)
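One standard way to bound speed by the distance to a hazard, consistent with the abstract above, is the braking-distance inequality v ≤ √(2ad): the robot must be able to stop within the distance to the nearest blind area. This generic rule is assumed here; the paper's actual control law may be more elaborate:

```python
import math

def safe_speed(distance_to_blind, max_decel, v_cruise):
    """Cap the robot's speed so it can come to a full stop within the
    distance to the nearest blind area or obstacle (v <= sqrt(2*a*d)).
    distance in metres, deceleration in m/s^2, speeds in m/s."""
    v_stop = math.sqrt(2.0 * max_decel * max(distance_to_blind, 0.0))
    return min(v_cruise, v_stop)
```

Because this cap lengthens travel time near blind corners, a path planner can trade a slightly longer route against the slowdown, which is the time-cost consideration the abstract describes.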
  • Chihiro IZAKI, Junichi AKITA
    Article type: Article
    Session ID: ME2014-39
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The pixels in conventional image systems are placed at lattice positions, and this causes perceived jaggies at the edges of slanted lines. The authors have proposed and discussed the idea of arranging the active area within each pixel pseudorandomly in order to decrease the jaggy effect. In this paper, we describe and discuss the pixel structure and parameters of the pseudorandom pixel placement in terms of spatial frequency.
    Download PDF (599K)
  • Yoshiaki SHIMAMURA, Teruaki HIRANO
    Article type: Article
    Session ID: ME2014-40
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A rolling representation method for faces using Kinect is presented. Range information is obtained using Kinect, and the areas containing people are extracted. To make the vertical intervals of these areas distinct, a linear transformation is applied to their depth map. In order to represent the rolling state of the face in more detail, a three-dimensional expression based on the depth map is proposed.
    Download PDF (7770K)
  • Hiromi KATO, Teruaki HIRANO
    Article type: Article
    Session ID: ME2014-41
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A system for showing a cooking recipe superimposed on the background using AR is presented. Various systems applying AR have been developed. When following a cooking recipe, the cooking work must be stopped to check the recipe in a book or on a smartphone. In order to cook smoothly, this AR-based system is proposed.
    Download PDF (4060K)
  • Miwa KATAYAMA
    Article type: Article
    Session ID: ME2014-42
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Multi-camera systems can show us new image effects. We set up cameras around a target and switch between the camera images in order, live, right after a performance, for specialists' commentary. Multi-camera images can also be merged to generate other types of data, such as 3D models. Some of these images have been inserted into drama programs and live sports programs. This paper summarizes an outline of image processing with multi-camera systems.
    Download PDF (154K)
  • Hiroki Meguro, Tadahiro Fujimoto
    Article type: Article
    Session ID: ME2014-43
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Background subtraction is one of the methods commonly used to extract a foreground object from a video image. However, most traditional background subtraction methods assume that the background does not change, or changes only slightly, over time. In this paper, we propose a foreground extraction method that is robust to background changes by using multiple cameras. In our method, multiple cameras capture the changes of the background in order to extract the foreground object. The foreground object is distinguished from the background by using epipolar geometry to find corresponding pixels, one in each camera placed at a different position, that capture the same background part.
    Download PDF (10364K)
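The epipolar consistency check at the heart of the abstract above can be sketched as follows: given the fundamental matrix F between two cameras, a pixel x in one camera induces the line F·x in the other, and a candidate correspondence x' is plausible only if it lies near that line. The rectified-stereo F used in the test is an illustrative assumption:

```python
import numpy as np

def epipolar_distance(F, x, x_prime):
    """Distance (in pixels) from point x' in camera 2 to the epipolar
    line F @ x induced by point x in camera 1.  Points are (u, v)
    pixel coordinates; a small residual means the two pixels may be
    looking at the same background point."""
    x_h = np.array([x[0], x[1], 1.0])         # homogeneous coordinates
    xp_h = np.array([x_prime[0], x_prime[1], 1.0])
    line = F @ x_h                            # epipolar line (a, b, c)
    return abs(xp_h @ line) / np.hypot(line[0], line[1])
```

Pixels whose best cross-camera match violates this constraint cannot belong to the static background, which is one way the foreground could be separated even while the background itself changes.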
  • Min GAO, Daisuke TAKAHASHI, Noriyoshi OKAMOTO
    Article type: Article
    Session ID: ME2014-44
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, people detection and tracking methods using a camera that captures both depth and color features are discussed. Recently, monitoring functions have been developed for crime prevention, and they can also detect specific persons. However, overlap among several people is a problem. In this research, we detect persons using depth information and, furthermore, propose a more accurate method to detect a target person by also using color information.
    Download PDF (4771K)
  • Ryosuke NAKAMURA, Izumi WATANABE, Toshiyuki GOTOH, Naoyoshi TAMURA, Hi ...
    Article type: Article
    Session ID: ME2014-45
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In an orchestra performance, performers interactively exchange information such as tempo, volume, and pitch under the direction of the conductor. We have developed an automated performance system based on multi-agent simulation, which enables music to be played by several real users together with performer agents constructed in the system. In this paper, we first propose an extended MIDI format that supports ornament symbols on a score, such as trills and turns, and a score-following method that handles notes with various ornaments by referring to the ornament information in the MIDI sequences. Then, we present experimental results that evaluate the performance of the developed prototype system.
    Download PDF (8626K)
  • Tasuku ISHIDA, Naoki HASHIMOTO
    Article type: Article
    Session ID: ME2014-46
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, mixed reality and augmented reality are often used in our daily life. These are techniques through which we can easily experience the fusion of real and virtual objects. In order to achieve seamless fusion, photometric registration is one of the most important factors. Therefore, in this research, we propose a photometric registration method based on estimating the surface reflectance of an unspecified object. We propose a simple method of modeling an unspecified object using depth and luminance values. By clustering the measured depth and HSV values and using a simple reflection model, we approximate the surface reflectance of arbitrary objects. As a result, by measuring the surface reflectance of familiar objects in advance, the method can easily be used for photometric registration in AR.
    Download PDF (9229K)
  • Takahiro DAIMON, Tadahiro FUJIMOTO
    Article type: Article
    Session ID: ME2014-47
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We propose a method to edit a target object in real time in a free-viewpoint image generated using a camera array. In our method, first, the target object and its background are extracted from the free-viewpoint image by setting focal planes at their 3D positions. Then, a new image is generated by applying interactive editing operations, such as rotation, translation, and scaling, to the target object and compositing the edited object onto the background.
    Download PDF (9700K)
  • Koji SUGIYAMA, Masahide KANEKO
    Article type: Article
    Session ID: ME2014-48
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Accurate detection of facial feature points is crucial for analyzing facial features and synthesizing impressive caricatures by computer. Many studies on automatic detection of facial feature points have been conducted; however, detection methods accurate enough for the above-mentioned applications have not yet been developed. This paper first proposes automatic extraction methods for hair and eye contours that are robust to individual differences and shooting conditions. Then, an automatic extraction method for open mouths, which previous methods could not handle properly, is examined. Both open and closed mouths are processed properly, and even for expressive faces it is possible to detect the contours of the upper and lower lips. By applying the proposed methods to various face images, it is possible to synthesize facial caricatures that clearly reflect the features of individual persons.
    Download PDF (13961K)
  • Naoki SHIMAKAGE, Masahide KANEKO
    Article type: Article
    Session ID: ME2014-49
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this paper is to reflect the individual features of specific facial parts in 3-D caricatures. Features of specific facial parts are predefined by words, and their relationships with principal components are analyzed. Two types of words are considered: the first type represents basic geometric features such as large, high, and thick, and the second type represents idiomatic expressions of shape such as aquiline nose, button-shaped nose, and goggle eyes. An arbitrary facial feature can be given to a 3-D caricature by changing the weights of the principal components for each facial part so that they reflect the shape represented by a given word.
    Download PDF (12084K)
  • Yuki NAKAMURA, Junichi AKITA
    Article type: Article
    Session ID: ME2014-50
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Imaging systems have been developed so that humans feel they are seeing 'clearly' when they watch images. In particular, the improvement of display resolution is notable, as observed in the rise of the "4K display" in recent years. High-resolution displays, however, have problems such as increased power consumption. The authors have proposed image systems with pseudorandom pixel placement, and in this paper we describe subjective experiments for evaluating a display system that uses pseudorandom pixel placement.
    Download PDF (4264K)
  • Momoko OYOKAWA, Masashi OKUDAIRA
    Article type: Article
    Session ID: ME2014-51
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this article, a method for determining the distance and direction of textured pavement blocks (TPB) on a smartphone is proposed. Based on a previously proposed TPB position determination method, the longest parallel line pairs are determined from extracted edge pixels. Then, the number of pixels between the parallel lines is calculated with the distance formula. Finally, the distance from the camera position to the parallel lines is calculated by comparing this number with a pixels-to-distance reference table created in advance. Experiments on samples at distances of 1, 3, and 5 m with an Android smartphone show that the distance is determined with error rates of about 3-30%.
    Download PDF (14495K)
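The final table-lookup step above can be sketched as piecewise-linear interpolation over a calibrated pixel-count-to-distance table (more pixels between the lines means the blocks are closer). The table values in the test are illustrative, not the paper's calibration data:

```python
def distance_from_pixels(pixel_count, reference):
    """Estimate camera-to-block distance by interpolating a calibrated
    {pixel_count: distance_m} reference table.  Entries are walked from
    the largest pixel count (nearest sample) outward."""
    table = sorted(reference.items(), reverse=True)
    p_hi, d_hi = table[0]
    for p_lo, d_lo in table[1:]:
        if pixel_count >= p_lo:
            # linear interpolation between the bracketing samples
            t = (pixel_count - p_lo) / (p_hi - p_lo)
            return d_lo + t * (d_hi - d_lo)
        p_hi, d_hi = p_lo, d_lo
    return d_hi  # farther than the most distant calibrated sample
```

In practice the table would be built once per device, since the pixel count for a given distance depends on the camera's focal length and resolution.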
  • Yuya AKAIKE, Shoji YAMAMOTO
    Article type: Article
    Session ID: ME2014-52
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, smartphones have become able to take photographs as beautiful as those from digital cameras. However, smartphone cameras have problems: lens flare occurs under direct light, and the extremely miniaturized lens system yields unimpressive photographs. Therefore, we developed a composite-type system that takes beautiful photographs using two or more smartphones linked to each other. Based on the captured photographs, our system can suggest the best shooting direction according to the distribution of environmental light. Moreover, an impressive photograph with perspective can be generated by calculating the parallax between left-side and right-side shots.
    Download PDF (14389K)
  • Hideaki HONDA, Akiyoshi ITODA, Shoji YAMAMOTO
    Article type: Article
    Session ID: ME2014-53
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, the number of elderly drivers has increased with the change in the population's age structure. Periodic assessment of driving ability for elderly people is very important because abilities such as visual and cognitive acuity decline year by year. Therefore, we developed an easy-to-use inspection system for dynamic visual acuity using a personal computer and viewpoint detection. In this system, we employed an original target that changes its spatial frequency and contrast in order to enable inspection at close distance. Moreover, an accurate evaluation was performed by analyzing eye-tracking results recorded during the dynamic visual acuity inspection.
    Download PDF (5332K)
  • Tadashi MATUMOTO, Shoji YAMAMOTO
    Article type: Article
    Session ID: ME2014-54
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, smartphones have spread widely among businesspeople as handheld terminals. They are very useful for browsing documents and reports anywhere with an internet connection. However, getting an overview of such contents is difficult because a smartphone's display is smaller than that of a personal computer. Therefore, we propose a linked display system using multiple smartphones to improve browsing. Our system uses a special table that detects the positions of objects placed on it with a CCD camera and infrared LEDs, and several handheld terminals on the table can display arbitrary images according to their positions. Through various combination patterns of the handheld terminals, we can present both an overview and the details of a document according to the user's request.
    Download PDF (14270K)
  • Sae NEMOTO, Yasuyuki SAITO
    Article type: Article
    Session ID: ME2014-55
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper describes a music creation system that uses not subjective emotions but psychological viewpoints. The user intervenes in the system because each person has different impressions.
    Download PDF (4152K)
  • Article type: Appendix
    Pages App1-
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (81K)
  • Article type: Appendix
    Pages App2-
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (81K)
  • Article type: Appendix
    Pages App3-
    Published: February 15, 2014
    Released on J-STAGE: September 22, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (81K)