ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
41.04 Media Engineering (ME)
Displaying 1-27 of 27 articles from this issue
  • Pages Cover1-
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (110K)
  • Pages Cover2-
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (150K)
  • Takeshi OKAMOTO, Akihiro MATSUFUJI, Shoji YAMAMOTO
    Session ID: ME2017-1
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, the number of dementia patients has increased as people live longer, and early detection of Alzheimer's dementia has become important in response to this change. It is well known that patients with Alzheimer's dementia tend to lose their memory; however, quantitative evaluation of memory ability has been difficult until now. We therefore combine eye tracking with computer graphics in a method based on the visual serial position effect. The serial position effect is a useful psychophysical measure of short-term memory capacity. Our proposed method can select an arbitrary serial position by using the eye-tracking result. In this paper, we evaluate reaction time and the tendency of short-term memory as a basic study of the visual serial position effect. As a preliminary result, a difference in short-term memory ability was indicated between subjects in their 20s and 50s. (An illustrative sketch of tabulating a serial position curve follows this entry.)
    Download PDF (599K)
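    The abstract does not describe the implementation, so the following is only a minimal sketch of how a serial position (recall-by-position) curve could be tabulated; the trial data layout, the function name serial_position_curve, and the sample values are assumptions made purely for illustration.

    # Illustrative sketch only: tabulating a serial position curve from
    # hypothetical recall-trial records. Each trial is assumed to be a list of
    # (position, recalled) pairs; this layout is not taken from the paper.
    from collections import defaultdict

    def serial_position_curve(trials):
        """Return the recall rate per serial position across trials."""
        recalled = defaultdict(int)   # position -> number of successful recalls
        presented = defaultdict(int)  # position -> number of presentations
        for trial in trials:
            for position, was_recalled in trial:
                presented[position] += 1
                if was_recalled:
                    recalled[position] += 1
        return {p: recalled[p] / presented[p] for p in sorted(presented)}

    # Example with three hypothetical 5-item trials.
    trials = [
        [(1, True), (2, True), (3, False), (4, False), (5, True)],
        [(1, True), (2, False), (3, False), (4, True), (5, True)],
        [(1, True), (2, True), (3, False), (4, False), (5, True)],
    ]
    print(serial_position_curve(trials))  # shows primacy/recency pattern if present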
  • Akihiro MATSUFUJI, Takeshi OKAMOTO, Shoji YAMAMOTO
    Session ID: ME2017-2
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, human tracking methods using cameras and image processing have been developed for monitoring systems. These methods can detect human position and behavior in many circumstances. However, pattern recognition with computer vision is hampered by difficult conditions such as occlusion caused by complex indoor environments and variations in human appearance caused by perspective. We therefore propose a robust tracking method using stereo omni-directional cameras in an indoor scene. Our approach uses a 3D cylindrical model based on the human silhouette in the 2D image. Moreover, we employ a constraint on the common foot position in the stereo images to eliminate tracking errors. These ideas give robust estimation of human position even if the captured human figure is partially occluded by obstructions.
    Download PDF (677K)
  • Yuka NAKAMURA, Naoki HASHIMOTO
    Session ID: ME2017-3
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, projection mapping has been widely used to produce spatial effects in interior spaces. For stage production in particular, it can enhance stage effects while requiring only simple stage settings. However, a highly immersive theatrical performance requires accurate image projection over a wide area such as a theater, matched to both the scenario and the shape of the space. In this research, we therefore propose an effective geometric correction that combines a fish-eye lens camera and a normal lens camera. With only a simple procedure and commodity cameras, we can provide seamless multi-projection over the whole theater.
    Download PDF (884K)
  • Hiroshi TADA, Koichi ICHIGE
    Session ID: ME2017-4
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a highly accurate feature point tracking method for omnidirectional image sequences. It is often difficult to extract and track feature points because of the nonlinear distortion of omnidirectional images. We have already proposed a method that tracks feature points after transforming the vicinity of each feature point into a perspective projection, but its tracking performance was sometimes inaccurate. In the proposed method, feature points are extracted from each face of a regular icosahedron onto which the omnidirectional image is projected. Furthermore, we introduce ASIFT into each patch, which enables robust tracking under deformation. The performance of the proposed method is evaluated through experiments. (A sketch of the face-projection geometry follows this entry.)
    Download PDF (843K)
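    The abstract states that feature points are extracted on the faces of a regular icosahedron onto which the omnidirectional image is projected. As a hedged illustration of the underlying geometry only (not the authors' code), the sketch below gnomonically projects a unit viewing direction onto the tangent plane of one face; the face centre and in-plane basis vectors are assumed to be given.

    import numpy as np

    def gnomonic_to_face(direction, face_center, face_u, face_v):
        # direction: unit vector of the viewing ray (e.g. from an equirectangular pixel).
        # face_center: unit vector through the face centre (tangent-plane normal).
        # face_u, face_v: orthonormal basis vectors spanning the face's tangent plane.
        d = float(np.dot(direction, face_center))
        if d <= 0.0:
            return None                       # ray does not reach this face's plane
        p = direction / d                     # intersection with the plane <x, face_center> = 1
        local = p - face_center               # offset within the tangent plane
        return float(np.dot(local, face_u)), float(np.dot(local, face_v))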
  • Tetsu SUZUKI, Koichi ICHIGE
    Session ID: ME2017-5
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a method to reduce the computation time of Background Subtraction from Compressive Measurements (BSCM), a framework for background extraction in fixed-viewpoint video. The conventional BSCM method achieves highly accurate background extraction by exploiting the spatio-temporal correlation of the video tensor. However, its iterative optimization converges slowly to the optimal solution, so the total computation time is long. In the proposed method, we reduce the computation time while maintaining reconstruction accuracy by making the update amount of the penalty parameter variable during optimization. (A generic adaptive-penalty sketch follows this entry.)
    Download PDF (889K)
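    The abstract only states that the penalty update amount is made variable. One common way to make a penalty parameter adaptive in ADMM-type solvers is residual balancing, sketched below as a generic example; it should not be read as the authors' exact update rule, and the constants mu and tau are conventional defaults, not values from the paper.

    def update_penalty(rho, primal_residual, dual_residual, mu=10.0, tau=2.0):
        """Residual balancing: increase rho when the primal residual dominates,
        decrease it when the dual residual dominates, so neither stalls convergence."""
        if primal_residual > mu * dual_residual:
            return rho * tau
        if dual_residual > mu * primal_residual:
            return rho / tau
        return rho

    # Inside an iterative solver one would call, per iteration:
    #   rho = update_penalty(rho, norm(r_primal), norm(r_dual))
    # and rescale the scaled dual variable whenever rho changes.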
  • Yuki HAYAKAWA, Hotaka TAKIZAWA, Hiroyuki KUDO, Toshiyuki OKADA
    Session ID: ME2017-6
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a method for non-rigid registration of abdominal 3D CT images. We newly introduce a lattice spring model, which yields a more natural deformation field than our previous method based on a local affine model. The previous and new methods are applied to 2D artificial images and actual 3D CT images, and experimental results are shown. (A generic lattice-spring force sketch follows this entry.)
    Download PDF (1245K)
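    As a generic illustration of a lattice spring model (not the authors' specific formulation), the sketch below computes the Hooke's-law restoring forces on lattice nodes connected to their neighbours by linear springs; the stiffness constant and connectivity layout are assumptions for the example.

    import numpy as np

    def spring_forces(positions, rest_positions, neighbours, k=1.0):
        """positions, rest_positions: (N, 3) arrays of current/rest node coordinates.
        neighbours: list of (i, j) index pairs connected by a spring."""
        forces = np.zeros_like(positions)
        for i, j in neighbours:
            d = positions[j] - positions[i]
            rest_len = np.linalg.norm(rest_positions[j] - rest_positions[i])
            cur_len = np.linalg.norm(d)
            if cur_len > 0:
                # Hooke's law along the current spring direction.
                f = k * (cur_len - rest_len) * (d / cur_len)
                forces[i] += f
                forces[j] -= f
        return forces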
  • Honoka FUJII, Yasuyuki SAITO, Shigeki SAGAYAMA
    Session ID: ME2017-7
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents semi-automatic music piece creation by extracting impression words from the object and background of a color image. Pictures and music strongly influence human emotion. When people view an image such as a painting or a photograph, it is considered that the impression of the image can be felt more deeply if music that matches that impression is played. In this study, the user separates the image into object and background, and the color information of each is converted into an "impression word". The system then creates music pieces semi-automatically.
    Download PDF (1040K)
  • Haruka JIBIKI, Yasuyuki SAITO, Eita NAKAMURA, Shigeki SAGAYAMA
    Session ID: ME2017-8
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    This study examined a method for discriminating players' page-turning cues from head-shaking motions, with the aim of developing an automatic page-turning system for music scores. Nodding is commonly used as a page-turning cue in actual piano playing and has been used in our proposed system. Since a similar motion is made when pianists shake their heads in time to the music, it is necessary to discriminate page-turning cues from this head-shaking motion. From the experimental results, we found that the displacement of the player's nose position during head shaking is smaller than during a page-turning cue. We therefore added threshold processing, adjustable with a slider in the software, and combined it with gaze analysis so that the system responds only when the player gazes at the lower region of the musical score. (A minimal sketch of this two-condition check follows this entry.)
    Download PDF (949K)
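    The sketch below illustrates the kind of two-condition check described above: respond to a page-turning nod only when the nose displacement exceeds a user-adjustable threshold and the gaze lies in the lower region of the score. The parameter names and default values are assumptions for illustration, not the authors' actual settings.

    def is_page_turn_cue(nose_dy, gaze_y, score_height,
                         nose_threshold=30.0, lower_region_ratio=0.7):
        """nose_dy: downward nose displacement in pixels during the motion.
        gaze_y: vertical gaze position on the score image (0 = top).
        score_height: height of the displayed score in pixels."""
        deep_enough = nose_dy >= nose_threshold                      # slider-adjustable threshold
        gazing_lower = gaze_y >= lower_region_ratio * score_height   # lower region of the score
        return deep_enough and gazing_lower

    print(is_page_turn_cue(nose_dy=42.0, gaze_y=900, score_height=1200))  # True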
  • Sho NAKADAIRA, Asako SOGA
    Session ID: ME2017-9
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this study is to verify the effect of raising the arm opposite the dominant arm in windmill pitching in softball. We measured windmill pitching using an optical motion-capture system. The speed of the dominant wrist, the twisting of the wrist, the speed of leg drawing, and the rotation angle and speed of the waist were calculated as physical feature values. The feature values of normal pitching are compared with those of pitching without raising the arm opposite the dominant arm. (A sketch of one such feature computation follows this entry.)
    Download PDF (653K)
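    As an illustration of one of the feature values mentioned above, the sketch below computes wrist speed from optical motion-capture data by finite differences; the sampling rate and data layout are assumptions for the example.

    import numpy as np

    def wrist_speed(wrist_positions, fps=120.0):
        """wrist_positions: (T, 3) array of wrist marker coordinates in metres.
        Returns per-frame speed in metres per second."""
        deltas = np.diff(wrist_positions, axis=0)      # frame-to-frame displacement
        return np.linalg.norm(deltas, axis=1) * fps    # distance / frame interval

    # The peak wrist speed over a pitch would then be wrist_speed(data).max().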
  • Yuho YAZAKI, Asako SOGA, Bin UMINO, Motoko HIRAYAMA
    Session ID: ME2017-10
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this study is to support the creation of contemporary dance choreography using 3D motion data acquired by motion capture. We have developed the "Body-part Motion Synthesis System (BMSS)", which automatically synthesizes short choreographic motions from 3D motion data and simulates them in 3DCG. To evaluate its effectiveness for students and professional choreographers, experiments were conducted with twenty-five students studying dance in Japan, the USA, and the United Kingdom, and with four professional choreographers. The results confirmed the effectiveness of the system for supporting dance creation, and we examined the scenes in which the current system works effectively as well as points for improvement.
    Download PDF (755K)
  • Zhang ZiLang, Suma Noji, Satoshi Sudo
    Session ID: ME2017-11
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    We provide an environment for virtually experiencing Japanese tea rooms, offering a place where the mind becomes quiet amid everyday life.
    Download PDF (633K)
  • Daiki MIYAZAKI, Naoki HASHIMOTO
    Session ID: ME2017-12
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, spatial augmented reality (SAR) has been used as a method of editing the appearance of real objects with projectors. The targets of SAR now include not only static rigid objects but also dynamic rigid objects and even non-rigid objects. This research proposes a projection mapping system for dynamic 3D non-rigid objects using a low-cost depth sensor. To measure the shape of the target, we use dot markers arranged in a special pattern, recognize the local pattern, and track each dot marker. To project onto the whole target, we also apply surface sampling and a shape measurement using the edges.
    Download PDF (929K)
  • Arisa SATO, Tomoharu ISHIKAWA, Nobuhisa HANAMITSU, Kouta MINAMIZAWA, H ...
    Session ID: ME2017-13
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Product images on fabric-selling websites cannot fully convey texture to customers purchasing fabrics. Previous studies have shown that presenting vibrations together with fabric images is effective for representing fabric texture. In this research, we examine which images and vibrations coincide with the roughness-smoothness evaluation of cloth and clarify a method of presenting cloth images and vibrations.
    Download PDF (1011K)
  • Junki TSUNETOU, Tomoharu ISHIKAWA, Mutsumi YANAKA, Yoshiko YANAGIDA, K ...
    Session ID: ME2017-14
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this research is to clarify how clothing knowledge, experience, and differences in how fabrics are touched affect textile texture evaluation. We prepared groups of 20 subjects each with different levels of clothing knowledge and experience, and conducted a sensory evaluation experiment with two touching methods (tactile only, and visual and tactile) on 39 fabrics with different materials and weave structures. We examined the effects of clothing knowledge, experience, and touching method through factor analysis of the results and their placement in a fabric texture evaluation space.
    Download PDF (663K)
  • Shuntaro FUJITA, Tomoharu ISHIKAWA, Yoshiko YANAGIDA, Kazuya SASAKI, K ...
    Session ID: ME2017-15
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    In online shopping, it is a recognized problem that the texture of the real fabric is hard to convey. Based on a previous study showing that cloth texture is conveyed more easily when drape is added to the fabric, we quantitatively varied the amount of gathering and examined the optimum amount. We conducted three kinds of experiments, taking the sales form and subject group into account (evaluating fabrics with visual information only, and visual and tactile evaluation of clothing/fabric). We also classified subjects into clothing and engineering groups according to the presence or absence of clothing knowledge, and examined how differences between subject groups affect the texture evaluation of the fabric.
    Download PDF (893K)
  • Ibuki SHIBATA, Tomoaki NAKAMURA, Masahide KANEKO
    Session ID: ME2017-16
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Unlike humans and ground robots, a drone moves not only horizontally but also vertically. It is therefore inappropriate to apply the existing concept of 2D personal space. This paper investigates 3D personal space using a small drone under actual flight conditions. The drone flew toward a standing examinee from 15 m away, and the distance at which the examinee felt uncomfortable was measured. As a result, personal space was obtained at altitudes of 1 m to 2.5 m in front of, obliquely in front of, and beside the examinee. In the frontal case, the personal space became larger as the drone flew faster, and it disappeared at altitudes of 4.5 m to 5.5 m.
    Download PDF (1074K)
  • Kentaro HIRABAYASHI, Tomoaki NAKAMURA, Masahide KANEKO
    Session ID: ME2017-17
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    A caricature is a medium that concisely expresses a person's facial features, and because caricatures are used in various settings, studies on their automatic generation have been carried out, including automatic extraction of facial feature points. However, because the extraction precision is not always stable and the necessary information is not completely acquired, the quality of automatically generated caricatures has not yet reached that of hand-drawn ones. This paper proposes a method of automatically extracting the mouth region that is robust to shooting conditions and stabilizes the extraction accuracy. In addition, we aim to automatically acquire information about the inside of the mouth, which could not be obtained by conventional methods, to improve the expressiveness of caricatures. The proposed method reduces the influence of shooting conditions on mouth-region extraction and enables robust automatic extraction. The extraction accuracy is improved, and intraoral information can be acquired automatically, which improves the expressiveness of the caricatures.
    Download PDF (760K)
  • Erika Wada, Sho Kato, Mie Sato
    Session ID: ME2017-18
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    A user walks while wearing a head-mounted display with a stereo camera. While walking, the user views images taken by the stereo camera, which is set at his/her eye positions. This study examines how changing the viewing angle on the head-mounted display influences the sense of distance, discomfort, and fatigue while walking.
    Download PDF (850K)
  • Yuta KUME, Mie SATO
    Session ID: ME2017-19
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, with the development of head-mounted displays and motion controllers, it has become possible to superimpose the user's hands in a virtual space, and studies of bare-handed interaction with virtual objects have attracted attention. This study develops a system that allows a user to interact with a virtual object with his/her hands using a motion controller and a head-mounted display. In this paper, we examine whether visual discomfort can be reduced and operability improved by using a hand model that has the same size and color as the user's hand.
    Download PDF (809K)
  • Haruna Kimura, Sho Kato, Mie Sato
    Session ID: ME2017-20
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, with the development of head-mounted displays (HMDs), interaction with virtual objects on an HMD has attracted attention. Our previous study aimed at natural interaction with virtual objects and developed an AR system that allows a user to grasp a virtual object with his/her bare hands. However, it was not easy for the user to perceive the feeling of grasping a virtual object with bare hands. In this study, we try to improve this perception by adding auditory cues when a user grasps virtual objects of various shapes with his/her bare hands.
    Download PDF (780K)
  • Sato Ishigaki, Sho Kato, Hiroshi Hasegawa, Mie Sato
    Session ID: ME2017-21
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Interaction between a user and a virtual object is one of the key subjects in augmented reality (AR). Our previous study developed an AR system with which a user could grasp a virtual object with his/her bare hand and examined whether pseudo-haptics from visual stimuli could be perceived by the user. However, pseudo-haptics were not perceived in all cases. Therefore, in this study, we propose an AR system that uses the combined effect of visual and auditory stimuli and examine which auditory stimuli effectively provide a user with pseudo-haptics.
    Download PDF (773K)
  • Yusuke ZAITSU, Koichi ICHIGE
    Session ID: ME2017-22
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose an accurate speech enhancement method using compressive sensing with a variable sparsity level. Compressive sensing (CS)-based methods can efficiently enhance speech signal quality, but they often require the noise level in advance of the speech enhancement processing. We estimate the noise level from the given input signal and introduce several threshold levels for the input amplitudes so that speech components are effectively enhanced. The performance of the proposed method is evaluated through computer simulation. (A generic noise-level and thresholding sketch follows this entry.)
    Download PDF (1108K)
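    The sketch below is a very rough illustration of the pre-processing idea described above: estimating a noise level from the input and deriving an amplitude threshold from it. It is a generic noise-floor estimate with soft thresholding, not the paper's compressive-sensing reconstruction, and the percentile and scaling factor are assumed values.

    import numpy as np

    def estimate_noise_level(frames):
        """frames: (N, L) array of short-time signal frames.
        Uses the lowest-energy frames as a crude noise-floor estimate."""
        energies = np.mean(frames ** 2, axis=1)
        quiet = frames[energies <= np.percentile(energies, 10)]
        return np.sqrt(np.mean(quiet ** 2))

    def soft_threshold(x, noise_level, factor=3.0):
        """Shrink coefficients whose magnitude falls below factor * noise level."""
        t = factor * noise_level
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)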
  • Jonghoon IM, Hiromitsu FUJII, Atsushi YAMASHITA, Hajime ASAMA
    Session ID: ME2017-23
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a method of concrete crack detection using a visual sensor and a sound sensor. First, three-dimensional measurement of the concrete surface is performed using the light-section method. The obtained point cloud data are analyzed and the positions of the cracks are identified. Next, the positions to hit with a hammer are decided based on the crack position information. Finally, we actually hit the concrete with the hammer and analyze the acoustic signal to detect the direction of the crack.
    Download PDF (558K)
  • Kazuhito MURAKAMI
    Session ID: ME2017-24
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this talk, typical and unique methods for measuring biometric data from thermal-vision image sequences are introduced. For example, methods for glasses extraction, contact lens extraction, respiration measurement, heart rate measurement, and personal identification are explained.
    Download PDF (428K)
  • Pages 91-
    Published: 2017
    Released on J-STAGE: April 14, 2021
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (130K)