ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Vol. 33, No. 17
Displaying 1-28 of 28 articles from this issue
  • Article type: Cover
    Pages Cover1-
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (15K)
  • Article type: Index
    Pages Toc1-
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (91K)
  • Article type: Bibliography
    Pages Misc1-
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (25K)
  • Michihiko Tomikawa, Masaomi Oda
    Article type: Article
    Session ID: HI2009-68/3DIT2009-1
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We assume that people perceive a moving target as a living thing. This assumption is supported by a study of mental attribution, which showed that a moving target is regarded as having a mind, and by a study of image schemas, which indicated that emotion can be elicited from the meaning of verbal expressions. In this paper, we examined through a psychological experiment whether people share common emotional responses to a moving target. The target was a small circle that moved in simple ways such as rising, reciprocation, and circular motion. As a result, the participants felt a common emotion for each movement pattern of the target. This result will be useful for the realization of emotional interfaces.
    Download PDF (547K)
  • Rie HAYASHIHARA, Masaomi ODA
    Article type: Article
    Session ID: HI2009-69/3DIT2009-1
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We investigated how the complexity of chord progressions influences pleasingness, liking, and other feelings. Six musical stimuli, covering three levels of complexity in major and minor chord progressions, were presented 12 times each, and participants were asked to rate them on 11 measures. For pleasingness, the rating in the low-complexity condition was high from the first trial, whereas by the last trial the rating in the high-complexity condition had risen to the same level as the low-complexity condition. However, the effects varied across the other feelings: for example, there was no effect on sadness, and the effect on happiness was limited to the major chord progressions.
    Download PDF (695K)
  • Takuya TANI, Kouji HASEGAWA, Hiroyasu SAKAMOTO, Toshio SAKATA, Hiroshi ...
    Article type: Article
    Session ID: HI2009-70/3DIT2009-1
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a method for measuring facial expression and emotion using canonical correlation analysis (CCA). The CCA analyzes the correlation between two kinds of quantities: features of facial expression images and subjectively evaluated values of valence and arousal. We incorporate the gaze properties of humans observing facial expression images into the proposed method and examine its effectiveness. We also propose a method for performing CCA on singular matrices by using intermediate variables. In addition, we employ a kernel CCA method with Gaussian kernel functions and verify its effectiveness by comparing it with linear CCA. (A minimal CCA sketch follows this entry.)
    Download PDF (819K)
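    The following is a minimal, illustrative sketch of linear CCA between a set of image features and valence/arousal ratings, as context for the abstract above. It uses a generic ridge-regularized formulation to cope with near-singular covariance matrices; it is not the intermediate-variable method or the gaze-weighted features proposed in the paper, and all array shapes are hypothetical.

    ```python
    # Generic regularized linear CCA sketch (not the paper's method).
    import numpy as np

    def linear_cca(X, Y, reg=1e-6):
        X = X - X.mean(axis=0)          # center each variable set
        Y = Y - Y.mean(axis=0)
        n = X.shape[0]
        Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
        Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
        Sxy = X.T @ Y / (n - 1)

        def inv_sqrt(S):
            vals, vecs = np.linalg.eigh(S)
            return vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T

        # Whiten both sets; the SVD of the whitened cross-covariance gives the
        # canonical correlations (singular values) and weight directions.
        K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
        U, s, Vt = np.linalg.svd(K)
        Wx = inv_sqrt(Sxx) @ U          # canonical weights for image features
        Wy = inv_sqrt(Syy) @ Vt.T       # canonical weights for valence/arousal
        return s, Wx, Wy                # s[0] is the first canonical correlation

    # Hypothetical shapes: 100 face images x 50 features, 2 rating dimensions.
    rng = np.random.default_rng(0)
    corrs, Wx, Wy = linear_cca(rng.normal(size=(100, 50)), rng.normal(size=(100, 2)))
    ```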
  • Shogo TOKAI, Kenji MASE, Tetsuya KAWAMOTO, Toshiaki Fujii
    Article type: Article
    Session ID: HI2009-71/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this report, we describe a method for visualizing dynamic scene situations from multiple videos using the Peg-Scope Vision system. To improve the temporal flexibility of the system, we use high-speed cameras and develop a method for handling the temporal features of the final video content construction. We apply the method experimentally to an actual skill scene and discuss its effectiveness.
    Download PDF (1105K)
  • Minoru YOKONO, Masahiro SUZUKI, Kazutake UEHIRA
    Article type: Article
    Session ID: HI2009-72/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    To allow an observer to handle a virtual object with their body in mixed/augmented reality as if it were a real object, we propose estimating the location where the observer visually perceives the virtual object from the observer's actions, because a haptic stimulus must be given at that location. To evaluate our proposal, we investigated the features of the observer's actions when reaching out for the virtual object, and show that changes in the apparent depth of the virtual object affect the observer's actions. These findings suggest that the state, characteristics, and mechanism of the changes in the apparent depth of the virtual object must be clarified to achieve the estimation.
    Download PDF (633K)
  • Ryousuke Sano, Motomasa Tomida, Kiyosi Hoshino
    Article type: Article
    Session ID: HI2009-73/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we constructed a 3D hand posture estimation system that searches a database for the most similar image at each time step, not by narrowing the search space using past results, but by a two-stage search: the first stage is coarse screening using the relative positions of fingernails, and the second is a fine similarity calculation using low-order image features. The estimated results are output as finger joint angles. The experimental results showed that the system estimated human hand posture with a standard deviation of estimation error of 18.31 degrees over all finger joint angles, which suggests the effectiveness of the proposed system. (A sketch of such a two-stage search follows this entry.)
    Download PDF (801K)
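    As context for the two-stage search described above, here is a minimal sketch of a coarse-then-fine database lookup. The feature dimensions, candidate count, and random data are hypothetical stand-ins, not the paper's fingernail-position or low-order image features.

    ```python
    # Two-stage nearest-neighbour search sketch: coarse screening, then fine ranking.
    import numpy as np

    def two_stage_search(query_coarse, query_fine, db_coarse, db_fine, keep=100):
        # Stage 1: coarse screening -- keep the `keep` entries whose cheap coarse
        # features are closest to the query.
        coarse_dist = np.linalg.norm(db_coarse - query_coarse, axis=1)
        candidates = np.argsort(coarse_dist)[:keep]
        # Stage 2: fine matching -- rank the survivors by similarity of richer
        # features and return the best database index.
        fine_dist = np.linalg.norm(db_fine[candidates] - query_fine, axis=1)
        return candidates[np.argmin(fine_dist)]

    # Hypothetical database: 10,000 stored postures with 10-D coarse and 64-D fine
    # features; the returned index would map to stored finger joint angles.
    rng = np.random.default_rng(1)
    db_c, db_f = rng.normal(size=(10000, 10)), rng.normal(size=(10000, 64))
    best = two_stage_search(rng.normal(size=10), rng.normal(size=64), db_c, db_f)
    ```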
  • Hideaki KOBAYASHI, Kikuo ASAI
    Article type: Article
    Session ID: HI2009-74/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this paper was to investigate how differences in stereoscopic devices and in the depth presentation position influence size perception. First, using three types of stereoscopic devices, we measured perceived size at three depth presentation positions with a psychophysical method. We then analyzed the relationship between the three depth presentation positions and size perception using the point of subjective equality. As a result, it was clarified that the error in size perception grows as the viewing distance increases. It was also shown that the presentation conditions of physical objects influence size perception.
    Download PDF (688K)
  • Satohiro TAJIMA, Masato OKADA
    Article type: Article
    Session ID: HI2009-75/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The power law in the amplitude spectra of natural scenes provides not just an efficient description of those scenes but also a foundation for image processing. Psychophysical studies show that the forms of the amplitude spectra are clearly related to human visual performance. However, the underlying neuronal mechanisms and computations that account for the perception of natural image statistics are poorly understood. We propose a theoretical framework for the neuronal encoding and decoding of natural image statistics, based on hypothesized population activities of spatial-frequency-selective neurons observed in early visual cortex. The predictions of the computational model are consistent with the experimental data reported in a previous study. In particular, the qualitative disparities between performance in the fovea and in the parafovea can be explained by the difference in the distribution of neurons' preferred frequencies. The model predicts that frequency-tuned neurons have asymmetric tuning curves for amplitude-spectrum slopes. (A sketch of amplitude-spectrum slope estimation follows this entry.)
    Download PDF (715K)
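    As a small illustration of the image statistic the abstract refers to, the sketch below estimates the amplitude-spectrum slope of an image by radially averaging its 2D FFT amplitude and fitting a line in log-log coordinates. The binning choices and the random test image are assumptions for illustration, not the stimuli or analysis of the paper.

    ```python
    # Fit the power law A(f) ~ 1/f^alpha to an image's amplitude spectrum.
    import numpy as np

    def amplitude_spectrum_slope(img):
        # 2-D FFT amplitude with the zero frequency moved to the centre.
        amp = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
        h, w = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))
        fx = np.fft.fftshift(np.fft.fftfreq(w))
        radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
        # Radially average the amplitude over log-spaced frequency bins.
        bins = np.logspace(np.log10(1.0 / max(h, w)), np.log10(0.5), 20)
        idx = np.digitize(radius, bins)
        f, a = [], []
        for k in range(1, len(bins)):
            mask = idx == k
            if mask.any():
                f.append(radius[mask].mean())
                a.append(amp[mask].mean())
        # Slope of log-amplitude vs log-frequency; natural scenes give roughly -1.
        slope, _ = np.polyfit(np.log(f), np.log(a), 1)
        return slope

    print(amplitude_spectrum_slope(np.random.default_rng(2).normal(size=(256, 256))))
    ```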
  • Yoshitaka HIROSE, Ko SAKAI
    Article type: Article
    Session ID: HI2009-76/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We discuss the information representation in V1 neurons including their surround modulation, with specific interest in the sparseness of their coding. We compared statistical indices, kurtosis and KL divergence, between a surround-modulation model and a classical receptive field (CRF) model to evaluate the sparseness of the coefficients represented in the models. The results showed a low kurtosis in the coefficient distribution of the surround-modulation model, but a small KL distance between the models indicated that their distributions were fairly close to each other. Next, we modified the nonlinear function that controls sparseness in the cost function with the aim of improving the kurtosis. The modification did not improve the distribution. These results suggest that surround modulation decreases the sparseness of the information representation in V1 cells, or that the representation may not be characterized solely by sparseness. (A sketch of the kurtosis and KL-divergence indices follows this entry.)
    Download PDF (661K)
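    The sketch below illustrates the two sparseness indices named in the abstract, kurtosis and KL divergence, on placeholder coefficient samples; it is not the surround-modulation or CRF model itself, and the Laplacian/Gaussian samples are assumptions standing in for model coefficients.

    ```python
    # Sparseness indices: excess kurtosis and histogram-based KL divergence.
    import numpy as np
    from scipy.stats import kurtosis, entropy

    rng = np.random.default_rng(3)
    coeff_a = rng.laplace(size=100000)   # stand-in for sparse (heavy-tailed) coefficients
    coeff_b = rng.normal(size=100000)    # stand-in for less sparse coefficients

    # Higher excess kurtosis means a sharper peak and heavier tails, i.e. a
    # sparser distribution of coefficients.
    print("kurtosis A:", kurtosis(coeff_a), "kurtosis B:", kurtosis(coeff_b))

    # KL divergence between the two distributions, estimated from shared-bin
    # histograms (a small constant avoids log(0)).
    bins = np.linspace(-8, 8, 201)
    p, _ = np.histogram(coeff_a, bins=bins, density=True)
    q, _ = np.histogram(coeff_b, bins=bins, density=True)
    print("KL(P||Q):", entropy(p + 1e-12, q + 1e-12))
    ```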
  • Megumi OKI, Nobuhiko WAGATSUMA, Ko SAKAI
    Article type: Article
    Session ID: HI2009-77/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Figure-ground segregation and the determination of figure direction are necessary processes for object recognition. We investigated the effect of feature-based attention on the perception of figure direction through a psychophysical experiment with ambiguous figures consisting of random-dot patterns. Subjects were asked to report the figure direction after attending to a direction of motion. They showed a tendency for the region segregated by the attended motion direction to be perceived as figure more frequently. This result indicates that feature-based attention plays a critical role in modulating the human perception of figure direction.
    Download PDF (882K)
  • Aya MITSUOKA, Yasuhiro HATORI, Ko SAKAI
    Article type: Article
    Session ID: HI2009-78/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Medial-axis representation of object shape in V1 and the synchronous firing of neurons have attracted attention as mechanisms of shape perception. Although physiological studies on synchronization have advanced, direct evidence for the role of synchronization in shape perception has not been obtained. In the present study, we investigated psychophysically the effect of synchronization on shape perception, using stimuli consisting of contours that flicker at distinct synchronization rates. Our results showed that the perception of figure direction is facilitated for an object with a higher synchronization rate. The magnitude of the modulation in the perception of figure direction was independent of stimulus shape. The results suggest that synchronization along the stimulus contour is crucial for shape perception.
    Download PDF (1471K)
  • Tsubasa MAEDA, Masayuki KIKUCHI
    Article type: Article
    Session ID: HI2009-79/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This study investigated the influence of the range of spatial attention on the position of amodal contours through a psychophysical experiment. We used three types of occluded patterns: a square, a square-like pattern whose vertices are rounded inward, and a square-like pattern whose vertices protrude outward. One of the four vertices was occluded by a disk. We measured the position of the amodal contour using a probe whose position was controlled by a staircase method. At the same time, to control the range of spatial attention, subjects were asked to judge whether three letters presented at the same distance from the probe were identical. The results indicate that the shapes of amodal contours were affected by the spatial range of attention.
    Download PDF (1744K)
  • Takuro MANO, Satoshi SHIOIRI, Kazumichi MATSUMIYA, Ichiro KURIKI
    Article type: Article
    Session ID: HI2009-80/3DIT2009-2
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    When some of a particular set of layouts are repeatedly presented in a visual search task, participants learn the display layouts implicitly. The target-search time becomes shorter as the number of repetitions increases, although participants do not notice the repetitions (the contextual cueing effect: Chun and Jiang, Cognit Psychol, 36(1), 28-71, 1998). In this study, we investigated the effect of explicit knowledge, or awareness, of the repetitions. We compared the size of the contextual cueing effect (shortening of reaction time) and eye-movement characteristics between layouts that observers memorized explicitly and those they memorized implicitly. The results show that fixation duration is lengthened for layouts that observers memorized explicitly. These results suggest that there are different mechanisms for the explicit and implicit processes of visual memory.
    Download PDF (1093K)
  • Takayuki Itoh, Takuro Mano, Kazumichi Matsumiya, Ichiro Kuriki, Satosh ...
    Article type: Article
    Session ID: HI2009-81/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The human visual system processes retinal input images to extract information that is useful for living. Eye movement control is one function of this information-extraction system, and there should be a close relationship between the positions of eye fixations and image features. In this study, we measured observers' eye movements during free viewing of natural images to investigate this relationship. Fixation durations varied from shorter than 100 ms to longer than 1 s. We investigated how fixation duration is related to image features with a feature map analysis. (A sketch of a simple feature map analysis follows this entry.)
    Download PDF (610K)
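    As a loose illustration of a feature map analysis, the sketch below computes a simple local-contrast map and samples it at fixation coordinates. The contrast feature, window size, image, and fixation points are all hypothetical; the paper's actual feature maps may differ.

    ```python
    # Sample a local-contrast feature map at (hypothetical) fixation positions.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_contrast_map(img, size=15):
        # Local standard deviation: sqrt(E[x^2] - E[x]^2) over a size x size window.
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img ** 2, size)
        return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    rng = np.random.default_rng(4)
    image = rng.random((480, 640))                          # stand-in for a natural image
    fmap = local_contrast_map(image)
    fixations = rng.integers(0, [480, 640], size=(50, 2))   # hypothetical (y, x) fixations
    feature_at_fix = fmap[fixations[:, 0], fixations[:, 1]]
    # Compare the feature at fixations against the map's overall mean.
    print(feature_at_fix.mean(), fmap.mean())
    ```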
  • Yasumasa Ogata, Keiji Uchikawa
    Article type: Article
    Session ID: HI2009-82/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The next gaze position is determined by visual information in the peripheral visual field. The saliency map model proposed in previous studies is assumed to explain how eye movements are induced, but it does not predict the order of successive gazes. In this study, we aim to clarify the visual stimulus conditions under which gaze is induced. A target with a given luminance contrast was presented among distractors with a fixed luminance contrast. We measured the threshold for inducing early eye movements and also measured the detection threshold. As a result, the threshold for target detection was 61% contrast. However, the threshold for inducing early eye movements was not determined.
    Download PDF (905K)
  • Haruka TAKENAKA, Takeshi KOHAMA, Hisashi YOSHIDA, Naohiro TODA
    Article type: Article
    Session ID: HI2009-83/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recent studies have demonstrated the important roles of involuntary fixational eye movements in visual perception. While microsaccades and drift eye movements maintain fixation, they are modulated by the visual attention system and emphasize high spatial-frequency components of visual information. In this study, employing a parametric approach based on a statistical autoregressive model, we analyzed the frequency-domain properties of drift eye movements before and after the occurrence of microsaccades. The results indicated that the low-frequency components of the drift eye movements were reinforced immediately after microsaccades. This tendency was more prominent when visual attention was dispersed over the parafoveal visual field. These results suggest that microsaccades and drift eye movements are controlled by higher-order brain functions so as to acquire details of the visual information from peripheral vision. (A sketch of autoregressive spectral estimation follows this entry.)
    Download PDF (838K)
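    The sketch below illustrates parametric (autoregressive) spectral estimation of an eye-position trace via the Yule-Walker equations, the general class of analysis the abstract describes. The AR order, sampling rate, and synthetic drift-like signal are assumptions for illustration only, not the paper's data or model settings.

    ```python
    # Yule-Walker AR fit and parametric power spectrum of a 1-D trace.
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def ar_spectrum(x, order=8, fs=500.0, nfreq=256):
        x = x - x.mean()
        # Biased autocorrelation estimates r[0..order].
        r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
        # Yule-Walker: solve the Toeplitz system R a = r for the AR coefficients,
        # then compute the innovation variance.
        a = solve_toeplitz(r[:-1], r[1:])
        sigma2 = r[0] - np.dot(a, r[1:])
        # AR power spectrum: sigma^2 / |1 - sum_k a_k exp(-i 2 pi f k / fs)|^2
        freqs = np.linspace(0, fs / 2, nfreq)
        z = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(1, order + 1)))
        psd = sigma2 / np.abs(1 - z @ a) ** 2
        return freqs, psd

    # Synthetic low-frequency drift-like trace sampled at an assumed 500 Hz.
    t = np.arange(0, 2, 1 / 500.0)
    drift = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.default_rng(5).normal(size=t.size)
    freqs, psd = ar_spectrum(drift)
    ```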
  • Kou TAKAHASHI, Masayuki KIKUCHI
    Article type: Article
    Session ID: HI2009-84/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    When we observe moving objects behind occluders and only the motions of local parts are visible, the entire motion cannot be perceived. This is called the "aperture problem". The problem can be solved by integrating two or more pieces of local information (motion integration). Previous studies of motion integration have usually used 2D stimuli; only a few have used 3D stimuli. Among them, one study used dynamic random-dot stereograms to investigate the relation between the collinear arrangement of moving lines in the depth dimension and motion integration, and the relation between figure-ground separation and motion integration. This study performed psychophysical experiments to clarify the relation between figure-ground separation and motion integration in detail, using 2D stimuli expressing concave/convex edges based on the spatiotemporal-frequency characteristics of 2D pictures for figure-ground separation, as well as 3D stimuli based on dynamic random-dot stereograms. We found that when the motions of concave edges were perceived, subjects could easily integrate them, whereas when convex edges were presented, subjects could not integrate them well. This result suggests that there is a relation between figure-ground separation and motion integration.
    Download PDF (1355K)
  • Tomoya KITANO, Masayuki KIKUCHI
    Article type: Article
    Session ID: HI2009-85/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    There are various designs (figures) in the world, and how those figures are perceived by observers is determined by the Gestalt factors defined in Gestalt psychology (Wertheimer, 1923). It has already been shown that the factor of closure works strongly compared with other factors coexisting in a figure. However, there are few studies investigating the strength of factors such as similarity in addition to the factors of proximity and closure, and there is no previous work examining perception when Gestalt factors are combined under binocular stereopsis (depth perception based on the disparity between the right and left retinal images). In the present study, the following two points were examined. The first experiment investigated whether there are any differences in perceived distance along the horizontal and vertical axes in the frontoparallel plane versus along the depth axis. The second experiment addressed differences in perception using stimuli with and without the factor of absence of remainder. In addition, we investigated the 3D nature of some Gestalt factors. From the first experiment, we concluded that perceived distances along the X, Y, and Z axes were almost the same. The second experiment showed that there was no remarkable difference between stimuli with and without the factor of absence of remainder, and that when 3D convexity/concavity was given, the grouping power was strengthened.
    Download PDF (1070K)
  • Kotaro HASHIMOTO, Kazumichi MATSUMIYA, Ichiro KURIKI, Satoshi SHIOIRI
    Article type: Article
    Session ID: HI2009-86/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We can identify objects independently of viewpoint. Studies of 3D shape constancy have shown that the visual system has an object-recognition mechanism with relatively narrow tuning to viewpoint. There is a phenomenon called the viewpoint aftereffect that is related to such a viewpoint-variant process (Fang, F. & He, S., Neuron, 45, 793-800, 2005). The viewpoint aftereffect is a phenomenon in which the direction of a face appears to be shifted after observing the same face at an angle. We investigated the viewpoint aftereffect under several conditions and obtained results suggesting that the effect is indeed related to object perception.
    Download PDF (727K)
  • Yukako MURAKAMI, Ko SAKAI
    Article type: Article
    Session ID: HI2009-87/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We studied shape-from-highlight psychophysically under a viewpoint light source. Specifically, we examined the accuracy and confidence of depth perception under a viewpoint light-source condition and under other light-source conditions. The results showed that highlights increase confidence but not accuracy. This indicates that highlights do not facilitate correct perception of 3D shape but rather help observers perceive a unique shape, suggesting that a crucial effect of highlights is to enhance the 3D impression.
    Download PDF (688K)
  • Naruhiko FUKINO, Keiji UCHIKAWA
    Article type: Article
    Session ID: HI2009-88/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    It has been suggested that multiple chromatic channels exist at higher levels of our color vision; however, their precise features are not yet clear. Our aim is to investigate these multiple chromatic channels using dichoptic masking. A target stimulus was presented to one eye and a mask stimulus to the other eye. The target was a Gabor patch modulated along the r/g axis. The mask consisted of random dots modulated along various color directions. The results of one observer suggest that the cardinal channels and another channel may exist, but the results of the other observer do not suggest the existence of such color channels.
    Download PDF (475K)
  • Satoshi SHIOIRI
    Article type: Article
    Session ID: HI2009-89/3DIT2009-3
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    There are two types of binocular cues available for the perception of motion in depth. One is the change of binocular disparity over time (DCT), and the other is the velocity difference between the left and right retinal images (interocular velocity difference, IOVD). An important question is whether the two cues are actually used by the human visual system and whether they play different roles in the perception of motion in depth. This manuscript provides evidence that the visual system has different temporal-frequency properties for the velocity (IOVD) and disparity (DCT) cues for motion in depth; these differences suggest that the two cues may have different roles. (A sketch of the two cues follows this entry.)
    Download PDF (787K)
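    As a small numerical illustration of the two cues defined above: DCT is the temporal derivative of the binocular disparity signal, while IOVD is the difference of the two monocular retinal velocities. The position traces and sampling rate below are hypothetical; physically the two quantities coincide, and the question in the manuscript is which signal the visual system extracts.

    ```python
    # DCT vs IOVD computed from hypothetical left/right retinal position traces.
    import numpy as np

    fs = 120.0                                  # assumed sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    x_left = 0.5 * t                            # hypothetical retinal positions (deg)
    x_right = -0.5 * t

    disparity = x_left - x_right                # binocular disparity over time
    dct = np.gradient(disparity, 1 / fs)        # disparity change over time (DCT)
    iovd = np.gradient(x_left, 1 / fs) - np.gradient(x_right, 1 / fs)  # IOVD
    assert np.allclose(dct, iovd)               # same physical quantity, different pathways
    ```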
  • Article type: Appendix
    Pages App1-
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (82K)
  • Article type: Appendix
    Pages App2-
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (82K)
  • Article type: Appendix
    Pages App3-
    Published: March 18, 2009
    Released on J-STAGE: September 20, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (82K)