ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
38.10
21 articles in this issue
  • Article type: Cover
    Pages Cover1-
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (12K)
  • Article type: Index
    Pages Toc1-
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (65K)
  • Daisuke YOSHIDA, Yuichi NONAKA, Takeru KISANUKI
    Article type: Article
    Session ID: HI2014-28/3DIT2014-1
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper introduces an image visibility enhancement algorithm, and its implementation, that takes the visual characteristics of the human eye into account. The key requirements in visibility enhancement are to preserve the information in the original image-sensor signal and to render that information in the processed image in a way that is visible to human observers. The proposed algorithm applies tone redistribution to the input image according to the local luminance distribution, followed by contrast correction based on Retinex theory and histogram equalization. It yields clear pictures even under difficult shooting conditions such as backlight and spotlight, and it was confirmed to handle video in real time (a minimal sketch of the pipeline appears below).
    Download PDF (5442K)
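    A minimal sketch of the three-stage pipeline described in the abstract, assuming OpenCV and NumPy. The paper's exact tone-redistribution and Retinex variants are not given here, so an adaptive gamma, a single-scale Gaussian surround, and CLAHE stand in for them; all function names and parameter values are illustrative.

```python
import cv2
import numpy as np

def enhance_visibility(bgr, sigma=30.0):
    # Work on the luminance channel only, preserving chroma.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0].astype(np.float32)

    # 1) Tone redistribution driven by the local luminance distribution:
    #    an adaptive gamma that brightens pixels in dark neighborhoods
    #    (illustrative; the paper's exact mapping is not specified).
    local_mean = cv2.GaussianBlur(L, (0, 0), sigma)
    tone = 255.0 * (L / 255.0) ** (0.5 + 0.5 * local_mean / 255.0)

    # 2) Retinex-style contrast correction:
    #    reflectance ~ log(image) - log(blurred image).
    blur = cv2.GaussianBlur(tone, (0, 0), sigma)
    retinex = np.log(tone + 1.0) - np.log(blur + 1.0)
    retinex = cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX)

    # 3) Histogram equalization (contrast-limited to avoid noise blow-up).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(retinex.astype(np.uint8))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```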
  • Yushin Kakei, Shun'ichi Tano, Tomonori Hashiyama, Junko Ichino, M ...
    Article type: Article
    Session ID: HI2014-29/3DIT2014-2
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Giving a tactile feel to a virtual object is highly effective, in that it improves both operability and the sense of reality. Recently, inducing a sense of touch through visual illusion has attracted attention as a way to provide tactile sensation without dedicated equipment. However, the relationship between touch and vision has not been quantified, and its potential for augmentation remains open to study. In this study, we investigate how much vision contributes to tactile perception. In addition, we propose three methods for providing a new finger tactile sensation solely by augmenting visual information.
    Download PDF (8204K)
  • Hiroki Taguchi, Hisataka Suzuki, Kosaku Ogawa, Akihiko Shirai
    Article type: Article
    Session ID: HI2014-30/3DIT2014-3
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This article proposes a motion recognition method for the Oculus Rift, currently the most popular head-mounted display available, that uses neck motion as non-verbal interaction. We use the sensor fusion already built into the Oculus Rift to detect the user's neck motion. The method defines three gestures, namely "agree," "disagree," and "ask a question," by evaluating the constancy of the head turning angle (an illustrative sketch follows below).
    Download PDF (3587K)
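    An illustrative sketch of classifying the three gestures from head-orientation samples, assuming yaw/pitch angle traces (radians) are already read from the HMD's sensor fusion. The thresholds and the angle-pattern criteria are assumptions, not the paper's exact method.

```python
import numpy as np

def classify_gesture(yaw, pitch, swing_thresh=0.15, hold_thresh=0.2):
    # yaw/pitch: head angles (radians) over a short time window.
    yaw, pitch = np.asarray(yaw, float), np.asarray(pitch, float)
    yaw_swing = yaw.max() - yaw.min()
    pitch_swing = pitch.max() - pitch.min()

    # Repeated vertical motion (nodding) is read as "agree".
    if pitch_swing > swing_thresh and pitch_swing > yaw_swing:
        return "agree"
    # Repeated horizontal motion (shaking) is read as "disagree".
    if yaw_swing > swing_thresh:
        return "disagree"
    # A sustained, nearly constant turned angle is read as "ask a question".
    if abs(yaw.mean()) > hold_thresh and yaw.std() < 0.05:
        return "ask a question"
    return "none"
```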
  • Makoto Tanabe, Yoshihiro Sejima, Masayuki Yamamoto, Atsushi Osa
    Article type: Article
    Session ID: HI2014-31/3DIT2014-4
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A previous study reported that the lecturer's avatar in a remote-education communication support system, driven by a group gaze model in which the avatar gazed at the audience at a rate of 13% to the right, 60% to the center, and 27% to the left of the virtual classroom, was effective for group interaction and communication, especially for the sense of unity. We believe this result is explained by the experimental finding that a lecturer in a real classroom gazes for a longer duration to the left side than to the right. In this study, we investigated the relationship between gaze duration and the experimental tasks assigned to participants. Results show that gazing time depends on the task, and that two tasks produced left and right deviations. Furthermore, we discuss why the group gaze model can improve the sense of unity in the virtual classroom (a toy sketch of the gaze ratios appears below).
    Download PDF (4635K)
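    A toy sketch of the group gaze model's direction ratios; how the avatar dwells on a region and animates the gaze is not specified in the abstract and is omitted here.

```python
import random

# Gaze ratios from the group gaze model in the abstract.
GAZE_RATIOS = {"right": 0.13, "center": 0.60, "left": 0.27}

def next_gaze_direction():
    # Sample the avatar's next gaze region according to the model.
    regions = list(GAZE_RATIOS)
    return random.choices(regions, weights=[GAZE_RATIOS[r] for r in regions])[0]

# Over many calls the avatar's gaze approaches the 13/60/27 distribution.
```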
  • Michimi INOUE, Takumi SOTOME, Mie SATO, Miyoshi AYAMA, Naoki HASHIMOTO
    Article type: Article
    Session ID: HI2014-32/3DIT2014-5
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we examined which number of gradation levels gives a good impression, based on impression assessments, and then investigated the characteristics of gradation with a focus on luminance difference. Results showed that the impression of an image improved as the number of gradation levels increased. However, the effect was significant only between 32 and 128 levels, and at 128 levels the luminance difference per step exceeded the just noticeable difference (JND). We found that 128 gradation levels are necessary when the response curve of the display device follows the JND; when it follows a gamma curve of 2.2, 256 levels are necessary (a worked example follows below).
    Download PDF (7020K)
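    A small worked example of the quantity discussed above: the relative-luminance step between adjacent gradation levels when the display response follows a 2.2 gamma curve. The fixed 2% Weber-fraction JND is a rule-of-thumb assumption, not the paper's measured threshold.

```python
import numpy as np

def step_weber_fractions(n_levels, gamma=2.2):
    # Relative luminance of each digital level under a gamma-2.2 display.
    lum = (np.arange(n_levels) / (n_levels - 1)) ** gamma
    # Weber fraction of each step, relative to the darker of the two levels
    # (level 0 is skipped to avoid division by zero).
    return np.diff(lum)[1:] / lum[1:-1]

for n in (32, 128, 256):
    w = step_weber_fractions(n)
    print(f"{n:>3} levels: median step = {np.median(w):.4f}, "
          f"steps above a 2% JND: {np.mean(w > 0.02):.0%}")
```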
  • Kazutoshi FUJIKAWA, Atsushi OSA
    Article type: Article
    Session ID: HI2014-33/3DIT2014-6
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The L* value represents blackness and has served as an indicator of the high-grade sensation of black fabric for formal wear. However, other factors must also influence this sensation. In this study, we investigated the relationship between the apparent high-grade sensation and the gloss properties of the fabric. We measured the 2-D brightness distribution of draped fabric and, using image processing and principal component analysis, identified an important gloss property connected with the sensation. Results show that a combination of the L* value and this gloss property represents the sensation, suggesting that the high-grade sensation of black fabric can be quantified.
    Download PDF (4623K)
  • Mayu KAKEGAWA, Tetsuya NATORI, Masayuki KIKUCHI, Yuko MASAKURA, Tomoki ...
    Article type: Article
    Session ID: HI2014-34/3DIT2014-7
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This study attempted to examine brain activity during listening to music and during expressing impressions of the music by drawing, from various viewpoints. We analyzed brain activity measured with 16-channel NIRS during listening and drawing, using Fisher's linear discriminant method in the analysis (a minimal sketch of this step appears below). As a result, we conclude that for subjects who had experienced Kansei training, channel 4 discriminates listening to music from drawing. In future work, we should employ more subjects in our experiments.
    Download PDF (559K)
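    A minimal sketch of the analysis step, assuming per-trial NIRS features (e.g., mean oxy-Hb per channel) have already been extracted; scikit-learn's LinearDiscriminantAnalysis implements Fisher's linear discriminant. The data shapes and the dummy data are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 16))                # 40 trials x 16 NIRS channels (dummy)
y = np.repeat(["listening", "drawing"], 20)  # condition labels

lda = LinearDiscriminantAnalysis()
print(f"cross-validated accuracy: {cross_val_score(lda, X, y, cv=5).mean():.2f}")

# Per-channel discriminability can be probed by fitting on one channel at a
# time, e.g. X[:, 3:4] for channel 4, and comparing the accuracies.
```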
  • Ryo NAGAYAMA, Masayuki KIKUCHI
    Article type: Article
    Session ID: HI2014-35/3DIT2014-8
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Brain-computer interface (BCI) technology, which reads human thought by connecting the brain to a computer, has been attracting attention. Previous studies suffered from problems such as requiring subjects to perform experiments inside large equipment or to spend a long time having many electrodes attached, imposing a burden on them. To address these problems, Maruyama and Kikuchi attempted to construct a simplified BCI with 2-channel NIRS using two kinds of stimuli, which they classified. This study added two categories of stimuli to that of Maruyama and Kikuchi, for four categories in total, and investigated whether four-category classification is possible from 2-channel NIRS data. In our experiment, arrows pointing in four directions were presented, and the resulting brain-activity data were classified by a random forest to obtain the classification rate (a sketch of this step appears below). As a result, a classification rate of over 80% on average was obtained, suggesting that the four classes can be identified even from 2-channel NIRS data.
    Download PDF (656K)
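    A sketch of the four-class decoding step on dummy data; only the classifier (a random forest) matches the abstract, while the feature layout and trial counts are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 20))  # 80 trials; 2 channels x 10 time samples each
y = np.tile(["up", "down", "left", "right"], 20)  # arrow-direction labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```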
  • Noriaki FUJISHIMA, Kiyoshi HOSHINO
    Article type: Article
    Session ID: HI2014-36/3DIT2014-9
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this study, we propose a new fingernail-area extraction method that uses color characteristics. First, a color difference is computed between the color of a pixel of interest and the average color of the local area around it; pixels with high difference values are removed, and candidate areas are then extracted by thresholding (a sketch of these steps appears below). We investigated the relationship between wrist rotation angle and the success probability of nail detection, and confirmed that the proposed method is effective when the palm-side area is captured.
    Download PDF (8720K)
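    A sketch of the extraction steps as described: per-pixel color difference against the local average, removal of high-difference pixels, then thresholding. The CIELAB color difference and all window/threshold values are assumptions; the paper's exact color space and thresholds may differ.

```python
import cv2
import numpy as np

def nail_candidates(bgr, win=15, diff_max=12.0, bright_min=160):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    # Color difference between each pixel and the average of its local area.
    local_avg = cv2.blur(lab, (win, win))
    diff = np.linalg.norm(lab - local_avg, axis=2)
    # Remove high-difference pixels, then threshold on brightness to keep
    # locally uniform, bright (nail-like) regions.
    mask = (diff < diff_max) & (lab[:, :, 0] > bright_min)
    return mask.astype(np.uint8) * 255
```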
  • Takahiro YOSHIOKA, Satoshi NAKASHIMA, Junichi ODAGIRI, Hideki TOMIMORI ...
    Article type: Article
    Session ID: HI2014-37/3DIT2014-10
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Eye movements are known to be deeply related to a person's emotional state. Gaze tracking is a contactless method to detect these, commonly using the spatial relationship between the pupil and corneal reflection. However, it does not perform robustly when the user is wearing eyeglasses since light reflected from the surroundings changes the appearance of the pupil. In this research we propose and evaluate a pupil detection method that can perform robustly even in the presence of such reflection.
    Download PDF (3823K)
  • Eiji WATANABE, Takashi OZEKI, Takeshi KOHAMA
    Article type: Article
    Session ID: HI2014-38/3DIT2014-11
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In interviews of various kinds, the interviewer's evaluation is affected not only by the applicant's replies but also by the applicant's behavior. Moreover, the interviewer's evaluation is conveyed to the applicant, and an interaction arises between the behaviors of the interviewer and the applicant. In this report, we discuss the relation between the interviewer's evaluation and the applicant's behavior.
    Download PDF (3499K)
  • Daisuke NOGUCHI, Takeshi KOHAMA, Sho KIKKAWA, Hisashi YOSHIDA
    Article type: Article
    Session ID: HI2014-39/3DIT2014-12
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this study is to objectively evaluate the effect of visual attention on the dynamics of drift eye movements. We propose a signal-processing method that separates drift eye movements and microsaccades from fixational eye movements in experiments that control attentional concentration on the foveal region, and we compared the properties of the extracted drifts across attentional-intensity conditions. In our method, microsaccades are detected with an order-statistic low-pass differentiation filter; the start and end points of each microsaccade are identified by discrete pulse transform analysis, and the microsaccades are then removed from the data. The resulting gaps are filled using an autoregressive model to extract pure drift eye movements (a simplified sketch appears below). After extraction, we analyzed the frequency components and mean-square displacements to examine the fluctuation properties of the drifts. The results show that drift eye movements were not influenced by foveal attention allocation, but rather were affected by diffusing attention over the peripheral visual field.
    Download PDF (674K)
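    A simplified sketch of the drift-extraction idea: detect microsaccades from eye-position velocity, excise them, and bridge the gaps with an autoregressive (AR) prediction. The paper's order-statistic differentiation filter and discrete pulse transform are replaced here by a plain velocity threshold; the thresholds and the AR order are assumptions.

```python
import numpy as np

def extract_drift(pos, fs=500.0, vel_thresh=10.0, ar_order=8):
    pos = np.asarray(pos, dtype=float)
    vel = np.gradient(pos) * fs              # crude velocity (units/s)
    saccadic = np.abs(vel) > vel_thresh      # candidate microsaccade samples
    drift = pos.copy()

    # Fit AR coefficients on the non-saccadic samples by least squares.
    clean = pos[~saccadic]
    rows = np.array([clean[i:i + ar_order]
                     for i in range(len(clean) - ar_order)])
    coef, *_ = np.linalg.lstsq(rows, clean[ar_order:], rcond=None)

    # Replace saccadic samples with one-step-ahead AR predictions.
    for t in np.where(saccadic)[0]:
        if t >= ar_order:
            drift[t] = drift[t - ar_order:t] @ coef
    return drift
```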
  • Kazuho FUKUDA, Ai NUMATA, Keiji UCHIKAWA
    Article type: Article
    Session ID: HI2014-40/3DIT2014-13
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The maximum luminance of an object color is physically restricted by the illuminant. To understand whether human color vision behaves appropriately with respect to this physical limitation when judging color appearance mode, we measured, in a psychophysical experiment, the relationship between the luminosity threshold for surface-color-mode perception and the luminance-chromaticity distribution of surrounding colors, and compared the result with the calculated physical limit of object colors under the given illuminant conditions. The results showed a significant correlation between them, suggesting that human color vision has acquired a mechanism appropriate to the physical limitation of object colors.
    Download PDF (3914K)
  • Teruki Konishi, Takeshi KOHAMA
    Article type: Article
    Session ID: HI2014-41/3DIT2014-14
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Previous saliency-based visual attention models have dealt with the mechanism underlying attentional shifts in the early visual system. Psychophysical data indicate that singletons defined by depth pop out easily, implying that the saliency of depth information is involved in visual information processing. However, depth information has not been considered in the computation of saliency maps. In this study, we extended a saliency-based model to use depth information by applying the disparity energy computation to the calculation of saliency maps for stereo images (a compact sketch of this computation appears below). Simulation results indicate that the proposed model can compute disparity distribution maps that discriminate objects popping out in depth. Furthermore, for a stereo image containing objects at multiple depths, the proposed model searches each object by referring to regions of shared depth and shifts the gaze point from object to object on the basis of depth saliency.
    Download PDF (8789K)
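    A compact sketch of the disparity-energy computation referenced above: quadrature Gabor filters applied to the left image and to a position-shifted right image, with their outputs combined into binocular energy. The single scale and all filter parameters are assumptions; the full model pools over many scales and orientations.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(size=21, freq=0.15, sigma=4.0):
    # Quadrature (even/odd) Gabor filters tuned to horizontal position.
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return (env * np.cos(2 * np.pi * freq * xx),
            env * np.sin(2 * np.pi * freq * xx))

def disparity_energy(left, right, disparity):
    geven, godd = gabor_pair()
    # Position-shift model: shift the right image by the candidate disparity.
    right_s = np.roll(right, disparity, axis=1)
    le, lo = fftconvolve(left, geven, "same"), fftconvolve(left, godd, "same")
    re, ro = fftconvolve(right_s, geven, "same"), fftconvolve(right_s, godd, "same")
    # Binocular energy peaks where the local disparity matches `disparity`.
    return (le + re) ** 2 + (lo + ro) ** 2
```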
  • Yuzuru MORIMOTO, Takeshi KOHAMA
    Article type: Article
    Session ID: HI2014-42/3DIT2014-15
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The purpose of this study is to construct a mathematical model that predicts salient regions in high-speed egocentric-motion movies by reproducing the receptive-field properties of area MST neurons. MST neurons detect global motions such as expansion, contraction, and rotation. Our proposed model is based on a previous model that reproduces the center-surround receptive-field properties of area MT neurons. The model's MST neurons integrate MT responses by convolving them with spatial weight functions whose central portions are biased toward particular directions (a schematic sketch appears below). Simulation results suggest that the modeled MST neurons can detect expansive motion in dynamic random-dot patterns and behave like the effective field of view. Results for movies taken from a moving vehicle indicate that the proposed model detects more salient objects around the vanishing point than the previous model, and the salient regions it computes closely match our subjective impression of the input movies.
    Download PDF (15164K)
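    A schematic sketch of the MST stage: local-motion (MT) responses, treated as a vector field, are integrated against a spatial template whose preferred direction varies with position, so that an expansion template responds to radial outflow. The grid size and the restriction to expansion are simplifying assumptions.

```python
import numpy as np

def expansion_template(size=32):
    # Unit vectors pointing radially outward from the patch center.
    x = np.arange(size) - size / 2 + 0.5
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    return xx / r, yy / r

def mst_expansion_response(mt_vx, mt_vy):
    # mt_vx, mt_vy: local-motion (MT) responses as a vector field.
    tx, ty = expansion_template(mt_vx.shape[0])
    # Inner product of the flow field with the radial template: large for
    # expanding motion centered in the patch (e.g., forward egomotion).
    return float(np.sum(mt_vx * tx + mt_vy * ty))
```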
  • Takahiko FUKINUKI
    Article type: Article
    Session ID: HI2014-43/3DIT2014-16
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In movies and TV, a moving image is represented by sampling it temporally at, e.g., 60 frames/s, which is called "sampled motion." Asked why an image represented by sampled motion appears to move smoothly, most people answer "because of after-images," and some psychologists answer "because of apparent motion." The author has long explained that the original image is recovered (demodulated) from the temporally sampled image through the low-pass filtering of the visual system. The point of this paper is how to explain this in education: the author first refutes the after-image and apparent-motion explanations, and then explains the phenomenon by analogy with well-known one-dimensional audio sampling (a one-dimensional demonstration appears below). This discussion bears closely on the foundations of psychology and is valuable from that standpoint.
    Download PDF (5456K)
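    A one-dimensional demonstration of the author's argument: a signal sampled at 60 samples/s is recovered by ideal low-pass (sinc) reconstruction, exactly as in audio sampling, with no after-image mechanism needed. The signal frequency and durations are arbitrary choices for the demonstration.

```python
import numpy as np

fs, f0, dur = 60.0, 4.0, 1.0             # frame rate, motion frequency (Hz), s
t = np.arange(0, dur, 1 / fs)
sampled = np.sin(2 * np.pi * f0 * t)     # "sampled motion" at 60 frames/s

# Ideal low-pass (sinc) reconstruction on a dense time axis.
t_dense = np.linspace(0, dur, 2000)
recon = np.array([np.sum(sampled * np.sinc(fs * (td - t))) for td in t_dense])

# Away from the signal edges, the smooth motion is recovered almost exactly.
inner = (t_dense > 0.1) & (t_dense < dur - 0.1)
err = np.max(np.abs(recon - np.sin(2 * np.pi * f0 * t_dense))[inner])
print(f"max interior reconstruction error: {err:.4f}")
```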
  • Article type: Appendix
    Pages App1-
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (82K)
  • Article type: Appendix
    Pages App2-
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (82K)
  • Article type: Appendix
    Pages App3-
    Published: February 25, 2014
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (82K)