-
Masahiro MIYATA, Koichi OKADA, Yasushi TAUCHI, Yoshiki MIZUKAMI
Session ID: HI2017-53,3DIT2017-9
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
In this research, to support teachers' class management, we develop a new class support system with three functions: 1) an attendance management function that keeps attendance records consistent with each student's seating status throughout the class by allowing attendance registration to be compared visually with the seating chart; 2) a student nomination function that raises students' interest in the class by attracting their attention; and 3) a seating history function that visualizes the distribution of students' past seating positions.
-
Kohei UEHARA, Yasushi TAUCHI, Yoshiki MIZUKAMI
Session ID: HI2017-54
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
In this research, we propose a virtual try-on system whose clothes models are generated from photographed clothes images. After the type of clothes is designated, grid points on the clothes region are acquired as the vertex coordinates constituting the front mesh of the clothes. Delaunay triangulation is applied to the grid-point coordinates, and the back mesh is generated from the front mesh. The grid points on the contours of the front and back meshes are connected, except for those on the wristbands, the bottom of the neck, and similar openings. Finally, the clothes model is obtained by generating the texture on the connected mesh from the clothes image. In the proposed virtual try-on system, the clothes models are rendered with coordinates mapped onto the user's posture obtained by a depth camera.
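The front-to-back mesh construction described above can be sketched with off-the-shelf Delaunay triangulation. This is a minimal illustration, not the authors' implementation; the grid spacing, depth offset, and the `build_clothes_mesh` helper are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_clothes_mesh(grid_points, thickness=0.02):
    """Triangulate 2-D grid points sampled on the clothes region, then
    duplicate the front mesh as a back mesh offset in depth."""
    tri = Delaunay(grid_points)                        # front-mesh triangles
    front = np.column_stack([grid_points, np.zeros(len(grid_points))])
    back = front.copy()
    back[:, 2] -= thickness                            # push the back mesh behind the front
    vertices = np.vstack([front, back])
    # back faces reuse the front topology with shifted vertex indices
    faces = np.vstack([tri.simplices, tri.simplices + len(front)])
    return vertices, faces

# toy example: a 4x4 grid standing in for the sampled clothes region
pts = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
verts, faces = build_clothes_mesh(pts)
```

A real system would additionally stitch the contour vertices of the two meshes, leaving the wristbands and neckline open as the abstract describes.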
-
Yuma NAKAI, Takeshi KOHAMA, Shinichi UEDA, Kohta MATSUDA, Aya KADONO, ...
Session ID: HI2017-55
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
To quantitatively evaluate the influence of voice operation, we measured steering accuracy for a dynamic random pattern that abstracts car driving, with and without a verbal fluency task. We simultaneously measured eye movements during the task and evaluated the free-viewing gaze-point distributions and the characteristics of fixational eye movements. The results show that steering error increases when the verbal fluency task is performed and that the distribution of gaze points becomes locally restricted. Moreover, when gaze is maintained on a fixation crosshair, fixation instability increases, and the number and amplitude of microsaccades also increase under the verbal fluency task. These results suggest that the influence of verbal fluency tasks on steering accuracy may be due to the restriction of natural gaze behavior and the concentration of attention on the visual target.
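Microsaccade counts and amplitudes such as those reported above are commonly obtained with a velocity-threshold detector. A minimal sketch in the style of Engbert and Kliegl's method follows; the sampling rate, threshold multiplier, and the `detect_microsaccades` helper are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def detect_microsaccades(x, y, fs=1000.0, lam=6.0):
    """Velocity-threshold detection in the style of Engbert & Kliegl (2003):
    flag samples whose 2-D eye velocity exceeds lam times a robust,
    median-based estimate of the velocity spread."""
    vx = np.gradient(x) * fs                # horizontal velocity, deg/s
    vy = np.gradient(y) * fs                # vertical velocity, deg/s
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)   # robust std estimates
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    radius = (vx / (lam * sx))**2 + (vy / (lam * sy))**2
    return radius > 1.0                     # True where the threshold is exceeded
```

A full analysis would additionally merge consecutive flagged samples into events and read off their amplitudes.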
-
Mitsuhiro KODAMA, Takeshi KOHAMA
Session ID: HI2017-56
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
The purpose of this study is to construct a mathematical model that predicts salient regions in high-speed egocentric-motion movies. The proposed model takes into account spatial differentiation by the center-surround organization of receptive fields, the temporal response characteristics that produce visual adaptation, and motion-feature extraction by the higher-order motion detection mechanisms in the middle temporal area (MT) and the medial superior temporal area (MST). We ran simulations using high-speed egocentric-motion movies, filmed by a camera embedded in a driving vehicle, and compared the output with human scan-path data. The simulation results indicate that the proposed model detects more salient objects around the vanishing point than a conventional saliency-based model. The moving-NSS scores, an index of the prediction accuracy for the focal attention area, are significantly higher for our model than for conventional saliency estimation models. These results suggest that the proposed model can predict focal attention areas similar to human visual perception in high-speed self-motion images.
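An NSS-style score compares a model saliency map with human fixations: the map is z-scored, and its values are averaged at the fixation coordinates, so positive scores mean fixations land on above-average saliency. A minimal sketch follows; the `nss` helper and its toy inputs are illustrative, and the paper's moving-NSS variant over video frames is not reproduced here.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the saliency map and
    average its values at the human fixation coordinates."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)            # fixation pixel coordinates (row, col)
    return float(s[list(rows), list(cols)].mean())

salmap = np.zeros((10, 10))
salmap[5, 5] = 1.0                          # a single predicted hot spot
score_on_peak = nss(salmap, [(5, 5)])       # fixation lands on the prediction
score_off_peak = nss(salmap, [(0, 0)])      # fixation misses it
```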
-
Tomoya TOYOTA, Yasushi TAUCHI, Yoshiki MIZUKAMI
Session ID: HI2017-57
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
While advanced driver assistance systems have become popular in recent years, further improvement in the accuracy of detecting and recognizing cars with in-vehicle cameras is required. If the orientation of a car in an image can be estimated, the direction of the car's movement can be predicted, which is helpful for developing safer advanced driving support systems. In this study, we propose a method for estimating car orientation in images using convolutional neural networks. The proposed method employs image pre-filtering to sharpen the images, and uses both batch normalization and dropout to prevent vanishing gradients and over-fitting during training.
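Batch normalization and dropout, the two regularizers named above, can be sketched framework-free. The `batch_norm` and `dropout` helpers below are illustrative stand-ins, not the network architecture or settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_norm(x, eps=1e-5):
    """Normalize each feature to zero mean / unit variance over the batch
    axis, which keeps activations well-scaled and eases gradient flow."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def dropout(x, p=0.5, train=True):
    """Randomly zero a fraction p of activations during training;
    'inverted' scaling keeps the expected activation unchanged."""
    if not train:
        return x                            # identity at inference time
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```

In a deep-learning framework, trainable scale/shift parameters and running statistics would be added to `batch_norm`, but the normalization step itself is the one shown.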
-
Keita Minamiyama, Mie Sato, Miyoshi Ayama
Session ID: HI2017-58
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
Recently, high-quality images have attracted attention, and displays that can present high-quality images are on the market. Relations between high-quality images and the impressions they make on these displays have been actively studied. Our previous work examined the relation between impression changes in high-gradation image display and gamma characteristics. It showed that impressions improved for some high-gradation images; however, there were also images for which impressions did not improve, owing to the viewers' gaze behavior. In this study, we examine the relations among impression changes, gaze behavior, and image features in high-gradation image display.
-
Ryota OKAMOTO, Aoi KOBAYASHI, Takeshi KOHAMA, Hisashi YOSHIDA
Session ID: HI2017-59
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
In recent years, functional near-infrared spectroscopy (fNIRS), a technique that can measure brain function without restraint in an ordinary laboratory environment, has attracted attention. However, skin blood-flow fluctuations caused by various factors, which are easily mistaken for brain activity, contaminate fNIRS signals as large artifacts. When measuring brain activity with fNIRS, the experimental design and analysis methods therefore require careful consideration. In this study, to examine experimental conditions and analysis procedures suitable for brain-function measurement by fNIRS, event-related fNIRS measurement was performed while subjects observed blinking checkered flag patterns. Brain-function components were extracted from the measured fNIRS signals using a blood-flow dynamics model, and a model-based analysis using a general linear model was applied. The results suggest that activation of the primary and secondary visual cortices is detected in synchronization with the presentation of the visual stimulus, indicating that the proposed method is effective for measuring brain function with fNIRS.
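A general linear model analysis of the kind mentioned above regresses the measured time series onto a stimulus time course convolved with a hemodynamic response. This sketch uses a simple gamma-shaped response and a hypothetical `glm_activation` helper as assumptions; it is not the authors' blood-flow dynamics model.

```python
import numpy as np

def glm_activation(signal, onsets, fs=10.0, dur=10.0):
    """Fit a general linear model: regress the measured time series onto a
    stimulus boxcar convolved with a gamma-shaped hemodynamic response,
    and return the estimated activation weight (beta)."""
    n = len(signal)
    t = np.arange(n) / fs
    box = np.zeros(n)
    for onset in onsets:                    # mark the stimulus-on periods
        box[(t >= onset) & (t < onset + dur)] = 1.0
    th = np.arange(0.0, 20.0, 1.0 / fs)
    hrf = th**5 * np.exp(-th)               # illustrative gamma-shaped response
    hrf /= hrf.sum()
    regressor = np.convolve(box, hrf)[:n]   # predicted hemodynamic time course
    X = np.column_stack([regressor, np.ones(n)])   # regressor + intercept
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return beta[0]
```

Event-related analyses typically add further nuisance regressors (e.g. for skin blood flow) to the design matrix in the same way.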
-
Wakiko MAEMURA, Ichiro KURIKI, Kazumichi MATSUMIYA, Satoshi SHIOIRI
Session ID: HI2017-60
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
Brain activity in response to color stimuli was measured by fMRI. The colors were either the cardinal hues of cone-opponent color space or unique hues. Cardinal hues are defined by the physiological properties of cone-opponent cells and are known to be the fundamental hues at early stages of the visual pathway. Unique hues are known to be the fundamental hues of color appearance. The response patterns of visual areas V1/V4 were compared. Correlations of brain activity patterns across trials were significantly higher for unique hues when compared between runs with a letter (two-back) task and a color (identification) task. The result implies that unique hues are better represented in visual areas when subjects pay attention to identifying the stimulus hue.
-
Weijing REN, Ichiro KURIKI, Kazumichi MATSUMIYA, Satoshi SHIOIRI
Session ID: HI2017-61
Published: 2021
Released on J-STAGE: May 26, 2021
CONFERENCE PROCEEDINGS
To investigate the interaction of color and luminance motion signals in the human cortex, we examined the motion aftereffect using the functional MRI (fMRI) technique. Direction selectivity of the responses was investigated by comparing BOLD responses to test stimuli moving in either the same or the opposite direction as the adapting stimulus, defined by either color or luminance. After adaptation to isoluminant color motion, responses increased for the adapted direction in areas V1, V3, and V4. Residual luminance and the apparently slower speed of color motion were examined as possible factors, but they did not explain the higher response to luminance stimuli moving in the same direction as the color adapting stimuli.