-
Junpei YAMAGUCHI, Keisuke IRIKURA, Mayumi NAKANO, Yuta ITAHASHI, Haruo ...
Article type: Article
Session ID: 10-1
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A lecture captioning system has been developed that converts teachers' voices into captions in real time using speech recognition technology and transmits them over a wireless LAN to hearing-impaired students. This system is an effective means of securing information access for hearing-impaired students, as it does not require reciters, who need special training.
-
Toshiyuki MASUDA, Norihumi WATAI, Sumie SAITO, Eiichi MIYASAKA
Article type: Article
Session ID: 10-2
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Perceptual effects of visual cues on auditory images were investigated using eight loudspeakers with lamps set up vertically around them. The results show that the auditory image produced by a loudspeaker is influenced by a visual stimulus when the stimulus is positioned between the points just above and just below the loudspeaker.
-
Jumpei FUJIKAWA, Takeshi KOHAMA, Yoshimi Kamiyama
Article type: Article
Session ID: 10-3
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this study, we propose a mathematical model of the saccadic system based on neuronal activity in the superior colliculus and basal ganglia. Fixation neurons and build-up neurons in the superior colliculus act antagonistically, and build-up neurons start to discharge sharply after the fixation neurons' activity terminates. Our model can reproduce these properties of the neural responses.
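The antagonism described here can be sketched with a toy rate model of two mutually inhibitory units; the time constants, input drives, and inhibition weight below are invented for illustration and are not the authors' parameters.

```python
import numpy as np

def simulate(t_go=0.5, dt=0.001, T=1.0, tau=0.02, w_inh=3.0):
    """Two mutually inhibitory rate units standing in for fixation (F)
    and build-up (B) neurons; the fixation drive is removed and a
    target drive appears at the 'go' time t_go (all values assumed)."""
    n = int(T / dt)
    F = np.zeros(n); B = np.zeros(n)
    F[0] = 1.0
    for i in range(1, n):
        t = i * dt
        drive_F = 1.0 if t < t_go else 0.0   # fixation drive until "go"
        drive_B = 0.0 if t < t_go else 1.5   # saccade-target drive after "go"
        # Rectified-linear rate dynamics with mutual inhibition
        dF = (-F[i - 1] + max(drive_F - w_inh * B[i - 1], 0.0)) / tau
        dB = (-B[i - 1] + max(drive_B - w_inh * F[i - 1], 0.0)) / tau
        F[i] = F[i - 1] + dt * dF
        B[i] = B[i - 1] + dt * dB
    return F, B

F, B = simulate()
# While F is active it suppresses B; once fixation activity terminates,
# B discharges sharply, qualitatively matching the described behavior.
```

This toy reproduces only the qualitative property in the abstract (build-up activity starts after fixation activity ends), not the full collicular-basal ganglia circuit.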
-
Haruka TAKENAKA, Takeshi KOHAMA, Naohiro TODA
Article type: Article
Session ID: 10-4
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We examined, using statistical analysis, whether visual attention influences drift eye movements. The results show that frequency components of about 3-4 Hz decreased significantly when attention was concentrated on the foveal area. This suggests that concentrating attention on the gazed object might enhance the stability of fixation.
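The kind of band-power comparison this analysis implies can be illustrated on a synthetic signal; the sampling rate, band edges, and signal composition below are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 8.0, 1.0 / fs)

def band_power(x, f_lo=3.0, f_hi=4.0):
    """Summed FFT power of x in the [f_lo, f_hi] Hz band."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[sel].sum()

# Synthetic drift traces: broadband noise plus a 3.5 Hz component whose
# amplitude is reduced in the "attention concentrated" condition.
noise = rng.standard_normal(t.size)
relaxed = noise + 0.8 * np.sin(2 * np.pi * 3.5 * t)
attending = noise + 0.2 * np.sin(2 * np.pi * 3.5 * t)

p_relaxed, p_attending = band_power(relaxed), band_power(attending)
```

On this synthetic data the 3-4 Hz band power is lower in the attending condition, mirroring the reported effect.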
-
Ichiro UMATA, Noriko SUZUKI, Hiroshi ANDO, Naomi INOUE
Article type: Article
Session ID: 10-5
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A preliminary study was conducted to examine the effect of 3D images in remote communication. The results suggest that 3D images help elicit lively user interaction involving non-verbal behaviors in remote communication.
-
Kyoko ARIYASU, Mikihiro UENO, Yasuaki KANATUGU
Article type: Article
Session ID: 10-6
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The "TV program reconstruction system" is an AdapTV application that customizes a TV program according to the user's level. Here we report an evaluation experiment on the TV program reconstruction system.
-
Akiko KIMURA, Eiichi MIYASAKA
Article type: Article
Session ID: 10-7
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Ways of inserting TV advertisements (CMs) into news programs were investigated in Japan, the US, and the UK. The results show that in the US and the UK, silent periods were commonly inserted between the main program and the related CMs, whereas in Japan programs were switched suddenly to CMs.
-
Yiju Wu, Shigeru SHIMAMOTO
Article type: Article
Session ID: 10-8
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper explores the feasibility of embedding a QR Code in an audio watermark and linking the hidden information simultaneously, based on psychological and physiological models. The research attempted to embed more information in a digital audio watermark within the same package, thereby saving costs. We also implemented the approach in Visual Basic and observed that different types of music indirectly affect the identifiability of the audio watermark. The usability of the entire software was evaluated according to the Step 2001 Rule.
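As a toy illustration of hiding data in audio (not the paper's psychoacoustically informed method, and in Python rather than Visual Basic), a bit string such as serialized QR Code data can be written into and read back from the least significant bits of 16-bit samples:

```python
import numpy as np

def embed(samples, bits):
    """Overwrite the LSB of the first len(bits) samples with the payload."""
    out = samples.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | np.array(bits)
    return out

def extract(samples, n_bits):
    """Read the payload back from the LSBs."""
    return (samples[:n_bits] & 1).tolist()

# A short synthetic 16-bit "audio" signal and an arbitrary payload
audio = (np.sin(np.linspace(0, 40, 2000)) * 20000).astype(np.int16)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = extract(embed(audio, payload), len(payload))
```

LSB embedding is the simplest watermarking scheme and is fragile to compression; the paper's psychological/physiological modeling addresses exactly the robustness and audibility issues this toy ignores.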
-
Article type: Appendix
Pages: App11-
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Kenji OOUCHI, Nobuyuki UMEZU
Article type: Article
Session ID: 11-1
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A TableTop Interface (TTI) is presented to realize more intuitive operation. Users interact with the system through gestures over images projected on the table, rather than with an ordinary keyboard or mouse. Multiple cameras are used for volume data extraction and depth calculation.
-
Keiichi KAWABE, Masaki ISHII, Makoto NISHIDA
Article type: Article
Session ID: 11-2
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The recognition of facial expressions attracts interest from the viewpoint of man-machine communication. This study focuses on the local characteristics of expressions and proposes a method for extracting intermediate expressions.
-
Kou SAI, Makoto NISHIDA, Yoichi KAGEYAMA, Kenta OOTAKI
Article type: Article
Session ID: 11-3
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This study is concerned with the extraction of human regions from images in order to develop an image composition method. To extract human regions from color images, it is necessary to estimate human positions in advance. We therefore propose a method for estimating human positions.
-
Ryuuichi SAGA, Makoto NISHIDA, Yoshiyuki SATO
Article type: Article
Session ID: 11-4
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Unlike lip motions, speech input is strongly affected by environmental sounds in practical use. We therefore conducted a basic study of a noncontact command-input interface using lip-motion features. The results suggest that the proposed method is good at detecting the conditions of command input.
-
Naohito NAKANE, Yuki KONDO, Yoshinobu EBISAWA
Article type: Article
Session ID: 11-5
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have developed an eye gaze tracker with easy calibration. In the tracker, stereo cameras detect the 3D pupil position, and a narrow-view camera (NVC) turns toward the pupil. The eye gaze is basically determined from the relative positions of the corneal reflection and the pupil in the NVC image. In this study, to improve practicality, the NVC function is incorporated into the stereo cameras. The system can detect the lines of sight of both eyes simultaneously.
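The underlying pupil-corneal-reflection principle can be sketched in a few lines; the linear mapping below, and all gain, offset, and coordinate values, are invented stand-ins for the system's actual calibration.

```python
import numpy as np

def gaze_point(pupil, reflection, gain, offset):
    """Map the pupil-minus-corneal-reflection image vector to a screen
    point via a linear calibration (a common simplification)."""
    v = np.asarray(pupil) - np.asarray(reflection)  # pupil-CR vector (px)
    return gain * v + offset

# Hypothetical calibration: screen pixels per image pixel, and the
# screen point corresponding to a zero pupil-CR vector.
gain = np.array([120.0, 110.0])
offset = np.array([512.0, 384.0])
p = gaze_point(pupil=(10.0, 6.0), reflection=(8.0, 5.0),
               gain=gain, offset=offset)
# v = (2, 1) maps to screen point (752, 494)
```

Because the pupil-CR vector is largely invariant to small head translations, this mapping is what lets such trackers tolerate head movement with only light calibration.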
-
Satoshi HANDA, Aya NAKASHIMA, Yoshinobu EBISAWA
Article type: Article
Session ID: 11-6
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Some severely disabled people can move only their eyes. In this study, as a communication tool for such people, a head-mounted display that can detect the eye gaze point on the display screen was developed. The display has the advantage that the precision of gaze point detection is maintained even when the relative position between the eye and the finder changes. The experimental results showed the expected precision.
-
Hiroshi FUCHI, Aya NAKASHIMA, Tomoki HAYASHI, Yoshinobu EBISAWA
Article type: Article
Session ID: 11-7
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Development of a device to prevent drowsy driving is desired. Previous reports indicate that the pupil diameter decreases when subjects become sleepy. In this study, front-view video recorded from a moving car was shown to subjects, and variations in the area and the horizontal and vertical diameters of the pupils were measured. Blink frequency and the eye-closure time ratio, estimated from pupil detection, were also measured. The results showed that, among these indexes, the pupil area reflects sleepiness the most.
-
Noriaki TAKADA, Seiki INOUE
Article type: Article
Session ID: 11-8
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We are developing an easy-to-use communication system using simple and inexpensive sensors. This paper describes Japanese finger character recognition.
-
Yohei HAGA, Aya NAKASHIMA, Masanari IWATA, Yoshinobu EBISAWA
Article type: Article
Session ID: 11-9
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A device for monitoring the face pose is useful for preventing drivers from looking aside, among other applications. In our former study, the face pose was determined using the 3D positions of the pupils and nostrils detected by stereo cameras. This paper shows that a single-camera system can determine the face angle precisely by using the distances between the pupils and nostrils, measured beforehand, as constraints.
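A greatly simplified 2D version of this known-distance constraint can illustrate the idea (all values below are invented): if the true inter-pupil distance is measured beforehand, a single camera can recover head yaw from the foreshortened distance observed in the image.

```python
import numpy as np

def estimate_yaw(observed_px, frontal_px):
    """Yaw angle (rad) from the apparent inter-pupil image distance,
    given the distance observed when facing the camera (calibrated):
    under foreshortening, observed = frontal * cos(yaw)."""
    ratio = np.clip(observed_px / frontal_px, -1.0, 1.0)
    return np.arccos(ratio)

frontal = 80.0  # inter-pupil image distance at 0 deg yaw (assumed)
# Simulate a face turned 30 deg and recover the angle
yaw = estimate_yaw(80.0 * np.cos(np.deg2rad(30.0)), frontal)
```

The actual system uses full 3D pupil and nostril geometry rather than this one-axis foreshortening model, but the role of the pre-measured distances as constraints is the same.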
-
Article type: Appendix
Pages: App12-
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Tomoaki TAKANO, Hironori YAMAUCHI
Article type: Article
Session ID: 12-1
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper presents a method for face recognition based on Gabor features and Linear Discriminant Analysis.
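A minimal sketch of the Gabor-feature stage can make the pipeline concrete; the filter-bank sizes, frequencies, and downsampling factor below are illustrative assumptions, and the LDA step that would follow is omitted.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Complex Gabor kernel: a Gaussian envelope times a complex
    sinusoid at spatial frequency `freq` and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

def gabor_features(img, freqs=(0.1, 0.2), n_orient=4):
    """Concatenate downsampled magnitude responses of a small filter
    bank, the kind of feature vector typically passed on to LDA."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient)
            # FFT-based convolution; keep magnitude, downsample by 8
            resp = np.abs(np.fft.ifft2(np.fft.fft2(img, img.shape) *
                                       np.fft.fft2(kern, img.shape)))
            feats.append(resp[::8, ::8].ravel())
    return np.concatenate(feats)

img = np.random.default_rng(1).random((64, 64))  # stand-in face image
v = gabor_features(img)
```

Each face image becomes one such vector, and Linear Discriminant Analysis then projects these vectors into a subspace that separates identities.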
-
Sho UMEDA, Hironori YAMAUCHI
Article type: Article
Session ID: 12-2
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a method for estimating the face angle from a facial image, for use in the face recognition stage.
-
Shougo TAKAMURA
Article type: Article
Session ID: 12-3
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Many studies address the detection of faces and eyes in images. One problem in eye detection is that the eyes cannot be detected correctly when the face is tilted. The Ring-Filter solves this problem.
-
Yasutaka KUSAO, Hironori YAMAUCHI
Article type: Article
Session ID: 12-4
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes a system for extracting the facial organs, namely the eyes, eyebrows, and mouth, using an active contour model called Snakes.
-
Manabu OKAWA, Minoru MARUYAMA
Article type: Article
Session ID: 12-5
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper presents strategies for off-line writer verification, focusing on normalization methods. The experimental results show that the normalization of the character image affects the correct verification rate.
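One simple form such normalization can take (the paper's specific methods are not given here, so this is an illustrative assumption) is cropping a character to its bounding box and resampling it onto a fixed grid before feature extraction:

```python
import numpy as np

def normalize(img, size=16):
    """Crop the character's nonzero bounding box and rescale it to a
    fixed size-by-size grid with nearest-neighbor resampling."""
    ys, xs = np.nonzero(img)
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    rows = (np.arange(size) * h / size).astype(int)
    cols = (np.arange(size) * w / size).astype(int)
    return crop[np.ix_(rows, cols)]

# A synthetic binary "character": a 20x10 block inside a 40x40 image
img = np.zeros((40, 40))
img[10:30, 15:25] = 1.0
norm = normalize(img)
```

After this step, characters written at different sizes and positions map to comparable fixed-size images, which is why the choice of normalization can shift the verification rate.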
-
Satoshi UENO, Kenji MATSUO, Masayuki HASHIMOTO, Atsushi KOIKE, Naoyuki ...
Article type: Article
Session ID: 12-6
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a new similar-image retrieval method for gastric biopsy images. In addition to the conventional features, we use color-histogram and DCT features. A subjective experiment shows that the proposed method improves precision compared with the conventional method.
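A hedged sketch of a histogram-plus-DCT descriptor follows; the grid sizes, bin counts, coefficient counts, and the use of plain Euclidean distance are all illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def dct2(block):
    """Unnormalized 2-D DCT-II via an explicit cosine basis matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return C @ block @ C.T

def descriptor(img, n_bins=8, n_coef=4):
    """Concatenate an intensity histogram with the low-frequency
    corner of the image's DCT as a retrieval descriptor."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0.0, 1.0),
                           density=True)
    coefs = dct2(img)[:n_coef, :n_coef].ravel()
    return np.concatenate([hist, coefs])

rng = np.random.default_rng(2)
query = rng.random((16, 16))                  # stand-in biopsy patch
near = np.clip(query + 0.01 * rng.standard_normal(query.shape), 0, 1)
far = rng.random((16, 16))                    # unrelated patch

d_near = np.linalg.norm(descriptor(query) - descriptor(near))
d_far = np.linalg.norm(descriptor(query) - descriptor(far))
```

Retrieval then ranks database images by this distance, so a visually similar patch should score closer to the query than an unrelated one.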
-
Kenji MATSUO, Masayuki HASHIMOTO, Atsushi KOIKE
Article type: Article
Session ID: 12-7
Published: November 27, 2007
Released on J-STAGE: June 05, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes new comparison measurements for face identification based on local binary patterns. Local binary patterns must be compared bit by bit, so the conventional Euclidean distance is unsuitable. Simulation results confirm that the proposed measurements achieve better face identification performance under various illumination conditions.
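The bit-by-bit idea can be sketched as follows (the exact proposed measurements are not given here; this shows only the baseline Hamming-style comparison that motivates them, with assumed parameters): an LBP code is a bit pattern, not a magnitude, so two codes are compared by counting differing bits rather than by Euclidean distance.

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP of a 3x3 patch: each bit records whether a neighbor
    is >= the center pixel, taken clockwise from the top-left."""
    c = patch[1, 1]
    neighbors = patch.ravel()[[0, 1, 2, 5, 8, 7, 6, 3]]
    return sum(int(n >= c) << i for i, n in enumerate(neighbors))

def lbp_image(img):
    """LBP code for every interior pixel of img."""
    h, w = img.shape
    return np.array([[lbp_code(img[i:i + 3, j:j + 3])
                      for j in range(w - 2)] for i in range(h - 2)])

def hamming_distance(codes_a, codes_b):
    """Total number of differing bits between two LBP code images."""
    return int(np.sum([bin(x).count("1")
                       for x in (codes_a ^ codes_b).ravel()]))

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = a.T
d = hamming_distance(lbp_image(a), lbp_image(b))
```

Because each bit encodes only a local intensity ordering, such comparisons are inherently tolerant of the illumination changes the paper targets.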