-
Pages: 11A-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Takumi MASUDA, Yoshitsugu MANABE, Noriko YATA
Session ID: 11A-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We have proposed an interior simulation system based on diminished reality (DR) and augmented reality (AR). However, the system produces an unnatural appearance because the shadow of a real object remains after DR. This paper proposes a method to reduce this unnaturalness by detecting the shadows of real objects and estimating the positions of the light sources.
-
Kazushige SHIMADA, Yoshitsugu MANABE, Noriko YATA
Session ID: 11A-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes an AR third-person view for VR training. In VR training that uses the user's hands, evaluation is difficult because an evaluator in the real world can watch the user's hands but cannot see the VR objects. We therefore try to visualize the VR objects for the evaluator using AR.
-
Junichi HAGINOYA, Yoshitsugu MANABE, Noriko YATA
Session ID: 11A-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
Various conference systems for connecting remote places have been proposed to date. However, most methods use CG avatars and project images on screens, which reduces the sense of reality. This study proposes an AR-based conference system and tries to improve that sense of reality.
-
Shunta OHSAKAMA, Yoshitsugu MANABE, Tomokazu USHITANI, Noriko YATA
Session ID: 11A-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a system for conducting spatial search experiments using VR. Experiments with this system may reveal which information is most important when people search a space. This study also helps examine the usefulness of VR in cognitive experiments.
-
Pages: 11B-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Hayato WATANABE, Masahiro KAWAKITA, Naoto OKAICHI, Hisayuki SASAKI ...
Session ID: 11B-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose an integral imaging system using a locally controllable point-light-source array. Resolution characteristics and the optimal point-light-source size were analyzed by numerical simulation and experiments. Additionally, we confirmed the system's applicability to enlarging the viewing angle by time division and to partially switching between 3D and 2D images.
-
Ayuki HAYASHISHITA, Hideki KAKEYA
Session ID: 11B-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose an autostereoscopic display with a slanted parallax barrier that can be shifted in half-subpixel units. With this method, the crosstalk-free viewing zone can be enlarged.
-
Hideki KAKEYA
Session ID: 11B-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose a super-multiview autostereoscopic display with full-HD resolution based on time-division multiplexing parallax barrier technology. Nine views are generated around each eye using sextuple time-division multiplexing, in which the three color channels correspond to different directional views.
-
Takaaki UENO, Yasuhiro TAKAKI
Session ID: 11B-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A near-eye display is a key device for virtual and augmented reality. The present study proposes a super multi-view near-eye display to solve the vergence-accommodation conflict that causes visual fatigue. A ferroelectric liquid crystal display and an LED array are combined to generate dense viewpoints two-dimensionally in a time-multiplexed manner.
-
Pages: 11C-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Simon CLIPPINGDALE, Takahiro MOCHIZUKI, Masanori SANO, Mutsumi SUG ...
Session ID: 11C-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A Kinect sensor was used to detect small changes in facial expression in experimental subjects watching test videos. Experimental results show that a detector based on a subset of the facial feature points tracked by the sensor performs better than one based on the “animation unit” (AU) facial deformation features that the sensor also produces.
-
Ryo Inoue, Tomio Goto, Satoshi Hirano
Session ID: 11C-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose an authenticity inspection method for brand items based on image matching, and confirm the effectiveness of using the structure components obtained by total variation (TV) regularization at the detection stage. Experimental results show that inspection accuracy is improved by removing background components through the combination of TV regularization and a sharpening filter.
-
Takahiro FUKUSHIMA
Session ID: 11C-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We conducted a basic survey of non-speech information (NSI) subtitles in the Japanese and English versions of a movie (GODZILLA, 2014). We found that the Japanese NSI subtitles contain many speaker identifications, 1.66 times more than the English ones, while the English subtitles contain a good amount of other NSI types, such as sound effects, with relatively fewer speaker-identification subtitles.
-
Shoei SATO, Hideki SUMIYOSHI, Atsushi IMAI, Yuko YAMANOUCHI, Nobumasa ...
Session ID: 11C-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A classification of the modifiers that complement play-by-play TV sports commentaries is described. By comparing TV and radio commentary utterances, the types of information needed to enrich our automatic audio description are extracted.
-
Pages: 12A-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Misato TANIMIZU, Yoshitsugu MANABE, Noriko YATA
Session ID: 12A-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
Making a paper pattern for a stuffed toy is laborious because it requires repeatedly making trial pieces and modifying them. This paper proposes a method that detects the necessary parts from 3D data and designs a paper pattern based on the detected parts.
-
Motonobu KAWASHIMA, Mikio SHINYA
Session ID: 12A-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose a fast rendering technique for cartoon outlines in commercial anime production. With our method, a variety of toon lines can be generated in a single post-process.
-
Takayuki TSURUTA, Yoshitsugu MANABE, Noriko YATA
Session ID: 12A-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
In our previous study of an HDR spectral image measurement system, false contours appeared around pixels as a result of the HDR processing. We solve this problem and try to realize a complete pipeline from measurement to display of HDR spectral stereo images.
-
Yutaro Ohyama, Yoshiaki Shishikui
Session ID: 12A-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We verified the image-impression enhancement effect of introducing HDR and WCG through subjective evaluation tests. The results indicate that the two factors work additively to enhance the impression, and that HDR appears to be the dominant factor.
-
Maya TAKATSUKA, Noriko YATA, Yoshitsugu MANABE
Session ID: 12A-5
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A digitized picture lacks texture information such as luster, so its visual appearance cannot change with the viewpoint. This study proposes a method of viewpoint estimation using AR markers for representing the texture of a large picture, and attempts to reproduce that texture.
-
Pages: 12B-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Mai Shikita, Kohei Inoue, Kenji Hara, Kiichi Urahama
Session ID: 12B-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose a method for estimating spectral reflectance from observed tristimulus values. We use the fact that real spectral reflectances are continuous. We formulate the estimation problem as a minimization of the spectral reflectance difference between neighboring wavelengths and show its analytic solution. Experimental results show that the proposed method improves estimation accuracy compared with the previous method.
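The formulation in this abstract, choosing the smoothest reflectance consistent with the observed tristimulus values, has a closed-form solution via Lagrange multipliers. A minimal NumPy sketch under that reading, with an illustrative random matrix standing in for real colour-matching functions (the paper's actual matrices and regularisation details are not reproduced here):

```python
import numpy as np

n = 31  # e.g. 400-700 nm sampled at 10 nm steps (illustrative)
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(3, n)))  # stand-in for a colour-matching matrix

# First-difference operator: penalises jumps between neighbouring wavelengths.
D = np.diff(np.eye(n), axis=0)
M = D.T @ D + 1e-6 * np.eye(n)  # tiny ridge keeps M invertible

def estimate_reflectance(t):
    """Smoothest r subject to A @ r == t (equality-constrained least squares):
    minimise r.T @ M @ r, giving r = M^-1 A.T (A M^-1 A.T)^-1 t."""
    Minv_At = np.linalg.solve(M, A.T)      # M^-1 A.T, shape (n, 3)
    lam = np.linalg.solve(A @ Minv_At, t)  # Lagrange multipliers
    return Minv_At @ lam

r_true = 0.5 + 0.3 * np.sin(np.linspace(0, np.pi, n))
t = A @ r_true                   # simulated observation
r_est = estimate_reflectance(t)  # reproduces t exactly, and is smooth
```

The estimate matches the observation by construction; how close it comes to `r_true` depends on how smooth the true reflectance is, which is exactly the continuity prior the abstract exploits.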
-
Harumi KAWAMURA
Session ID: 12B-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A method is proposed for estimating the colors of scene illuminants from an image captured under several illuminants. The proposed method applies a gray-world-assumption-based estimate to each small region of the image, after judging whether that region satisfies the gray-world assumption.
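As a rough illustration only (not the authors' actual algorithm), the per-region idea can be sketched as: test each small region for enough colour diversity to plausibly satisfy the gray-world assumption, then take the region's mean as its illuminant estimate. The block size and diversity threshold below are invented parameters:

```python
import numpy as np

def grayworld_per_region(img, block=32, spread_thresh=0.05):
    """Estimate an illuminant chromaticity per block via the gray-world
    assumption, skipping blocks with too little colour diversity to
    satisfy it (a crude stand-in for the paper's judgement step)."""
    h, w, _ = img.shape
    estimates = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block].reshape(-1, 3)
            if patch.std(axis=0).mean() < spread_thresh:
                continue  # gray-world needs varied surface colours
            mean = patch.mean(axis=0)
            estimates[(y, x)] = mean / mean.sum()  # normalised chromaticity
    return estimates

rng = np.random.default_rng(1)
img = rng.random((64, 64, 3)) * np.array([1.0, 0.8, 0.6])  # reddish illuminant
ests = grayworld_per_region(img)  # one estimate per accepted block
```

On this synthetic image every block is colour-diverse, so each block's estimate recovers the reddish cast of the simulated illuminant.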
-
Suguru ARIGA, Yoshitsugu MANABE, Noriko YATA
Session ID: 12B-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
In recent years, an enormous amount of printed material has been digitized as images, but an image may differ in appearance from the original printed material because of geometric distortion arising from folds. This study attempts to correct the geometric distortion by detecting folds in the image taken by the camera.
-
Tsukasa HIRABAYASHI, Noriko YATA, Yoshitsugu MANABE
Session ID: 12B-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
The mechanism of color constancy has not yet been clarified. If color constancy can be realized on a computer, it should help elucidate that mechanism. This paper proposes a method of realizing color constancy by machine learning using physically based rendered images.
-
Yangping Shen, Yoshitsugu Manabe, Noriko Yata
Session ID: 12B-5
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A conventional SLAM system can only build an environment map without any semantic information, which is insufficient for some applications. This paper studies fusing object recognition into a SLAM system, aiming to build a semantic SLAM system.
-
Tomoki IMAZEKI, Yusuke KAMEDA, Tomokazu ISHIKAWA, Ichiro MATSUDA, ...
Session ID: 12B-6
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We previously proposed a method of estimating optical flow based on corresponding-region detection using bidirectional motions (forward and backward optical flows). In this paper, we improve the estimation accuracy by performing the detection even at upper levels of the multiresolution processing used for optical flow estimation.
-
Pages: 12C-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Kodai KIKUCHI, Kazunori MIYAKAWA, Toshio YASUE, Hiroshi SHIMAMOTO, ...
Session ID: 12C-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A gradation control method for Ag-electrodeposition-based optical devices, combining a pulsed standby voltage and a stepwise driving voltage, was developed. The proposed method maintained transmittance stably and achieved a marked improvement in response time.
-
Hirofumi MORIOKA, Shinsuke YOKOZAWA, Hideki MITSUMINE
Session ID: 12C-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose a method that extracts the trajectories of flying objects. The method is derived from our 2.5D Multi Motion technology and is effective at detecting the trajectories of several similar objects that move at close range or cross each other. We implemented this method and used it in live TV programs.
-
Masanobu FUJIWARA, Yugo MINOMO, Tsukuru KAI
Session ID: 12C-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
Last year, we reported real-time 4K video quality evaluation equipment that derives approximate subjective assessment results correlating highly with those of tests compliant with the DSCQS (Double Stimulus Continuous Quality Scale) method. This paper describes the effectiveness of our proposed algorithm for both SDR and HDR video at 4K resolution by comparing its output with actual subjective assessment test results.
-
Atsushi Arai, Daiichi Koide, Yoshiro Takiguchi
Session ID: 12C-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We developed a 120-Hz time code signal transmission format over an AES3 interface. We developed a time code reader/generator prototype for the format and confirmed the effectiveness of the time code transmission over AES3.
-
Takehiro SUGIMOTO, Tomoyasu KOMORI
Session ID: 12C-5
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A tone compensation method for downmixing 22.2 ch sound to audio formats with fewer channels is proposed. The proposed method compensates the energy spectrum of the downmixed sound to match that of the 22.2 ch sound, suppressing the spectral difference introduced by the downmixing coefficients.
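A toy, hypothetical sketch of the general idea in this abstract (band-wise energy matching between source and downmix; the paper's 22.2 ch downmix coefficients and actual band structure are not given here):

```python
import numpy as np

def compensate_downmix(src_channels, downmix, n_bands=8):
    """Scale each frequency band of `downmix` so its energy matches the
    summed band energy of the source channels (toy energy-spectrum match)."""
    S = np.fft.rfft(downmix)
    src_energy = sum(np.abs(np.fft.rfft(ch)) ** 2 for ch in src_channels)
    dm_energy = np.abs(S) ** 2
    for idx in np.array_split(np.arange(S.size), n_bands):
        gain = np.sqrt(src_energy[idx].sum() / max(dm_energy[idx].sum(), 1e-12))
        S[idx] *= gain  # real gain, so the signal stays real after irfft
    return np.fft.irfft(S, n=len(downmix))

t = np.arange(1024) / 1024.0
a, b = np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 40 * t)
out = compensate_downmix([a, b], 0.5 * (a + b))  # restore energy lost in the downmix
```

After compensation, each band of the downmix carries the same energy as the corresponding band of the two-channel source; a real system would of course use perceptually motivated bands rather than equal FFT-bin splits.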
-
Takeshi MISHIMA, Manon ICHIKI, Aiko HAGIWARA, Hitoshi ITO, Shoei ...
Session ID: 12C-6
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We developed a transcription system using speech recognition technologies. Its interface allows recognition errors to be corrected efficiently by aligning the recognition results with the video footage. To evaluate this system, we have started an experimental verification.
-
Pages: 13A-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Yoshitaka KIDANI, Kei KAWAMURA, Sei NAITO
Session ID: 13A-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
A modified strong intra smoothing (SIS) filter for non-square blocks is proposed. Subjective evaluation shows no video-quality improvement with or without the SIS filter in JEM, whereas pseudo-contour artifacts are confirmed in HEVC. This indicates that linear filtering is insufficient to reduce the artifacts.
-
Shota KASAI, Yusuke KAMEDA, Tomokazu ISHIKAWA, Ichiro MATSUDA, Sus ...
Session ID: 13A-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
Our previous method of interframe prediction for depth map coding performs motion compensation of objects in the depth direction based on apparent motion estimation in the spatial direction. In this paper, we introduce a scene flow estimation technique into the method to improve the accuracy of the depth compensation.
-
Yuta NAKAHARA, Toshiyasu MATSUSHIMA
Session ID: 13A-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
In this study, we describe generative models of images as stochastic models, viewing previous studies such as predictive coding from a different point of view. We then construct a lossless image compression scheme that is optimal under the Bayes criterion for these generative models, and consider its properties.
-
Tokinobu MITASAKI, Kazuya HAYASE, Atsushi SHIMIZU, Kazuyuki HIOKI
Session ID: 13A-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
We introduce software for detecting change points between images. It is useful for many services, such as map services and monitoring services.
-
Kazuya HAYASE, Tokinobu MITASAKI, Atsushi SHIMIZU, Kazuyuki HIOKI
Session ID: 13A-5
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a novel method for fast difference estimation between satellite images that share a common geographic area. The method estimates differences using the coding information of the target images without decoding all of their compressed data, and is shown to be faster than full decoding.
-
Yuya SHINOHARA, Kenji KANAI, Jiro KATTO
Session ID: 13A-6
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, to provide high-quality 360-degree DASH video delivery, we evaluate an efficient adaptive rate control method that considers the user's field of view. In this method, we treat the field of view as a region of interest (ROI) and apply an ROI-based encoding technique. Evaluation results show that this method achieves high PSNR values for field-of-view images.
-
Tatsuya NAGASHIMA, Kenji KANAI, Jiro KATTO
Session ID: 13A-7
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, to improve the Quality of Experience (QoE) of DASH video delivery, we propose QoE-based bandwidth allocation using OpenFlow. We formulate network resource allocation as an optimization problem and control the network resources with an OpenFlow Ryu controller. Evaluation results show that the proposed method achieves fair and high-quality DASH video delivery.
-
Pages: 13B-0-
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Hiroki Watabe, Hiroshi Watanabe
Session ID: 13B-1
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a method that combines YOLO and a CNN for pet breed identification. Experiments confirmed the validity of our method, which scores more than 10% better than a method using YOLO alone.
-
Kotaro KINOSHITA, Noriko YATA, Yoshitsugu MANABE
Session ID: 13B-2
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a method that uses a controller CNN to optimize the convolution layers of a recognition CNN. The optimization minimizes the number of convolution-layer parameters while keeping sufficient recognition accuracy.
-
Jianfeng XU, Kazuyuki TASAKA, Hiromasa YANAGIHARA
Session ID: 13B-3
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
Serving as the base of many advanced networks for human action recognition in videos, two-stream ConvNets have shown strong performance on a commonly used dataset, UCF101. This paper statistically analyzes the late fusion strategy in two-stream ConvNets from the frame level to the video level. We report a frame-level characteristic of the temporal stream, which we call a domino-like effect and which explains well why an effective temporal fusion is difficult to design. For the fusion of the two streams, it has been reported that a proper weight can improve performance substantially; here we provide a different explanation from the original report and propose a method for calculating an effective weight.
-
Qing YU, Masashi ANZAWA, Sosuke AMANO, Toshihiko YAMASAKI, Kiyohar ...
Session ID: 13B-4
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a method to improve food-image recognition accuracy by extracting better deep features with the latest deep learning networks, taking mealtime and distributed representations into account. We also evaluate these features with a personalized classifier.
-
Tomoyuki TAKANASHI, Keita HIRAI, Takahiko HORIUCHI
Session ID: 13B-5
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
Facial expression recognition technologies have been widely applied to machines and robots, and recognition systems using artificial intelligence (deep learning) have accordingly been proposed in recent years. However, facial expression recognition based on a carefully curated facial image database has not been well investigated. In this study, we built a database of good-quality facial images through subjective evaluation experiments on a large set of SNS facial images. The experimental results show that the small set of good data outperformed the big data in deep-learning-based facial expression recognition.
-
Tin Nilar Lin, Hiroshi Watanabe
Session ID: 13B-6
Published: 2017
Released on J-STAGE: October 22, 2022
CONFERENCE PROCEEDINGS
FREE ACCESS
Weather data estimation is one of the most important issues for disaster management. As the data size, observation period, and number of influencing factors grow, the model becomes complex. In this paper, we focus on selecting optimal features to reduce data volume, time, and computational complexity without sacrificing accuracy. Experimental results show that our approach selects valid features and achieves good accuracy with a smaller dataset and reduced computation time.