-
Pages
15B-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Kazunori MIYAKAWA, Kodai KIKUCHI, Toshio YASUE, Hiroshi SHIMAMOTO, Tak ...
Pages
15B-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
To achieve an active optical filter for television cameras, we created a prototype of a metal salt
precipitation-type electrochromic dimming element. We found that the transmittance could be reversibly varied in
response to an applied voltage, and that flat optical transparency could be obtained across the entire visible light
region.
-
Ataru KOIKE
Pages
15B-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This platform is built on storage virtualization technology, contributing to improved efficiency of the News & Sports workflow. This paper details the design and practical application of this platform.
-
Masaya IKEO, Kinji MATSUMURA, Hiroshi FUJISAWA, Masaru TAKECHI
Pages
15B-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Since the Hybridcast service started in September 2013, broadcasters and application developers have struggled to develop Hybridcast applications and to overcome the differences in execution performance among the browsers on individual TV receivers. To support that development, we have developed prototype Hybridcast benchmark test applications that evaluate Hybridcast HTML5 browser performance on each receiver. This paper describes our approach to designing the Hybridcast benchmark test applications.
-
Pages
15C-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Kotaro KINOSHITA, Noriko YATA, Yoshitsugu MANABE
Pages
15C-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
When a Convolutional Neural Network is obtained by transfer learning, the network may contain filters that raise the incorrect-recognition rate. This paper proposes a method to select the optimal filters and remove the unnecessary ones.
-
Kentaro HIRABAYASHI, Tomoaki NAKAMURA, Masahide KANEKO
Pages
15C-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper proposes a robust method for automatically extracting the mouth area from facial images by applying Convolutional Neural Network (CNN)-based classification to each pixel. This will help to draw expressive portraits automatically from a wide range of facial images, making it possible to expand the variety of communication.
-
Kazuto SUZUKI, Yoichi KAGEYAMA, Chikako ISHIZAWA, Makoto NISHIDA
Pages
15C-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Road signs provide important information to guide and regulate the behavior of drivers and pedestrians, making their journeys safer and more comfortable. In this study, we propose a method for recognizing speed-limit signs in night-scene videos using pattern matching.
-
Rei NARITA, Toru OGAWA, Yusuke MATSUI, Toshihiko YAMASAKI, Kiyoharu AI ...
Pages
15C-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We present a method to extract feature vectors from Manga images with a CNN for sketch-based Manga retrieval. We showed that fine-tuning AlexNet, pre-trained on ImageNet, with a large number of Manga face images from Manga109 improves the accuracy of Manga character retrieval.
-
Yuguan XING, Sosuke AMANO, Toshihiko YAMASAKI, Kiyoharu AIZAWA
Pages
15C-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This research proposes a method for generating distributed representations, or vectors, of food names in food recording systems that faithfully represent the characteristics of the dishes the names refer to. These vectors can be used to realize a flexible and expandable associative search system in such services.
-
Michihiro MIZUNO, Akito TAKEKI, Shota HORIGUCHI, Toshihiko YAMASAKI, K ...
Pages
15C-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We present a new method of novelty detection in image recognition based on a convolutional neural network (CNN). We use a sigmoid layer as the last layer of the CNN instead of a softmax layer. As a result, we discovered that a CNN with a sigmoid layer detects novelties better than one with a softmax layer on an easy dataset, but worse on a difficult dataset.
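The intuition behind the sigmoid-versus-softmax choice described above can be illustrated with a minimal sketch (not the authors' implementation; logit values and the 0.5 threshold are illustrative assumptions): softmax forces class scores to sum to 1, so even an out-of-distribution input looks "decided," whereas independent sigmoid scores can all stay low at once, which allows a simple max-score threshold to flag novelty.

```python
import math

def softmax(logits):
    # Softmax normalizes scores to sum to 1, so even an
    # out-of-distribution input yields a "confident" class.
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def is_novel(logits, threshold=0.5):
    # With independent sigmoid outputs, every class score can be
    # low simultaneously; flag the input as novel if none clears
    # the (assumed) threshold.
    return max(sigmoid(z) for z in logits) < threshold

# Weak, near-uniform logits for a 3-class net on an unfamiliar input.
logits = [-2.0, -1.8, -2.1]
print(softmax(logits))   # still sums to 1: looks "decided"
print(is_novel(logits))  # True: all sigmoid scores fall below 0.5
```

A familiar input with one strong logit (e.g. `[5.0, -2.0, -2.0]`) clears the threshold and is not flagged.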
-
Pages
22B-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Noriyoshi NAKAMURA, Yuta MURAKI, Koji NISHIO, Ken-ichi KOBORI
Pages
22B-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We propose a point cloud thinning method for extracting the cores of reinforcing steel rods from point cloud data. The method thins the point cloud using a density function field and extracts the core lines. An experimental result shows that the method is effective.
-
Takuya KATO, Yoshitsugu MANABE, Noriko YATA
Pages
22B-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
It is difficult and time-consuming for a person to coordinate the connections between CG motions, even with motion capture systems. This paper proposes a method for automatically generating the connections between CG motions to improve work efficiency and to generate more realistic CG motions.
-
Masanori OGAWA, Toshihiko YAMASAKI, Kiyoharu AIZAWA
Pages
22B-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper aims to create frame-sampled and stabilized omnidirectional video. We propose a method which
considers camera motion between omnidirectional frames. The results show that our method can select suitable frames for
stabilization in omnidirectional video.
-
Takafumi KATSUNUMA, Keita HIRAI, Takahiko HORIUCHI
Pages
22B-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Projection mapping is a useful technique for controlling material perception. In this study, we propose a method to control texture appearance by frequency-based projection mapping. We conducted subjective evaluation experiments to validate the feasibility of our method. The experimental results show our method is effective for texture appearance control.
-
Junya MATSUKI, Yoshitsugu MANABE, Noriko YATA, Kenji NOMURA, Hirohito ...
Pages
22B-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper proposes a three-dimensional measurement system for the foot using RGB-D cameras. We obtain three-dimensional point cloud data with four RGB-D cameras and align these data to produce three-dimensional measurement data. We aim to estimate the best-fit class between shoe and foot data using their three-dimensional features.
-
Yuta HOSHI, Yutaka KANEKO, Yoshihiko KAWAI, Michihiro UEHARA
Pages
22B-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We are considering a new style of watching television with a communication robot in the living room. In this paper, we propose a method for detecting the TV position by combining frame subtraction and edge detection on camera images from the robot. Experimental results in an environment simulating a living room show the effectiveness of the proposed method.
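The frame-subtraction half of the technique named above can be sketched in a few lines (a toy illustration under assumed inputs, not the authors' system; the edge-detection stage and real camera images are omitted): a TV screen keeps changing while the room stays static, so pixels that differ between consecutive frames localize the screen.

```python
def frame_difference(prev, curr, threshold=30):
    # Mark pixels whose intensity changed by more than `threshold`
    # between two grayscale frames (nested lists of 0-255 values).
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def bounding_box(mask):
    # Bounding box (top, left, bottom, right) of nonzero mask pixels,
    # a crude stand-in for the detected screen rectangle.
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return None
    return (min(ys), min(xs), max(ys), max(xs))

# Static 5x5 "room", except a 2x2 "screen" whose content changes.
prev = [[10] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
for y in (1, 2):
    for x in (2, 3):
        curr[y][x] = 200
mask = frame_difference(prev, curr)
print(bounding_box(mask))  # (1, 2, 2, 3)
```

In practice the changed region would then be refined with edge detection to recover the screen's rectangular outline.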
-
Pages
22C-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Katsuya Muro, Yoshiaki Shishikui, Yasuhito Sawahata
Pages
22C-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We conducted subjective evaluation experiments to clarify the appropriate presentation in terms of display size. The results show that viewers have stronger impressions of objects when they are shown on the display at their actual size.
-
Masaaki HARADA, Yoshiaki SHISHIKUI, Yasuhito SAWAHATA
Pages
22C-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We conducted subjective tests to clarify the relationship between the characteristics of display brightness and viewers' impressions. We found that the display brightness had an effect that enhances some impressions.
-
Ryo Harauchi, Yoshiaki Shishikui, Yasuhito Sawahata
Pages
22C-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We propose an image-processing model to enhance the impressions of images. We conducted a preliminary experiment to design the model parameters. We then confirmed that the model works through a subjective evaluation experiment on the processed images.
-
Kohei Odagiri, Yoshiaki Shishikui, Yasuhito Sawahata
Pages
22C-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We conducted subjective tests to clarify the relation between Wide Color Gamut (WCG) and impressions. We found that WCG images had an effect that enhances some impressions.
-
Hirohumi MORI, Akihito KIDO, Hatsuo YAMASAKI, Muneo YAMADA, Tomoaki NA ...
Pages
22C-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We have been evaluating the driving behavior of elderly drivers using TTC (Time To Collision). However, this evaluation differed from the inspection decision. In this study, we developed a new evaluation method that automatically obtains the same result as the inspection decision.
-
Yuki NAGATANI, Keiya KIMOTO, Hatsuo YAMASAKI, Muneo YAMADA, Tomoaki NA ...
Pages
22C-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
While operating a vehicle, divided attention is one of the important factors. Using a driving simulator, we study the estimation of a driver's divided attention.
-
Shintaro MIYABE, Takaya SUGIURA, Muneo YAMADA, Tomoaki NAKANO
Pages
22C-7-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Conventional writing-type executive function tests are so complex that elderly people cannot easily undergo them. We therefore built an automated evaluation system for executive function. This report introduces the system and verifies its usefulness.
-
Pages
23A-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Yusuke Ochi, Yoshihiro FUJITA
Pages
23A-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We conducted an experimental shooting of a soccer ball with an HDR (High Dynamic Range) imaging method to improve the accuracy of extracting moving objects for trajectory analysis. The results showed the effectiveness of the HDR imaging method.
-
Kazunori TAKAHASHI, Masahito HASHIZUME
Pages
23A-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper describes the development and practical use of a very small camera that can be attached to a sports referee to capture "the referee's eye" for live broadcasting. The camera has HD-SDI (1080/59.94i) output, weighs only 40 g, measures just 45 x 26 x 26 mm, and has been in practical use for live sports broadcasts since 2015. With this development, viewers can enjoy realistic pictures from the referee's viewpoint.
-
Takahiro YAMASAKI, Toshio YASUE, Kei OGURA, Kodai KIKUCHI, Takeshi KAJ ...
Pages
23A-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
An 8K slow-motion system is strongly desired for 8K sports content production. We have developed a 240-fps test shooting system using full-featured 8K equipment and conducted slow-motion shooting experiments with sports content.
-
Yohei MURAKAMI, Akira Uemura, Fuyuo TAKAGI
Pages
23A-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We developed an uncompressed 4K player and recorder with the goal of 4K broadcasting on BS channels in 2018. The device we developed is composed of four uncompressed HD players and recorders with SSDs. The cost of the storage media is less than 10 percent of that of existing devices because universal SSDs are used.
-
Pages
23B-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Hidetaka MIYAJIMA, Noriko YATA, Yoshitsugu MANABE
Pages
23B-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper proposes an AR sightseeing system suitable for a sightseeing spot, with proximity detection and markerless AR. The system uses a beacon for proximity detection of the user and AKAZE features for the markerless AR. Using these techniques realizes a highly precise system that does not spoil the scene.
-
Naoki SHINOZUKA, Yoshitsugu MANABE, Noriko YATA
Pages
23B-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
When displaying virtual objects through glass, such as a show window, the objects seem to exist in front of the glass because of reflections of the surroundings. This paper proposes extracting the reflections using a stereo camera and displaying the virtual objects as if behind the glass on the basis of those reflections.
-
Koki NIKAIDO, Yoshitsugu MANABE, Noriko YATA
Pages
23B-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
An invisible AR system using polarized light does not spoil an object's appearance. However, there are still issues in the invisibility and recognition accuracy of the AR marker. This paper proposes a solution to these issues and reports the improvement of an invisible AR system using polarized light.
-
Kazushige SHIMADA, Yoshitsugu MANABE, Noriko YATA, Takuzi SUZUKI
Pages
23B-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper proposes interactive AR exhibitions using projection mapping for exhibition objects that have textures and colors. The textures and colors of the exhibition objects often disturb projection mapping. We try to solve this problem with a projector-camera system.
-
Hirofumi MORIOKA, Yuko YAMANOUCHI, Hideki MITSUMINE
Pages
23B-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Our novel studio robot helps to improve TV program production featuring live performers and CG counterparts. It has a light sensor and a motion sensor enabling more integrated video compositing with lively interaction between the performers and CG characters.
-
Thiwat Rongsirigul, Yuta NAKASHIMA, Tomokazu SATO, Naokazu YOKOYA
Pages
23B-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We accelerate a view-dependent texture mapping (VDTM) process to generate novel view images for a stereoscopic HMD from captured images. By using the similarity between a stereoscopic pair of HMD images, we are able to improve the performance of the VDTM process.
-
Pages
24A-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Koki IKEDA, Yoshitsugu MANABE, Noriko YATA
Pages
24A-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
It is important for wheelchair users to be aware of surrounding objects and steps when moving around. However, it is difficult for them to obtain this information because their eye height is lower and their physical movements are more restricted than those of able-bodied people. This paper proposes a method of providing surrounding information using 3D information captured from above the wheelchair user.
-
Yousuke KAMATA, Hironobu SATO, Kiyohiko ABE
Pages
24A-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
If an input method for people with physical disabilities uses eye-blinks for input decisions, it requires classification of eye-blink types. In this research, we confirmed a significant difference in duration among the types of eye-blink. Based on this, we developed an automatic identification method for voluntary eye-blinks that calibrates itself for every examinee. We report the automatic identification method and an experiment with an input system using conscious eye-blinks.
-
Satoshi NOMURA, Yoshinobu TONOMURA
Pages
24A-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This report proposes a message board, "Resonant Bits," which selectively shows messages drawn on it by the user according to the frequencies of the voice, as if the message resonantly emerges in response to the user's voice.
-
Shinya TODA, Yoshinobu TONOMURA
Pages
24A-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper proposes an interactive system, "Sound of Surrounds," with which you can virtually place any sound in a real scene around you and instantly replay it by pointing to the position associated with the cue in the scene.
-
Daichi SUNOUCHI, Kiyoshi NOSU
Pages
24A-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This research analyzed the dialogue behaviors of pairs of movie appreciators. The dialogue scene of each subject pair was shot from first-, second-, and third-person views by video cameras. The recorded voices were converted to text data. The analyzed data of different pairs were compared to extract behavioral features.
-
Pages
24B-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Eisaku HIGUCHI, Eizaburo IWATA, Makoto HASEGAWA
Pages
24B-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
A stereoscopic vision method using a single still picture is proposed; the method is based on estimating which objects are in the background and which are in the foreground. When two objects overlap, the foreground object hides the background object. We compute the convex hull of each object and estimate whether each object is in the foreground or the background of the image. After this estimation, stereoscopic vision is produced using the anaglyph method. The experimental results show that the method works on a real image.
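The anaglyph composition step named above is standard and can be sketched briefly (an illustrative sketch, not the paper's code; the foreground/background estimation that produces the stereo pair is the paper's contribution and is not reproduced here): in a red-cyan anaglyph, the red channel comes from the left-eye image and the green and blue channels from the right-eye image.

```python
def make_anaglyph(left, right):
    # Combine a stereo pair into a red-cyan anaglyph.
    # Images are nested lists of (r, g, b) tuples: red from the
    # left image, green and blue from the right image.
    return [[(lp[0], rp[1], rp[2])
             for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

left = [[(255, 0, 0), (100, 100, 100)]]
right = [[(0, 255, 255), (50, 50, 50)]]
print(make_anaglyph(left, right))
# [[(255, 255, 255), (100, 50, 50)]]
```

Viewed through red-cyan glasses, each eye then sees only its own image, producing the stereoscopic effect.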
-
Hin YOU, Hideki KAKEYA
Pages
24B-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
An endoscopic surgery training system using a full HD glasses-free stereoscopic display based on a time-division multiplexing parallax barrier is realized. Using a stereoscopic camera, stereoscopic live-action video is displayed on our system. The result of the experiment shows that operators can work faster under the 3D condition.
-
Hayato TAKAHASHI, Hideki KAKEYA
Pages
24B-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
A full HD aerial stereoscopic display is realized by combining a convex lens and time-division parallax barrier.
Undistorted stereoscopic images are presented by the parallax barrier system with correction of image distortion.
-
Yan Dong, Hideki Kakeya
Pages
24B-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
A thin and wide HUD is attainable by using an autostereoscopic display. Crosstalk between the left-eye and right-eye images, however, can cause serious trouble in practice. In this paper we evaluate the acceptable level of crosstalk for 3D HUDs.
-
Shimpei SHINOHARA, Yasuhiro TAKAKI
Pages
24B-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We have developed a super multi-view head-up display (SMV-HUD) for automobiles. In this paper, we evaluated the temporal characteristics of depth perception for the three-dimensional images produced by the SMV-HUD. The accuracy of depth perception was measured for different observation times: 0.5, 1.0, and 2.0 s.