-
p. 15B-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
宮川 和典, 菊池 幸大, 安江 俊夫, 島本 洋, 持塚 多久男
p. 15B-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
To achieve an active optical filter for television cameras, we created a prototype of a metal salt
precipitation-type electrochromic dimming element. We found that the transmittance could be reversibly varied in
response to an applied voltage, and that flat optical transparency could be obtained across the entire visible light
region.
-
小池 中
p. 15B-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This platform is built on storage virtualization technology, contributing to improved efficiency of the news and sports production workflow. This paper describes the design and practical application of the platform.
-
池尾 誠哉, 松村 欣司, 藤沢 寛, 武智 秀
p. 15B-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
Since the Hybridcast service started in September 2013, broadcasters and application developers have struggled to develop Hybridcast applications and to overcome the differences in execution performance of the HTML5 browser on each TV receiver. To support this development, we have developed prototype Hybridcast benchmark test applications that evaluate Hybridcast HTML5 browser performance on each receiver. This paper describes our approach to designing the Hybridcast benchmark test applications.
-
p. 15C-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
木之下 滉大郎, 矢田 紀子, 眞鍋 佳嗣
p. 15C-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
When a convolutional neural network is obtained by transfer learning, the network may contain filters that raise the misrecognition rate. This paper proposes a method to select the optimal filters and remove the unnecessary ones.
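As a rough illustration of this kind of filter selection, the sketch below ranks the filters of one convolutional layer by the L1 norm of their weights and zeroes out the weakest ones. The L1-norm criterion and the PyTorch model are assumptions for illustration, not the authors' method.

```python
# Minimal sketch: rank the filters of one conv layer by the L1 norm of their
# weights and zero out the weakest half (a stand-in selection criterion).
import torch
import torchvision.models as models

model = models.alexnet(weights=None)  # stand-in for a transfer-learned CNN
conv = model.features[0]              # weights shaped (out_ch, in_ch, kH, kW)

with torch.no_grad():
    scores = conv.weight.abs().sum(dim=(1, 2, 3))          # one score per filter
    keep = scores.argsort(descending=True)[: conv.out_channels // 2]
    mask = torch.zeros(conv.out_channels, dtype=torch.bool)
    mask[keep] = True
    # "Remove" unnecessary filters by zeroing them; a full prune would also
    # shrink the next layer's input channels.
    conv.weight[~mask] = 0.0
    if conv.bias is not None:
        conv.bias[~mask] = 0.0
```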
-
平林 謙太郎, 中村 友昭, 金子 正秀
p. 15C-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This paper proposes a robust method for automatically extracting the mouth area from facial images by applying convolutional neural network (CNN)-based classification to each pixel. This will help to draw expressive portraits automatically from a wide range of facial images, which can expand the variety of communication.
-
鈴木 和人, 景山 陽一, 石沢 千佳子, 西田 眞
p. 15C-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
Road signs provide important information to guide and regulate the behavior of drivers and pedestrians to make their journeys safer and more comfortable. In this study, we propose a method to recognize speed-limit signs using pattern matching in night-scene videos.
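As a minimal illustration of the pattern-matching step, the sketch below runs normalized cross-correlation template matching with OpenCV; the image files, the template, and the detection threshold are placeholders rather than the authors' actual night-scene pipeline.

```python
# Minimal sketch of template matching for a speed-limit sign in a night frame.
import cv2

frame = cv2.imread("night_frame.png", cv2.IMREAD_GRAYSCALE)          # placeholder
template = cv2.imread("speed_limit_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation is relatively tolerant of the brightness
# variations found in night scenes.
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.7:                                                     # placeholder threshold
    h, w = template.shape
    cv2.rectangle(frame, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)
```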
-
成田 嶺, 小川 徹, 松井 勇佑, 山崎 俊彦, 相澤 清晴
p. 15C-4-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We present a method to extract feature vectors from Manga images with a CNN for sketch-based Manga retrieval. We showed that fine-tuning AlexNet, pre-trained on ImageNet, with a large number of Manga face images from Manga109 improves the accuracy of Manga character retrieval.
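A plausible fine-tuning setup of this kind, sketched with PyTorch under assumed dataset paths, class count, and training settings (not the authors' configuration):

```python
# Minimal sketch of fine-tuning an ImageNet-pretrained AlexNet on manga face crops.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_classes = 100  # placeholder: number of manga characters
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, num_classes)  # replace the final layer

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("manga109_faces/", transform=tf)  # placeholder path
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

After fine-tuning, feature vectors for retrieval would typically be taken from an intermediate layer of the network.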
-
Yuguan XING, Sosuke AMANO, Toshihiko YAMASAKI, Kiyoharu AIZAWA
p. 15C-5-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This research proposes a method for generating distributed representations, or vectors, of food names in food recording systems that faithfully represent the characteristics of the dishes the names refer to. These vectors can be used to realize a flexible and expandable associative search system in such services.
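The abstract does not state which embedding model is used, so the sketch below only illustrates the general idea with word2vec trained on food records; the corpus and hyperparameters are placeholders.

```python
# Minimal sketch: distributed representations of food names for associative search.
from gensim.models import Word2Vec

# Each "sentence" is one user's food record (a list of dish names).
records = [
    ["miso_soup", "rice", "grilled_salmon"],
    ["ramen", "gyoza"],
    ["rice", "curry"],
]
model = Word2Vec(records, vector_size=100, window=5, min_count=1, epochs=50)

# Associative search: dishes whose vectors are close to a query dish.
print(model.wv.most_similar("rice", topn=3))
```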
-
水野 倫宏, 竹木 章人, 堀口 翔太, 山﨑 俊彦, 相澤 清晴
p. 15C-6-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We present a new method of novelty detection in image recognition based on a convolutional neural network (CNN). We use a sigmoid layer as the last layer of the CNN instead of a softmax layer. We found that a CNN with a sigmoid layer detects novelties better than one with a softmax layer on an easy dataset, but worse on a difficult dataset.
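A minimal sketch of the sigmoid-versus-softmax idea: independent sigmoid outputs let an input be flagged as novel when no known class fires strongly. The backbone and the threshold are placeholders, not the authors' network.

```python
# Minimal sketch: CNN with a sigmoid (not softmax) last layer for novelty detection.
import torch
import torch.nn as nn

class SigmoidCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.fc(h))  # per-class scores in [0, 1]

model = SigmoidCNN()
scores = model(torch.randn(1, 3, 32, 32))
is_novel = scores.max().item() < 0.5  # no known class fires strongly -> novelty
```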
-
p. 22B-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
中村 憲嘉, 村木 祐太, 西尾 孝治, 小堀 研一
p. 22B-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We propose a point cloud thinning method for extracting the core lines of reinforcing steel rods from point cloud data. The method thins the point cloud using a density function field and extracts the core lines. Experimental results show that the method is effective.
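The abstract does not define the density function field, so the sketch below only illustrates density-based thinning in general: it estimates local density with a KD-tree and keeps the densest points as a rough stand-in for the rod core. The radius and the keep ratio are placeholders.

```python
# Minimal sketch of density-based point cloud thinning.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10000, 3)      # placeholder point cloud (N x 3)
tree = cKDTree(points)
radius = 0.02
# Density = number of neighbours within the radius (including the point itself).
density = np.array([len(idx) for idx in tree.query_ball_point(points, radius)])

threshold = np.percentile(density, 80)  # keep the densest 20% of points
core_points = points[density >= threshold]
```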
-
加藤 拓哉, 眞鍋 佳嗣, 矢田 紀子
p. 22B-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
It is difficult and time-consuming for a person to coordinate the connections between CG motions, even with motion capture systems. This paper proposes a method for automatically generating the connections between CG motions to improve work efficiency and to generate more realistic CG motions.
-
小川 将範, 山崎 俊彦, 相澤 清晴
p. 22B-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This paper aims to create frame-sampled and stabilized omnidirectional video. We propose a method which
considers camera motion between omnidirectional frames. The results show that our method can select suitable frames for
stabilization in omnidirectional video.
-
勝沼 貴文, 平井 経太, 堀内 隆彦
p. 22B-4-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
Projection mapping is a useful technique to control material perception. In this study, we propose a method to control texture appearance by frequency-based projection mapping. We conducted subjective evaluation experiments to validate the feasibility of our method. The experimental results show our method is effective for texture appearance control.
-
松木 純也, 眞鍋 佳嗣, 矢田 紀子, 野村 憲司, 廣橋 博仁
p. 22B-5-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This paper proposes a three-dimensional measurement system for the foot using RGB-D cameras. We acquire three-dimensional point cloud data with four RGB-D cameras and align these data to obtain three-dimensional measurement data. We aim to estimate the best-fitting class between shoe and foot data using their three-dimensional features.
-
星 祐太, 金子 豊, 河合 吉彦, 上原 道宏
p. 22B-6-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We are considering a new style of watching television with a communication robot in the living room. In this paper, we propose a method for detecting the TV position that combines frame subtraction and edge detection on camera images from the robot. Experimental results in an environment simulating a living room show the effectiveness of the proposed method.
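A minimal sketch of combining frame subtraction and edge detection with OpenCV, assuming placeholder frames and thresholds (not the authors' exact pipeline): mask moving pixels, intersect them with dilated edges, and take the bounding box of the largest blob as the TV candidate.

```python
# Minimal sketch: frame subtraction + edge detection to locate a screen region.
import cv2
import numpy as np

prev = cv2.imread("robot_view_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("robot_view_t1.png", cv2.IMREAD_GRAYSCALE)

# The TV screen changes between frames while the room stays mostly static.
motion = cv2.absdiff(curr, prev)
_, motion_mask = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)

# Edges of the screen bezel constrain the candidate region.
edges = cv2.Canny(curr, 50, 150)
candidate = cv2.bitwise_and(motion_mask, cv2.dilate(edges, np.ones((5, 5), np.uint8)))

# OpenCV 4.x: findContours returns (contours, hierarchy).
contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
```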
-
p. 22C-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
室 克弥, 鹿喰 善明, 澤畠 康仁
p. 22C-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We conducted subjective evaluation experiments to clarify an appropriate presentation in terms of display size. The results show that viewers have stronger impressions of objects when they are shown on the display at their actual size.
-
原田 真彰, 鹿喰 善明, 澤畠 康仁
p. 22C-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We conducted subjective tests to clarify the relationship between the characteristics of display brightness and viewers' impressions. We found that display brightness has an effect that enhances some impressions.
-
原内 瞭, 鹿喰 善明, 澤畠 康仁
p. 22C-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We propose an image-processing model for enhancing the impressions of images. We conducted a preliminary experiment to design the model parameters. We then confirmed that the model works through a subjective evaluation experiment on the processed images.
-
小田桐 航平, 鹿喰 善明, 澤畠 康仁
p. 22C-4-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We conducted subjective tests to clarify the relation between wide color gamut (WCG) and impressions. We found that WCG images have an effect that enhances some impressions.
-
森 裕文, 木戸 章仁, 山﨑 初夫, 山田 宗男, 中野 倫明
p. 22C-5-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We have been evaluating the driving behavior of elderly drivers using the time to collision (TTC). However, the evaluation results sometimes differed from the inspection decision. In this study, we developed a new evaluation method that automatically obtains the same result as the inspection decision.
-
永谷 優樹, 木本 圭也, 山崎 初夫, 山田 宗男, 中野 倫明
p. 22C-6-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
Divided attention is one of the important factors in operating a vehicle. Using a driving simulator, we study the estimation of a driver's divided attention.
-
宮部 真太朗, 杉浦 崇也, 山田 宗男, 中野 倫明
p. 22C-7-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
Conventional writing-type executive function tests are so complex that elderly people cannot easily undergo them. Thus we developed an automated evaluation system of executive function. This report introduces the system and verifies its usefulness.
-
p. 23A-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
越智 雄亮, 藤田 欣裕
p. 23A-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We conducted an experimental shoot of a soccer ball using a high dynamic range (HDR) imaging method to improve the accuracy of extracting moving objects for trajectory analysis. The results showed the effectiveness of the HDR imaging method.
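The abstract does not detail the HDR pipeline, so the sketch below illustrates just one option, exposure fusion of a bracketed set with OpenCV before moving-object extraction; the file names are placeholders.

```python
# Minimal sketch: fuse a bracketed exposure set into one well-exposed frame
# before extracting the moving ball.
import cv2

# Placeholder file names for under-, mid- and over-exposed shots of the same scene.
exposures = [cv2.imread(f) for f in ("under.png", "mid.png", "over.png")]
fused = cv2.createMergeMertens().process(exposures)     # float image, roughly [0, 1]
fused_8bit = (fused * 255).clip(0, 255).astype("uint8")

# Moving-object extraction would then run on the fused frame, e.g. by background
# subtraction against a fused background image.
```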
-
高橋 一徳, 橋詰 聖仁
p. 23A-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This paper describes the development and practical use of a very small camera that can be attached to a sports referee and captures "the referee's eye" for live broadcasting. The camera has HD-SDI (1080/59.94i) output, weighs only 40 g, measures 45 x 26 x 26 mm, and has been in practical use for live sports broadcasts since 2015. This development allows viewers to enjoy realistic pictures from the referee's point of view.
-
山﨑 貴弘, 安江 俊夫, 小倉 渓, 菊地 幸大, 梶山 岳士, 宮下 英一, 島本 洋
p. 23A-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
An 8K slow-motion system is strongly desired for 8K sports content production. We have developed a 240-fps test shooting system using full-featured 8K equipment and conducted a slow-motion shooting experiment on sports content.
-
村上 洋平, 上村 明, 高木 冬夫
p. 23A-4-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We developed an uncompressed 4K player and recorder with the goal of 4K broadcasting on BS channels in 2018. The device we developed is composed of four uncompressed HD players and recorders with SSDs. The cost of the storage media is less than 10 percent of that of existing devices because universal SSDs are used.
-
p. 23B-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
宮島 秀昂, 矢田 紀子, 眞鍋 佳嗣
p. 23B-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This paper proposes an AR sightseeing system suitable for sightseeing spots that uses proximity detection and markerless AR. The system uses a beacon for proximity detection of the user and AKAZE features for the markerless AR. Using these techniques realizes a highly precise system that does not spoil the scenery.
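A minimal sketch of markerless registration with AKAZE features, assuming a reference photo of the spot and placeholder thresholds (the beacon-based proximity detection is not shown): match the reference against the live frame and estimate a homography for the overlay.

```python
# Minimal sketch: AKAZE feature matching + homography for a markerless AR overlay.
import cv2
import numpy as np

ref = cv2.imread("spot_reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder images
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(ref, None)
kp2, des2 = akaze.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)                      # AKAZE descriptors are binary
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)       # warp for the AR content
```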
-
篠塚 直希, 眞鍋 佳嗣, 矢田 紀子
p. 23B-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
When virtual objects are displayed through glass, such as a show window, they appear to exist in front of the glass because of the reflection of the surroundings. This paper proposes extracting the reflections using a stereo camera and, on the basis of the reflections, displaying the virtual objects as if they were behind the glass.
-
二階堂 光希, 眞鍋 佳嗣, 矢田 紀子
p. 23B-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
An invisible AR system using polarized light does not spoil an object's appearance. However, there are still issues with the invisibility and the recognition accuracy of the AR marker. This paper proposes a solution to these issues and reports the improvement of an invisible AR system using polarized light.
-
島田 一成, 眞鍋 佳嗣, 矢田 紀子, 鈴木 卓治
p. 23B-4-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This paper proposes an interactive AR exhibition that uses projection mapping for exhibition objects that have textures and colors. The textures and colors of exhibition objects often disturb projection mapping. We try to solve this problem with a projector-camera system.
-
盛岡 寛史, 山内 結子, 三ッ峰 秀樹
p. 23B-5-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
Our novel studio robot helps to improve TV program production featuring live performers and CG counterparts. It has a light sensor and a motion sensor enabling more integrated video compositing with lively interaction between the performers and CG characters.
-
Thiwat Rongsirigul, 中島 悠大, 佐藤 智和, 横矢 直和
p. 23B-6-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We accelerate a view-dependent texture mapping (VDTM) process to generate novel view images for a stereoscopic HMD from captured images. By using the similarity between the stereoscopic pair of HMD images, we improve the performance of the VDTM process.
-
p. 24A-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
池田 光希, 眞鍋 佳嗣, 矢田 紀子
p. 24A-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
It is important for wheelchair users to know about surrounding objects and to be aware of steps when they move around. However, it is difficult for them to obtain this information because their eye height is lower and their physical movements are more restricted than those of able-bodied people. This paper proposes a method of providing surrounding information using 3D information obtained from above the wheelchair user.
-
鎌田 暢介, 佐藤 寛修, 阿部 清彦
p. 24A-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
If an input method for people with physical disabilities uses eye blinks for input decisions, it requires classification of eye-blink types. In this research, we confirmed a significant difference in the duration of each type of eye blink. Based on this, we developed an automatic identification method for voluntary eye blinks, in which calibration is performed for every examinee. We report the automatic identification method and an experiment on an input system using conscious eye blinks.
-
野村 聡史, 外村 佳伸
p. 24A-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This report proposes a message board, "Resonant Bits", which selectively shows messages drawn on it by users based on the frequencies of the voice, as if the message resonantly emerges in response to the user's voice.
-
戸田 慎也, 外村 佳伸
p. 24A-4-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This paper proposes an interactive system, "Sound of Surrounds", with which you can virtually place any sound in a real scene around you and instantly replay it by pointing at the position associated with the cue in the scene.
-
須之内 大地, 野須 潔
p. 24A-6-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
This research analyzed the dialogue behaviors of pairs of movie viewers. The dialogue scene of each subject pair was shot from first-, second- and third-person views by video cameras. The recorded voice was converted to text data. The analyzed data of different pairs were compared to extract behavioral features.
-
p. 24B-0-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
-
樋口 栄作, 岩田 英三郎, 長谷川 誠
p. 24B-1-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
A stereoscopic vision method using a single still picture is proposed; the method is based on estimating which objects are in the foreground and which are in the background. When two objects overlap, the foreground object hides the background object. We compute the convex hull of each object and estimate whether it lies in the foreground or the background of the image. After this estimation, stereoscopic vision is produced using the anaglyph method. Experimental results show that the method works on a real image.
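A minimal sketch of the final anaglyph composition step only (red channel from the left view, green and blue from the right); the paper's convex-hull-based foreground/background estimation is not reproduced here.

```python
# Minimal sketch: red-cyan anaglyph composition from a left/right view pair.
import cv2

left = cv2.imread("left_view.png")    # placeholder synthesized views (BGR)
right = cv2.imread("right_view.png")

anaglyph = right.copy()
anaglyph[:, :, 2] = left[:, :, 2]      # take the red channel from the left view
cv2.imwrite("anaglyph.png", anaglyph)
```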
-
楊 斌, 掛谷 英紀
p. 24B-2-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
An endoscopic surgery training system using a full-HD glasses-free stereoscopic display based on a time-division multiplexing parallax barrier is realized. Using a stereoscopic camera, stereoscopic live-action video is displayed with our system. The experimental results show that operators can work faster under the 3D condition.
-
高橋 勇人, 掛谷 英紀
p. 24B-3-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
A full HD aerial stereoscopic display is realized by combining a convex lens and time-division parallax barrier.
Undistorted stereoscopic images are presented by the parallax barrier system with correction of image distortion.
-
厳 棟, 掛谷 英紀
p. 24B-4-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
A thin and wide HUD is attainable by using an autostereoscopic display. Crosstalk between the left-eye image and the right-eye image, however, can cause serious trouble in practice. In this paper, we evaluate the acceptable level of crosstalk for 3D HUDs.
-
篠原 慎平, 高木 康博
p. 24B-5-
Published: 2016
Released online: 2020/01/23
Conference proceedings / Abstracts
Open Access
We have developed a super multi-view head-up display (SMV-HUD) for automobiles. In this paper, we evaluate the temporal characteristics of depth perception for the three-dimensional images produced by the SMV-HUD. The accuracy of depth perception was measured for different observation times of 0.5, 1.0, and 2.0 s.