-
Mai KANAI, Yoshitsugu MANABE, Noriko YATA
Pages 24B-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
The inside shape of concave or tubular objects is difficult to measure because of the limited angle of view and the low density of projected patterns. This paper proposes a method of 3D measurement of inside surfaces using a fish-eye camera and concentric circle patterns.
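As a small illustration of the projected pattern named in the abstract (not of the measurement method itself), the following sketch renders a concentric-circle pattern with OpenCV; the resolution and ring spacing are arbitrary choices, not values from the paper.

```python
import cv2
import numpy as np

# Arbitrary projector resolution and ring spacing for illustration only.
h, w, spacing = 768, 1024, 40
pattern = np.zeros((h, w), dtype=np.uint8)
center = (w // 2, h // 2)

# Draw bright rings out to the image corner.
max_r = int(np.hypot(w, h) / 2)
for r in range(spacing, max_r, spacing):
    cv2.circle(pattern, center, r, color=255, thickness=spacing // 4)

cv2.imwrite("concentric_pattern.png", pattern)
```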
-
Pages 25A-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Shohei SATO, Tsuyoshi TAKAHASHI, Yoichi KAGEYAMA, Makoto NISHIDA
Pages 25A-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
A command identification system that uses lip motion features is practical because the user can operate it without touching any device. To develop a useful command input system, we propose a method that considers the variance of feature points in lip motion during utterance.
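The abstract mentions the variance of lip feature points; as a rough, hypothetical illustration of such a feature (not the authors' implementation), the sketch below computes a per-landmark variance over the frames of an utterance, assuming landmarks arrive as an (n_frames, n_points, 2) array.

```python
import numpy as np

def lip_motion_variance(landmarks):
    """Variance of lip feature points over an utterance.

    landmarks: array of shape (n_frames, n_points, 2) with the (x, y)
    position of each lip landmark per frame (a hypothetical input
    format; the paper's actual features may differ).
    """
    pts = np.asarray(landmarks, dtype=float)
    # Normalize each frame by the lip-center position so that head
    # translation does not dominate the variance.
    pts = pts - pts.mean(axis=1, keepdims=True)
    var_xy = pts.var(axis=0)      # per-landmark variance of x and y
    return var_xy.sum(axis=1)     # one scalar per landmark

# Example with synthetic motion: 30 frames, 20 lip landmarks.
rng = np.random.default_rng(0)
demo = rng.normal(size=(30, 20, 2)).cumsum(axis=0) * 0.1
print(lip_motion_variance(demo).round(3))
```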
-
Hiroki ISHIBASHI, Tsuyoshi TAKAHASHI, Yoichi KAGEYAMA, Masaki ISHII, M ...
Pages 25A-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Lip motion change is useful for detecting the occurrence of amusement. However, the relationship between physical condition and lip motion change has not been investigated yet. Therefore, we acquired lip motion data for two months and investigated this relationship.
-
Saaya URAKABE, Ryota KURAMOCHI, Miyuki SUGANUMA, Shinya MOCHIZUKI, Mit ...
Pages 25A-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
In recent years, paralanguage, which makes it possible to assess people's physical condition and emotion from unspoken information, has attracted attention. In this research, we evaluated the relationship between physical condition and lip movement.
-
Aoi TANAKA, Takeru MASHIO, Miyuki SUGANUMA, Shinya MOCHIZUKI, Mitsuho ...
Pages 25A-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We examined the distribution of gazing points while viewing pictures, changing the viewing distance and viewing position. Although this is the result of a single participant, the gazing points spread out when viewing a scenery image, whereas a tendency to concentrate on the person's face was seen when viewing a person image.
-
Takahiro FUKUSHIMA, Takashi YASUDA, Mika ODA
Pages 25A-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We report the results of evaluation experiments on closed captions of TV commercials using an eye-tracking device. Both hearing-impaired and hearing people (92 in total) participated. The results show no large differences between types of closed captions or between the two groups, which suggests that both groups view the captions similarly.
-
Keisuke SUZUKI, Hironobu SATO, Kiyohiko ABE
Pages 25A-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We measured vergence eye movement by image analysis based on the limbus eye-tracking method. We analyzed the images around the eyes while the subject gazed at an index placed in the depth direction. As a result, we confirmed that vergence eye movement can be estimated with a single video camera.
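The geometry behind vergence-based depth estimation is standard: for symmetric fixation, the vergence angle is determined by the interpupillary distance and the target depth. The sketch below shows only this general relationship; the interpupillary distance of 0.063 m is a typical illustrative value, not a figure from the paper.

```python
import math

def vergence_angle_deg(depth_m, ipd_m=0.063):
    """Total vergence angle (degrees) for a target at depth_m metres,
    assuming symmetric fixation; 0.063 m is a typical adult
    interpupillary distance used purely for illustration."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / depth_m))

def depth_from_vergence_m(angle_deg, ipd_m=0.063):
    """Inverse relation: estimate target depth from a measured vergence angle."""
    return (ipd_m / 2.0) / math.tan(math.radians(angle_deg) / 2.0)

for d in (0.3, 0.5, 1.0, 2.0):
    a = vergence_angle_deg(d)
    print(f"depth {d:.1f} m -> vergence {a:.2f} deg -> depth {depth_from_vergence_m(a):.2f} m")
```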
-
Tomoya KOHORIBATA, Hironobu SATO, Kiyohiko ABE
Pages 25A-7-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Many eye-gaze input interfaces use the corneal reflection method, which requires special devices. To resolve this point, we measured eye movement using SIFT feature values found on the iris. We also distinguished five kinds of eye movements while accounting for head movements.
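As a generic illustration of tracking eye movement with SIFT features (the abstract gives no implementation details), the sketch below matches SIFT keypoints between two eye-region frames with OpenCV and reports the median keypoint displacement; the file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names for two consecutive eye-region frames.
prev = cv2.imread("eye_frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("eye_frame_1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(prev, None)
kp2, des2 = sift.detectAndCompute(curr, None)

# Ratio-test matching (Lowe's criterion) between the two frames.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Median displacement of matched keypoints as a crude eye-movement estimate.
shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in good])
dx, dy = np.median(shifts, axis=0)
print(f"estimated eye movement: dx={dx:.1f}px, dy={dy:.1f}px from {len(good)} matches")
```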
-
Pages 25B-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Masato MIURA, Naoto OKAICHI, Jun ARAI, Tomoyuki MISHINA
Pages 25B-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We developed an integral imaging capture system equivalent to 100,000 pixels by using a camera array consisting of 64 high-definition cameras and a lens array consisting of approximately 100,000 small lenses. To stitch the captured elemental images with high accuracy, light rays are interpolated spatially and angularly. We conducted experiments to capture, stitch, and display three-dimensional images, and confirmed the validity of the system.
-
Shintaro ASO, Hidekazu KINJO, Nobuhiko FUNABASHI, Ken-ichi AOSHIMA, Da ...
Pages 25B-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We have developed a magneto-optical spatial light modulator driven by spin-transfer switching (spin-SLM). The fabricated spin-SLM device has a 100 × 100 array of pixels with a 2-μm pixel pitch. We have succeeded in its electrical operation and in observing two-dimensional output images using a magneto-optical microscope.
-
Daisuke KATO, Kenji MACHIDA, Tomoyuki MISHINA, Nobuhiko FUNABASHI, Hid ...
Pages 25B-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We have proposed an ultra-fine magneto-optical spatial-light-modulator driven by spin-transfer-switching (spin-SLM) for realizing electronic holography with a wide viewing-zone angle. In order to study the feasibility as a display device, we fabricated an ultra-fine hologram made of magneto-optical thin films covered with a transparent electrode layer and evaluated reconstructed three-dimensional images from the hologram.
-
Makoto OKUI, Koki WAKUNAMI, Ryutaro OHI, Yasuyuki ICHIHASHI, Boaz Jess ...
Pages 25B-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We discuss techniques for duplicating the hologram made by our wavefront printer. We also conducted a simple, basic experiment.
-
Kousuke HIRAHATA, Tatuie TSUKIJI, Haruo ISONO
Pages 25B-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This paper investigates discomfort such as VR sickness and eye strain with head-mounted displays (HMDs). Our experiments found that the video display delay of the HMD had an unduly large influence on the discomfort.
-
Tsubasa SASAKI, Hirohiko FUKAGAWA, Takahisa SHIMIZU, Yoshihide FUJISAK ...
Pages 25B-6-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
To fabricate inverted organic light-emitting diodes (inverted OLEDs) on a plastic film substrate, we developed a low-temperature process for the electron injection layer (EIL). We demonstrated an inverted OLED with a long lifetime by using an inorganic-organic hybrid material in the EIL, which is formed at a low temperature of 120°C.
-
Pages 25C-0-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
-
Takahiro ITAZURI, Tsukasa FUKUSATO, Shugo YAMAGUCHI, Shigeo MORISHIMA
Pages 25C-1-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
We propose a method for watching volleyball video effectively. We focus on rally scenes because volleyball is a rally-point-system game, and we propose a rally shot detection algorithm and a court detection algorithm. By combining these techniques, we can successfully detect rally scenes and automatically add event information to them.
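Court detection in sports video is often approached by finding the long, straight court lines; the sketch below illustrates that common idea with Canny edges and a probabilistic Hough transform in OpenCV. It shows the general technique only, not the detection algorithm proposed in the paper, and the frame file name is a placeholder.

```python
import cv2
import numpy as np

frame = cv2.imread("volleyball_frame.png")          # placeholder frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Court lines are bright and long, so edge detection followed by a
# probabilistic Hough transform picks them out of the background.
edges = cv2.Canny(gray, 80, 200)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=150, maxLineGap=20)

# Keep roughly horizontal or vertical segments as court-line candidates.
candidates = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < 15 or angle > 75:
            candidates.append((x1, y1, x2, y2))
print(f"{len(candidates)} court-line candidates found")
```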
-
Tadahiro OYAMA
Pages 25C-2-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Recently, research has been carried out to extract scenes automatically from sports video. In this research, I attempt automatic estimation of scenes in rugby games, which have rarely been targeted so far. As a result of classifying three kinds of scenes, estimation was possible with about 60% accuracy.
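A minimal sketch of frame-level scene classification in the spirit of this abstract: colour-histogram features fed to a support-vector classifier for three scene labels. The feature choice, classifier, labels, and synthetic data are assumptions for illustration, not the method or data used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def colour_histogram(frame, bins=8):
    """Concatenated per-channel histogram of an RGB frame (H, W, 3)."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 256), density=True)[0]
            for c in range(3)]
    return np.concatenate(hist)

# Synthetic stand-in for labelled frames of three scene types
# (hypothetical labels, e.g. scrum / lineout / open play).
rng = np.random.default_rng(1)
frames = rng.integers(0, 256, size=(300, 36, 64, 3))
labels = rng.integers(0, 3, size=300)

X = np.stack([colour_histogram(f) for f in frames])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")  # near chance on random data
```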
-
Makoto URAKAWA, Hiroshi FUJISAWA
Pages 25C-3-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Large volumes of information are generated by the minute, especially at big sports events such as the Olympics. The games are broadcast live or as recorded programs, and the results of the games are also aired. From the viewers' point of view, this causes a flood of information. This paper introduces a study on integrating game information and broadcasting schedules by structuring athlete and game data based on an ontology.
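To illustrate the kind of structuring the abstract refers to, the sketch below records an athlete, a game, and a broadcast start time as RDF triples with rdflib; the namespace, property names, and values are hypothetical, not taken from the paper's ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace and property names, for illustration only.
EX = Namespace("http://example.org/sports#")
g = Graph()
g.bind("ex", EX)

athlete = EX["athlete/123"]
game = EX["game/mens-100m-final"]

g.add((athlete, RDF.type, EX.Athlete))
g.add((athlete, EX.name, Literal("Sample Athlete")))
g.add((athlete, EX.competesIn, game))
g.add((game, RDF.type, EX.Game))
g.add((game, EX.broadcastStart, Literal("2016-08-14T22:25:00")))

# Linking athlete and game lets a viewer-facing service answer, e.g.,
# "when is this athlete's game on air?" with a single graph query.
print(g.serialize(format="turtle"))
```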
-
Yuta KUME, Mie SATO
Pages 25C-4-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
This study compares three modeling systems that differ in how the user's hands are presented. We examine which presentation of the user's hands makes barehanded interaction with a virtual object easy.
-
Keita MINAMIYAMA, Mie SATO, Miyoshi AYAMA
Pages 25C-5-
Published: 2016
Released on J-STAGE: January 23, 2020
CONFERENCE PROCEEDINGS
OPEN ACCESS
Impressions of high-gradation images have been actively studied. Impressions are influenced by the viewer's gaze areas. In this study, we examine impressions of high-gradation images by focusing on the viewer's gaze areas.