-
Article type: Cover
Pages: Cover1-
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Index
Pages: Toc1-
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Masayuki TANIMOTO, Kazuyoshi SUZUKI
Article type: Article
Session ID: AIT2012-101
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
FTV (Free-viewpoint TV), which enables viewers to see a 3D world by freely changing the viewpoint, is the ultimate 3DTV and ranks at the top of visual media. MPEG has been promoting the international standardization of FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC, completed in 2009, enables efficient coding of multiple camera views and has been adopted by Blu-ray 3D. 3DV is a standard that targets a variety of 3D displays and is currently in progress; its standardization covers both compression and the 3D format. Here, the global view and depth format is presented.
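A view-plus-depth representation is typically used to synthesize new viewpoints on the display side. The sketch below is an illustrative depth-image-based warping step only, not the 3DV reference method; the rectified-camera assumption, focal length, baseline, and depth units are hypothetical.

    # Minimal depth-image-based rendering sketch (illustrative only).
    # Assumes rectified cameras separated by a horizontal baseline, so the warp
    # reduces to a per-pixel horizontal disparity shift; holes are not filled.
    import numpy as np

    def warp_to_virtual_view(color, depth, focal_px, baseline_m):
        """Forward-warp a reference view to a virtual view using its depth map."""
        h, w = depth.shape
        disparity = focal_px * baseline_m / np.maximum(depth, 1e-6)  # pixels
        virtual = np.zeros_like(color)
        xs = np.arange(w)
        for y in range(h):
            new_x = np.clip((xs - disparity[y]).astype(int), 0, w - 1)
            virtual[y, new_x] = color[y, xs]  # later writes win; holes remain
        return virtual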
-
Hiroki NAKAI, Tatsuhide MATSUDA, Akinobu MAEJIMA, Shigeo MORISHIMA
Article type: Article
Session ID: AIT2012-102
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, an aging face composition method based on frequency analysis is proposed. First, for the aging texture, we apply a two-dimensional Fourier transform to flat areas of the face images in the database and determine the correlation coefficient between age and each frequency component in frequency space. Next, we convert the frequency components with high correlation coefficients toward the target age and obtain the target texture by inverse Fourier transform. With the proposed method, we can generate an aged face while maintaining personal skin characteristics.
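The frequency-domain step can be pictured as follows. This is only a sketch under assumptions: the correlation map, the linear per-bin change with age, and the threshold are placeholders, not the authors' exact procedure.

    # Illustrative age conversion of a skin patch in the frequency domain
    # (assumed linear model per frequency bin; not the paper's implementation).
    import numpy as np

    def age_convert_patch(patch, corr, slope, current_age, target_age, corr_thresh=0.5):
        """Shift strongly age-correlated frequency magnitudes toward a target age.

        patch: (N, N) grayscale skin patch from a flat facial area
        corr:  (N, N) correlation between age and |F(u, v)| (from a database)
        slope: (N, N) assumed change of |F(u, v)| per year
        """
        spectrum = np.fft.fft2(patch)
        magnitude = np.abs(spectrum)
        phase = np.angle(spectrum)

        # Adjust only components whose magnitude correlates strongly with age.
        mask = np.abs(corr) >= corr_thresh
        magnitude[mask] += slope[mask] * (target_age - current_age)
        magnitude = np.maximum(magnitude, 0.0)

        return np.fft.ifft2(magnitude * np.exp(1j * phase)).real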
-
Takashi SATOU, Mitsuhiro GOTO, Taiga YOSHIDA, Suguru HIGASHINO
Article type: Article
Session ID: AIT2012-103
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We developed a programming environment, Python/Vk, for efficient research on video handling technologies. Python/Vk provides various video event detectors such as a shot-boundary detector and a camera motion estimator. Applications built with Python/Vk can easily decode frame images and audio data from encoded video files, build indexes for video structuring and content-based retrieval, and edit video summaries. Combining it with open-source Python libraries such as OpenCV, NumPy and SciPy gives applications more advanced functionality. We describe the architecture of Python/Vk, its productivity compared with C++, and example applications.
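Python/Vk's own API is not reproduced here; the sketch below only illustrates the kind of shot-boundary detection such an environment offers, written against plain OpenCV with an assumed histogram-correlation threshold.

    # Histogram-difference shot-boundary detector using OpenCV directly.
    # Illustrative stand-in, not Python/Vk's actual interface.
    import cv2

    def detect_shot_boundaries(video_path, threshold=0.5):
        cap = cv2.VideoCapture(video_path)
        boundaries, prev_hist, index = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            hist = cv2.normalize(hist, hist).flatten()
            if prev_hist is not None:
                # Low correlation between consecutive histograms suggests a cut.
                if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                    boundaries.append(index)
            prev_hist, index = hist, index + 1
        cap.release()
        return boundaries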
-
Yoshinobu TONOMURA
Article type: Article
Session ID: AIT2012-104
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this report, my personal opinion on how the recent information environment affects the young is presented. It is especially important for the young, who are still developing toward maturity, to reconsider how they act when using information devices. Some points of action to counteract the problems are also stated.
-
Naoki KOBAYASHI, Michiie SAKAMOTO, Akinori HASHIGUCHI, Masahiro YAMAGU ...
Article type: Article
Session ID: AIT2012-105
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Pathology is an important part of medical diagnosis. Virtual slide technology, which stores and displays whole digital pathological images, is advancing rapidly and is expected to find clinical use. This paper introduces a study of "comprehensive research and development of basic recognition technology for pathological imaging (histological image recognition for quantitative pathological diagnosis)". Cancer diagnosis needs quantitative indices of tumor differentiation and the malignancy of carcinoma. We show the roles of image processing in early-stage pathological diagnosis using high-quality histological images.
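As one example of the kind of quantitative index image processing can supply from a virtual slide, the sketch below estimates nuclear density in a stained patch by simple thresholding; the channel choice, Otsu threshold, and size filter are assumptions for illustration, not the project's actual pipeline.

    # Rough nuclear-density estimate from a stained histological patch
    # (illustrative only).
    import cv2

    def nuclear_density(patch_bgr, min_area=30):
        """Count nucleus-like blobs per pixel via Otsu thresholding."""
        gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
        # Nuclei are darker than surrounding tissue in hematoxylin staining.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
        nuclei = sum(1 for i in range(1, n_labels)
                     if stats[i, cv2.CC_STAT_AREA] >= min_area)
        h, w = gray.shape
        return nuclei / (h * w)  # scale by image resolution as needed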
-
Takafumi MARUTANI, Toshiaki FUJII, Kenji MASE, Tetsuya KAWAMOTO
Article type: Article
Session ID: AIT2012-106
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have developed a novel networked multi-view video content viewing system based on video streaming technology. The system incorporates a target-centered viewing interface to enhance the multi-view viewing experience and exploits Microsoft's Smooth Streaming to enable synchronized, hierarchically encoded streaming. Smooth Streaming Media Element (SSME) players are invoked on a web client machine to download adequate stream files according to the performance of the network and the client. We apply the system to multi-view video content of a wrestling match to check its performance.
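Adaptive streaming clients of this kind pick a quality level per chunk from measured throughput. The outline below is only a schematic of that selection logic; the bitrate ladder, safety margin, and URL template are hypothetical, and the real selection is performed inside the SSME player.

    # Outline of per-chunk quality selection in an adaptive streaming client.
    # Bitrates, margin, and URL layout are hypothetical placeholders.
    BITRATES_BPS = [350_000, 700_000, 1_500_000, 3_000_000]  # assumed ladder

    def pick_quality(measured_bps, safety=0.8):
        """Choose the highest bitrate that fits within the measured bandwidth."""
        usable = measured_bps * safety
        candidates = [b for b in BITRATES_BPS if b <= usable]
        return max(candidates) if candidates else min(BITRATES_BPS)

    def chunk_url(camera_id, quality_bps, chunk_index):
        # Hypothetical manifest layout for synchronized multi-view streams.
        return (f"/streams/camera{camera_id}/"
                f"QualityLevels({quality_bps})/Fragments(video={chunk_index})")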
-
Tomoya KOSAKA, Jyouji NAKAYAMA, Shinji FUKATSU, Akihito AKUTSU, Kenich ...
Article type: Article
Session ID: AIT2012-107
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The increase and diversification of information on the Internet mean that the usual information filtering methods can no longer cope with information overload. In this situation, curation, the collection and sharing of information from an individual perspective, is attracting attention as a way to reduce this problem. In this paper, we introduce a new information retrieval system that provides a curation service. We propose a definition of perspective, a method of extracting individual perspectives, and a method of extracting representative individual perspectives used in this system, and we report the results of applying this definition and these methods to actual blog articles as an example.
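The paper's own extraction method is not reproduced here. As a rough illustration of what characterizing an individual perspective could look like, the sketch below ranks the terms that distinguish one curator's articles from the rest of a corpus using TF-IDF; this is an assumed technique, not the proposed one.

    # Illustrative only: characterize a curator's "perspective" by the terms
    # that best distinguish their articles from the rest of the corpus.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def perspective_terms(curator_articles, other_articles, top_k=10):
        docs = [" ".join(curator_articles), " ".join(other_articles)]
        vectorizer = TfidfVectorizer()
        tfidf = vectorizer.fit_transform(docs).toarray()
        terms = np.array(vectorizer.get_feature_names_out())
        # Terms weighted high for the curator but low elsewhere.
        score = tfidf[0] - tfidf[1]
        return terms[np.argsort(score)[::-1][:top_k]].tolist()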
-
Naoyoshi YAMAMOTO, Yoshinobu TONOMURA
Article type: Article
Session ID: AIT2012-108
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This report proposes the "Utteractive Wall" (Utter + Interactive Wall), which attracts people and stirs them up to join what happens in what we call the ambient space. The Utteractive Wall catches the eye of a passerby with floating bubbles shown on its large, wide display, and when the person approaches the wall and touches one of the bubbles, someone's recorded voice is replayed. The prompted person can then record his or her own voice, which becomes a new bubble. This repetition makes the ambient space ever more attractive. The system comprises seven units, each of which has a PC, a large vertical display with embedded touch sensors, a webcam, and a microphone. The proposed system could be a platform for future interactive applications in ambient space.
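The report describes the hardware composition; the touch-play-record cycle of one unit can be sketched roughly as below. The helper object for audio playback and recording is hypothetical, not the actual system's interface.

    # Schematic of one unit's touch-play-record cycle (hypothetical helpers;
    # the real system drives a touch-sensing display, webcam, and microphone).
    class UtteractiveUnit:
        def __init__(self):
            self.bubbles = []  # each bubble holds one recorded voice clip

        def on_touch(self, bubble_id, audio_io):
            if self.bubbles:
                # Replay the voice stored in the touched bubble.
                audio_io.play(self.bubbles[bubble_id % len(self.bubbles)])
            # Invite the toucher to leave a voice, which becomes a new bubble.
            clip = audio_io.record(seconds=5)
            if clip is not None:
                self.bubbles.append(clip)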
-
Article type: Appendix
Pages: App1-
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Appendix
Pages: App2-
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Appendix
Pages: App3-
Published: August 30, 2012
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS