-
Article type: Cover
Pages
Cover1-
Published: October 21, 2009
Released on J-STAGE: September 20, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Index
Pages
Toc1-
-
Hisayuki Sasaki, Masahiro Kawakita, Jun Arai, Makoto Okui, Fumio Okano ...
Article type: Article
Session ID: 3DIT2009-66/IDY2009-
We have been conducting research on integral three-dimensional television using extremely high-resolution video with a dual-green (G1 and G2) pixel-offset method. In this paper, we propose a pixel-offset error measurement method using a lens array. In this method, the error in the pixel-offset distance between the G1 and G2 panels is magnified by the lens array, so it can be detected easily and with high accuracy.
-
Jun ARAI, Masahiro KAWAKITA, Hisayuki SASAKI, Hitoshi HIURA, Masato MI ...
Article type: Article
Session ID: 3DIT2009-67/IDY2009-
Integral 3-D television using the Super Hi-Vision video system (7,680 pixels horizontally and 4,320 pixels vertically) has been developed. A 3-D image is reconstructed from 400 (H) × 250 (V) elemental images. Compared with the conventional system, both the vertical and horizontal resolutions of the 3-D images are doubled. We confirmed a viewing angle of 24 degrees.
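As an illustrative consistency check on the figures above (simple arithmetic, not taken from the paper): dividing the Super Hi-Vision panel among the elemental images shows how many display pixels each elemental image can use.

```python
panel_h, panel_v = 7680, 4320        # Super Hi-Vision pixel counts
elemental_h, elemental_v = 400, 250  # elemental images reconstructing the 3-D image

# Pixels available to each elemental image (illustrative arithmetic only)
pixels_per_elemental_h = panel_h / elemental_h  # 19.2 pixels horizontally
pixels_per_elemental_v = panel_v / elemental_v  # 17.28 pixels vertically
```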
-
Kentaro KOZAKO, Yasuhiro TAKAKI
Article type: Article
Session ID: 3DIT2009-68/IDY2009-
When the frame rate is decreased or an image moves quickly, jerky motion known as "jerkiness" is perceived. Many studies have characterized jerkiness for 2D images, but jerkiness for 3D images has not been studied. The aim of this research is to characterize jerkiness on a high-density directional display that has been developed as a natural 3D display. We conducted subjective evaluations under a static viewing condition and found that jerkiness was more noticeable for nearer 3D images and less noticeable for farther ones. We showed that, when the velocity of a 3D image is expressed as a visual angle velocity, the detection and tolerance thresholds of jerkiness are determined by the visual angle velocity and the frame rate. This characteristic is equivalent to that already reported for 2D images.
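The depth dependence described above can be sketched numerically. In the small-angle approximation, the visual angle velocity is roughly the linear image velocity divided by the distance from the viewer to the perceived 3D image, so a nearer image yields a higher angular velocity (and thus more noticeable jerkiness) for the same physical motion. The function name and distances below are illustrative assumptions, not values from the paper.

```python
import math

def visual_angle_velocity_deg(image_velocity_mm, viewing_distance_mm):
    # Small-angle approximation: angular velocity ≈ linear velocity / distance
    # (radians/s), converted here to degrees/s.
    return math.degrees(image_velocity_mm / viewing_distance_mm)

# Same physical velocity (100 mm/s), two perceived depths (hypothetical):
near = visual_angle_velocity_deg(100.0, 500.0)   # image popped out toward the viewer
far = visual_angle_velocity_deg(100.0, 1500.0)   # image displayed farther away
# near > far: the nearer image sweeps a larger visual angle per second.
```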
-
Yohei Shinozaki, Nichiyo Nago, Yasuhiro Takaki
Article type: Article
Session ID: 3DIT2009-69/IDY2009-
We have proposed a new super multi-view display system that increases the number of viewpoints by combining a flat-panel three-dimensional display system with a multi-projection three-dimensional display system, and have demonstrated a super multi-view display with 256 viewpoints (SMV256). In this study, we evaluated the crosstalk between viewpoints and the ray-focusing property of the SMV256. To evaluate the crosstalk, the photosensitive area of a CCD is placed on the plane where the viewpoints are generated and the intensity distribution of each viewpoint is measured. To evaluate how rays focus at the points constituting a three-dimensional image, the focus of a schematic eye is varied around the planes where the rays converge and the resulting images are measured.
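A common way to quantify viewpoint crosstalk from measured intensity distributions is the ratio of light leaking from all other viewpoints to the intensity of the intended viewpoint. The sketch below uses this conventional definition as an assumption; the paper's exact metric may differ, and the intensity values are hypothetical.

```python
def viewpoint_crosstalk(intensities, target):
    # Ratio of light leaking from all other viewpoints to the intensity
    # of the intended viewpoint, measured at the target viewpoint position.
    leakage = sum(intensities) - intensities[target]
    return leakage / intensities[target]

# Hypothetical intensity distribution measured at one viewpoint plane:
# the intended viewpoint dominates; its neighbours leak a little light.
ratio = viewpoint_crosstalk([0.05, 1.0, 0.07], target=1)
```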
-
Suguru NAKAGAWA, Toshihiko YAMASAKI, Kiyoharu AIZAWA
Article type: Article
Session ID: 3DIT2009-70/IDY2009-
We are developing a compression algorithm for displaying TVM (Time-Varying Mesh) content on mobile terminals. We have achieved a very high compression rate by combining spatial and temporal compression. However, the representable motion was limited because the orientation of the body was assumed to be constant throughout the sequence. In this paper, we add rotation of the body using the hip motion.
-
Yuichi Iwadate
Article type: Article
Session ID: 3DIT2009-71/IDY2009-
Although video images are two-dimensional, three-dimensional information about a real scene can be handled by using video captured by multiple cameras; we call such handling multi-view video processing. Research related to multi-view video processing is progressing widely, from the viewpoints of novel image-expression technology, next-generation image media, and so on. In our laboratories, a multi-view video system for TV program production has been developed, and a stereoscopic TV system is being studied for the future. This paper presents an outline of our research.
-
Naho INAMOTO, Masanori KAKIMOTO, Yuki TAKAYA, Akihito ENOMOTO, Hideo S ...
Article type: Article
Session ID: 3DIT2009-72/IDY2009-
This paper proposes a method of generating 3D sports content from images photographed with several cameras. To generate content for 3D viewing with conventional methods, users must either capture an object with several calibrated cameras and obtain its depth information, or capture the object with the same number of cameras as the target stereoscopic display has viewpoints, with the camera disparity matching that of the display. In contrast, our proposed method generates the images required for 3D viewing by interpolating the given relative camera positions and then converting the captured images into 3D-view images. Since this method requires no strong camera calibration and absorbs display-to-display variability in software, it enables users to efficiently generate flexible 3D content adaptable to multiple 3D displays.
-
Shunsuke YOSHIDA, Sumio YANO, Hiroshi ANDO
Article type: Article
Session ID: 3DIT2009-73/IDY2009-
We propose a novel 360° auto-stereoscopic 3D display optimized for tabletop tasks. The display employs a cylindrical or conical optical device with a directional optical characteristic, together with circularly arranged projectors. These devices are installed underneath the table so as not to obstruct the working space. One novelty of the proposed method is that it forms a ring-shaped viewing area above the table: when users look from within this area, they observe from each direction an individual 3D image with correct perspective on the table, like a centerpiece.
-
Jaeyong LEE, Tetsuya MIYASHITA, Tatsuo UCHIDA
Article type: Article
Session ID: 3DIT2009-74/IDY2009-
Flat-panel multi-view 3D displays have been actively developed in recent years, but these displays have pseudoscopic regions that give viewers an unpleasant sensation. We propose a new method of eliminating the pseudoscopic regions by exploiting the visual characteristics of binocular rivalry. We confirm the validity of the method experimentally and show that the displeasure in pseudoscopic regions can be reduced effectively.
-
Takuya INOUE, Yoshiyuki KOBAYASHI, Kei SADAKUNI, Hirotsugu YAMAMOTO, S ...
Article type: Article
Session ID: 3DIT2009-75/IDY2009-
We evaluate the change in perceived depth caused by blurring the dominant-eye image on a stereoscopic display. Depth change in binocular stereoscopic imaging is evaluated, with perceived depth measured using the method of adjustment. We find that the perceived depth of a 3-D image decreases continuously as the blur applied to the dominant-eye image increases.
-
Tetsuya Kawamoto, Kanzou Nakamura, Kenji Mase, Shogo Tokai
Article type: Article
Session ID: 3DIT2009-76/IDY2009-
Chukyo TV has been working on broadcast applications of multi-camera video, but processing time and cost are major practical problems. The accuracy of camera placement suffers when mobility is given priority, so we investigate a method of correcting the footage by image processing after shooting. The simple correction software we developed also proved effective for correcting two-eye stereo images. This paper reports on the background of this development and on remaining issues.
-
Masato MIURA, Jun ARAI, Junichi YAMAZAKI, Hisayuki SASAKI, Makoto OKUI ...
Article type: Article
Session ID: 3DIT2009-77/IDY2009-
We present a method of converting moving pictures captured by a single high-definition television camera mounted on the Japanese lunar orbiter Kaguya (SELENE) into stereoscopic images. Because objects in moving pictures from the orbiter appear to move vertically, the time offset of the sequence produces vertical parallax. This vertical parallax is converted into horizontal parallax by rotating the images by 90 degrees. We derived the convergence points of the eyes geometrically and numerically, and confirmed that binocular parallax caused by the time offset can be dominant for the lunar surface at distances of several hundred kilometers.
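The rotation step can be sketched directly: rotating two time-offset frames by 90 degrees turns the orbiter's vertical image motion into a horizontal offset between the resulting left and right images. The frame contents and function name below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def stereo_pair_from_sequence(frame_t, frame_t_dt):
    # Rotating both time-offset frames by 90 degrees converts vertical
    # image motion into horizontal disparity between the pair.
    return np.rot90(frame_t), np.rot90(frame_t_dt)

# Illustrative frames: a surface feature that drifts down by one pixel
# between two time-offset frames (sizes and positions are hypothetical).
a = np.zeros((4, 4)); a[1, 2] = 1.0
b = np.zeros((4, 4)); b[2, 2] = 1.0
left, right = stereo_pair_from_sequence(a, b)
(ra, ca), (rb, cb) = np.argwhere(left)[0], np.argwhere(right)[0]
# After rotation the offset is purely horizontal: same row, columns differ.
```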
-
Article type: Appendix
Pages
App1-
-
Article type: Appendix
Pages
App2-
-
Article type: Appendix
Pages
App3-