A newly designed time-division multiplexing (TDM) frame format based on ITU-T J.183 is proposed. High-speed transmission for UHDTV (8K) cable broadcasting was achieved by adopting MPEG-2 TS-based channel bonding technology. A 181.2 Mbps signal, transmitted over one 64 QAM channel and four 256 QAM channels, was received error-free at our prototype set-top box through a cable television operator's existing facilities.
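The core idea of channel bonding, distributing a single high-rate MPEG-2 TS across several RF channels and restoring packet order at the receiver, can be sketched as follows. This is a hypothetical simplification for illustration only; the actual J.183-based frame format carries TDM framing and header fields that this round-robin sketch omits.

```python
# Sketch: MPEG-2 TS channel bonding by round-robin packet distribution.
# Hypothetical simplification; the real frame format adds TDM framing
# and header information beyond the bare sequence numbers used here.

TS_PACKET_SIZE = 188  # bytes per MPEG-2 TS packet

def bond(ts_stream: bytes, num_channels: int):
    """Split a TS stream into per-channel packet lists (round-robin)."""
    packets = [ts_stream[i:i + TS_PACKET_SIZE]
               for i in range(0, len(ts_stream), TS_PACKET_SIZE)]
    channels = [[] for _ in range(num_channels)]
    for seq, pkt in enumerate(packets):
        # A sequence number would normally travel in the frame header
        # so the receiver can restore the original packet order.
        channels[seq % num_channels].append((seq, pkt))
    return channels

def unbond(channels):
    """Receiver side: merge per-channel packets back into one stream."""
    merged = sorted((seq, pkt) for ch in channels for seq, pkt in ch)
    return b"".join(pkt for _, pkt in merged)
```

In a real system the per-channel rates differ (one 64 QAM channel carries less than each 256 QAM channel), so the distribution would be weighted rather than strictly round-robin.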
Conventional three-dimensional (3D) displays have a problem regarding human 3D perception. There is a conflict between vergence and accommodation, which are physiological factors of human 3D perception, and this conflict causes visual fatigue. Super multi-view (SMV) displays have been developed to solve this problem. In SMV displays, the interval between viewpoints is reduced so that it is smaller than the pupil diameter of the eye. Measurements of eye responses show that SMV displays have the effect of enlarging the depth of focus (DOF) of the eyes. When 3D images are displayed within the enlarged DOF range, the vergence-accommodation conflict does not occur. However, SMV displays must generate high-density viewpoints. This paper describes several SMV display systems: the focused light array system, the multi-projection system, the flat-panel system, the hybrid system, the time-multiplexing system, and the head-mounted system. Future developments are also discussed.
We improved the light efficiency and viewing-zone depth of a high-resolution floating autostereoscopic 3D display based on iris-plane dividing technology. A newly developed light diffusion film and optical system send almost all of the image light to the viewer, realizing high light efficiency and uniformity. In addition, stacked liquid crystal optical shutters control the viewing zone, realizing 3D images without crosstalk noise in real space over a wide viewing-zone depth. Our floating 3D display system allows glasses-free viewing and provides high-resolution 3D images with no crosstalk between the left and right viewing zones. This 3D display system is therefore promising for future high-presence interactive communication systems.
Free Viewpoint Image (FVI) is one of the most popular image formats for next-generation multimedia, and ray space is an effective and efficient method to generate FVI. However, in the traditional method of ray space construction, all the images have to be captured in advance, so the data burden is quite heavy. In this paper, we propose adopting compressed sensing to sparsely sense and reconstruct a ray space. Thus, it is not necessary to capture all the images; only a small number of measurements are collected, and the ray space can be reconstructed by employing optimization tools with sparsity promotion. Different from previous applications of compressed sensing in image acquisition, such as computational photography focusing on integral images, our work is based on the Epipolar Plane Image (EPI). In our simulation, the ray space can be reconstructed successfully, and the results also illustrate the reconstruction performance for different numbers of measurements and different desired sparsities of the EPI. Furthermore, since the EPI exhibits unique structures, a dictionary that can represent this structure is also developed to replace the common orthonormal basis in the compressed sensing procedure. Experimental results show that a sparser representation of the EPI can be achieved and better reconstruction can be obtained using the newly developed dictionary. Finally, subjective testing results are also presented, and FVI can be obtained from the reconstructed ray space.
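As a toy illustration of sparsity-promoting recovery, the sketch below reconstructs a sparse coefficient vector from a low-dimensional measurement using greedy matching pursuit over a small overcomplete dictionary. The paper's actual method operates on EPIs with optimization-based reconstruction and a structure-adapted dictionary; the tiny dictionary and function names here are illustrative assumptions, not the authors' implementation.

```python
# Sketch: sparse recovery by greedy matching pursuit.
# A signal that is k-sparse in an overcomplete dictionary is recovered
# from a measurement vector by repeatedly selecting the atom most
# correlated with the residual. Illustrative only; the paper uses
# sparsity-promoting optimization over EPIs.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(y, atoms, k):
    """Greedily pick k atoms to approximate measurement y."""
    residual = list(y)
    coeffs = [0.0] * len(atoms)
    for _ in range(k):
        # Atom whose direction best explains the current residual.
        j = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[j]) / dot(atoms[j], atoms[j])
        coeffs[j] += c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs

# Hypothetical overcomplete dictionary: 4 atoms in a 3-D measurement space.
s = 1 / math.sqrt(3)
atoms = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [s, s, s]]
```

A measurement such as `y = [3, -2, 0]`, which is 2-sparse in this dictionary, is recovered exactly in two iterations even though the dictionary has more atoms than the measurement has dimensions.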
In this paper, we propose a 3D audio system using multiple vertical panning (MVP), which matches audio to a large-screen multi-view 3D video display system. The vertical position of a sound image is synthesized by panning between two loudspeakers placed above and below the screen, and its horizontal position is controlled by the horizontal placement of the loudspeaker pair. Using the proposed system, multiple viewers can simultaneously perceive sound images at the positions of 3D objects. A listening test examined whether viewers can perceive the synthesized sound images at positions between the two loudspeakers. The results of an audio-visual experiment show that the proposed 3D audio system was more effective than a conventional system because viewers could consistently perceive the synthesized sound images at the position of the 3D video object.
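The vertical panning step can be sketched with a standard constant-power panning law between the lower and upper loudspeakers. The specific panning law used in the MVP system is not stated here, so the sine/cosine law and the normalized `height` parameter below are assumptions for illustration.

```python
# Sketch: constant-power vertical amplitude panning between a lower and
# an upper loudspeaker. The sine/cosine law is an assumption; the MVP
# system may use a different panning law.
import math

def panning_gains(height, bottom=0.0, top=1.0):
    """Gains (g_lower, g_upper) placing a sound image at `height`.

    `height` is the desired vertical position of the sound image,
    normalized so that `bottom` maps to the lower loudspeaker and
    `top` to the upper one.
    """
    t = (height - bottom) / (top - bottom)
    t = min(max(t, 0.0), 1.0)  # clamp to the inter-loudspeaker range
    g_lower = math.cos(t * math.pi / 2)
    g_upper = math.sin(t * math.pi / 2)
    return g_lower, g_upper
```

Constant-power panning keeps `g_lower**2 + g_upper**2 == 1` at every position, so perceived loudness stays roughly constant as the image moves vertically.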
To develop advanced multi-modal displays, it is important that various types of sensory information be presented in proper amounts. Previous studies have pointed out that vibration information enhances participants' immersive experience of virtual displays. Therefore, vibration information should be considered when developing advanced multimedia systems. We have focused on the effect of full-body vibration information on perceived reality. This study examined the relation between full-body vibration amplitude and the reality perceived from audio-visual content. The sense of presence, the sense of verisimilitude, and the sense of ultra-reality were used as indicators of perceived reality. Results revealed that perceived reality increased when full-body vibration was added. Moreover, the senses of presence and ultra-reality increased monotonically with the full-body vibration amplitude, while the sense of verisimilitude exhibited a saturating curvilinear tendency. These results suggest that body vibration is important for increasing the perceived reality of multi-modal content presented by advanced multi-modal displays.
We propose a simple and accurate method for measuring the twist elastic constant K22 of liquid crystal materials. The novel technique is based on ellipsometry analysis and determines the director profile of a twisted nematic liquid crystal cell in the on state. We validated the method experimentally and showed that it is effective for evaluating the response characteristics of liquid crystal displays.
While HMAX features have been proven to deliver excellent performance in image categorization tasks, their computational cost at recognition time is high. If HMAX features are applied to object localization tasks, in which categorization is performed repeatedly over sliding windows, the processing time increases enormously. In this paper, we propose a method to speed up the computation of object localization based on HMAX features. We found that HMAX features cause specific redundancies in the sliding-window approach, and our method achieves the speed-up by eliminating these redundancies. Results from experiments using the University of Illinois at Urbana-Champaign (UIUC) car dataset and the face detection data set and benchmark (FDDB) indicate that this modification improves processing speed significantly with only negligible reductions in precision.
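The kind of redundancy elimination described above can be illustrated with a simple 1-D analogue: instead of re-filtering the overlapping data inside every sliding window, the filter responses are computed once over the whole signal, and each window merely reads its slice of the shared response map. This is a hypothetical simplification for illustration; the actual method operates on the 2-D HMAX feature hierarchy.

```python
# Sketch: sharing filter responses across overlapping sliding windows.
# Overlapping windows recompute the same responses in the naive scheme;
# computing the response map once removes that redundancy while giving
# identical results.

def filter_response(signal, kernel):
    """Valid (no-padding) correlation of `signal` with `kernel`."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def windows_naive(signal, kernel, win, step):
    """Re-filter inside every window (redundant work on overlaps)."""
    return [filter_response(signal[s:s + win], kernel)
            for s in range(0, len(signal) - win + 1, step)]

def windows_shared(signal, kernel, win, step):
    """Filter once, then slice the shared response map per window."""
    full = filter_response(signal, kernel)
    w = win - len(kernel) + 1  # responses per window
    return [full[s:s + w] for s in range(0, len(signal) - win + 1, step)]
```

Both functions return identical per-window responses, but the shared variant filters each sample position only once, so its cost no longer grows with the window overlap.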
We have developed a technology that enables TV-program-like CG animation to be generated automatically from a text-based script. We call this technology "T2V (Text-To-Vision)" and have developed a PC application called the "T2V Player." The application allows users to create animations instantly by simply typing text, much as they would on a word processor. The T2V Player employs a unique framework that can accommodate any text-processing module to realize any type of text-to-animation conversion. The T2V Player is therefore not only an animation tool for writers creating animations from their texts, but also a platform for various text-to-vision services, such as automatic animation generation from Web sites and other text sources. This paper describes the concept of T2V, the technical specifications of the T2V Player, and its benefits.
This paper proposes a free-viewpoint interface for mobile-robot teleoperation that provides viewpoints freely configurable by the human operator's head pose. The head pose is acquired by a head tracker mounted on a head-mounted display. The interface employs a real-time free-viewpoint image generation method based on view-dependent geometry and texture to synthesize the scene presented to the operator. In addition, a computer graphics model of the robot is superimposed on the free-viewpoint images using an augmented reality technique. We developed a prototype system based on the proposed interface using an omnidirectional camera and depth cameras for experiments. Experiments in both virtual and physical environments demonstrated that the proposed interface improves the accuracy of robot operation compared with first- and third-person view interfaces, and that the quality of the free-viewpoint images generated by the prototype system was sufficient to demonstrate these potential advantages in operational accuracy.