ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Volume 34, Issue 32
20 articles in this issue
  • Article type: Cover
    Pages Cover1-
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (14K)
  • Article type: Index
    Pages Toc1-
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (73K)
  • Kengo ANDO, Norishige FUKUSHIMA, Tomohiro YENDO, Mehrdad PANAHPOUR TEH ...
    Article type: Article
    Session ID: IE2010-49
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a free viewpoint image generation method using multi-resolution cameras. Recently, high-resolution output has come to be required in free viewpoint image generation. One way to meet this requirement is to use an array consisting entirely of high-resolution cameras; however, this increases the camera cost. We therefore propose a method that uses high- and low-resolution cameras together to synthesize high-resolution images at virtual viewpoints.
    Download PDF (1090K)
  • Ippeita IZAWA, Shun NONOSHITA, Kazuya KODAMA, Takayuki HAMAMOTO
    Article type: Article
    Session ID: IE2010-50
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We previously proposed a method of generating free viewpoint images directly from multi-focus imaging sequences without any depth estimation. It is very effective for the method to be implemented in hardware such as an FPGA. However, the number of BlockRAMs on our FPGA limits the image size to 64×64 pixels. In this paper, we extend our FPGA-based free viewpoint image reconstruction system by using an on-board DDR SDRAM and repeatedly processing divided blocks of 64×64 pixels. The system realizes our proposed method even for larger image sizes without significant drawbacks. Experimental results using synthetic images are shown.
    Download PDF (1080K)
  • Li TIAN, Akira SUZUKI, Masashi MORIMOTO, Hideki KOIKE
    Article type: Article
    Session ID: ME2010-108/CE2010-31
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this work, we present a new global feature extraction method that uses the composition and color information of images for web-scale similar image retrieval. The proposed feature is robust to transformations such as rotation, and it is compressed to a low dimensionality using Principal Component Analysis (PCA) for more efficient retrieval. Our experiments show that it is more robust and efficient than GIST, a state-of-the-art global descriptor commonly used in similar image retrieval. (A sketch of the PCA compression step appears after this entry.)
    Download PDF (957K)
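    The PCA step above is standard dimensionality reduction. Below is a minimal sketch of fitting PCA on a descriptor database via SVD and compressing one descriptor; the 960-D input and 64-D output sizes are illustrative assumptions, not the paper's values.

```python
# Minimal PCA-compression sketch. The descriptor layout and the
# 960-D / 64-D sizes are illustrative assumptions, not the paper's values.
import numpy as np

def fit_pca(features, n_components=64):
    """Fit PCA on a (num_images, dim) feature matrix via SVD."""
    mean = features.mean(axis=0)
    # Rows of vt are principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:n_components]

def compress(feature, mean, components):
    """Project one descriptor onto the leading principal axes."""
    return components @ (feature - mean)

rng = np.random.default_rng(0)
database = rng.random((10_000, 960))          # stand-in global descriptors
mean, comps = fit_pca(database)
compact = compress(database[0], mean, comps)  # shape (64,)
```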
  • Tomohiro HARAIKAWA
    Article type: Article
    Session ID: ME2010-109/CE2010-32
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper reports on the design and implementation of an infrared modulator/demodulator for remote appliance control. The modem dynamically modulates and demodulates almost all formats used by Japanese appliance manufacturers.
    Download PDF (851K)
  • Takeshi KOBATAKE, Ichiro MATSUDA, Hisashi AOMORI, Susumu ITOH
    Article type: Article
    Session ID: IE2010-51
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In recent years, there have been many studies on Augmented Reality (AR), which overlays visual information, such as CG images, on the real scene. As a tool for AR technology, this paper proposes an image presentation system in which the visual information is projected from a head-mounted projector onto a portable cubic screen. The cubic screen, equipped with several infrared LEDs as invisible markers, is captured by a head-mounted camera, and its 3D position and orientation are estimated using a particle filter based algorithm. According to the estimation result, a CG image corresponding to the user's viewpoint is computed and projected onto the screen. This process gives the user the feeling of looking at a virtual object inside the cubic screen. (A generic particle-filter sketch appears after this entry.)
    Download PDF (801K)
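    The tracker above is particle-filter based. The following is a generic predict/weight/resample cycle only; the paper's state parameterization, motion model, and LED-marker likelihood are not given here, so `observe_likelihood` and the 6-DoF layout are hypothetical placeholders.

```python
# Generic particle-filter cycle. The 6-DoF state layout, the random-walk
# motion model, and 'observe_likelihood' are illustrative assumptions.
import numpy as np

def pf_step(particles, weights, motion_noise, observe_likelihood, rng):
    """One predict / weight / resample cycle over pose hypotheses."""
    n = len(particles)
    # Predict: diffuse each hypothesis with the motion model.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Update: weight hypotheses by how well they explain the observed markers.
    weights = weights * observe_likelihood(particles)
    weights = weights / weights.sum()
    # Resample: draw a fresh, equally weighted set from the posterior.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
parts = rng.normal(size=(500, 6))             # x, y, z, roll, pitch, yaw
w = np.full(500, 1.0 / 500)
lik = lambda p: np.exp(-np.sum(p[:, :3] ** 2, axis=1))  # toy likelihood
parts, w = pf_step(parts, w, 0.05, lik, rng)
```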
  • Yasutami Chigusa, Ryotaro Okabe, Taizoh Hattori
    Article type: Article
    Session ID: IE2010-52
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we present a combination of two well-known techniques: video synthesis and flocking behavior based on the Boids algorithm. We introduce a new formulation of real-video-based CG animation. Our system can be used in the production of games and special effects for movies. We have developed a system that makes it possible to manipulate objects in the frames of a video while maintaining their natural appearance and complexity, and that allows us to multiply an object in a frame or control the pattern of its movement. The system accepts a video in AVI format as input and automatically renders another with new motion patterns derived from the original. Video synthesis and flocking behavior are well-known independent techniques, but their combination had not been researched before. In this paper, we also present speed-ups and improvements to the quality of the resulting movies; we ultimately achieved a 10x speed-up. (A sketch of the classic Boids update appears after this entry.)
    Download PDF (1130K)
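    Reynolds' Boids model that drives the flocking consists of three local steering rules. The sketch below is a straightforward, unoptimized implementation; the weights, neighbourhood radius, and time step are illustrative values, not the paper's.

```python
# Classic Boids update: separation, alignment, cohesion. Weights, the
# neighbourhood radius, and the time step are illustrative values.
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=2.0,
               w_sep=1.5, w_ali=1.0, w_coh=1.0):
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        # Separation: steer away from neighbours, harder when closer.
        acc[i] -= w_sep * (d[near] / dist[near, None] ** 2).sum(axis=0)
        # Alignment: match the neighbours' average velocity.
        acc[i] += w_ali * (vel[near].mean(axis=0) - vel[i])
        # Cohesion: steer toward the neighbours' centre of mass.
        acc[i] += w_coh * (pos[near].mean(axis=0) - pos[i])
    vel = vel + dt * acc
    return pos + dt * vel, vel
```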
  • Naoyuki AWANO, Koji NISHIO, Ken-ichi KOBORI
    Article type: Article
    Session ID: ME2010-110/CE2010-33
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In recent years, 3D scanners have been used in a wide range of fields because 3D shapes can be created easily with them. However, point clouds created with a 3D scanner contain many defects due to occlusion or the reflectance of the models. We propose a method of detecting defects for the evaluation of point clouds. First, the input point clouds are projected onto several 2D textures. Next, we detect defect candidates in the 2D textures by applying edge detection. Finally, we map these results back onto the input point clouds to extract the defects. (A sketch of the projection and edge-detection steps appears after this entry.)
    Download PDF (1502K)
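    A minimal sketch of the project-then-edge-detect idea, assuming a single orthographic projection along z and a simple gradient-magnitude edge detector; the paper's projection directions, texture resolution, and edge detector are not specified here.

```python
# Project a point cloud to one 2-D depth texture, then flag defect
# candidates. The z-axis projection, resolution, and gradient threshold
# are illustrative assumptions.
import numpy as np

def project_depth(points, res=256):
    """Orthographic projection of (n, 3) points onto a res x res texture."""
    xy, z = points[:, :2], points[:, 2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    ij = ((xy - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    depth = np.zeros((res, res))
    hit = np.zeros((res, res), dtype=bool)
    depth[ij[:, 1], ij[:, 0]] = z          # last point per pixel wins
    hit[ij[:, 1], ij[:, 0]] = True
    return depth, hit

def defect_mask(depth, hit, grad_thresh=0.1):
    """Strong depth discontinuities plus unfilled pixels mark defects."""
    gy, gx = np.gradient(depth)
    return (np.hypot(gx, gy) > grad_thresh) | ~hit
```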
  • Akihiro YOSHINARI, Yuichi TANAKA, Madoka HASEGAWA, Shigeo KATO
    Article type: Article
    Session ID: IE2010-53
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, frame interpolation methods that use motion compensation have received attention. However, distortions occur at block boundaries because the motion vectors are estimated in a block-by-block manner; consequently, the image quality of the interpolated frame is degraded. In this report, we propose an interpolation method using motion estimation that reduces block noise. To improve the image quality, motion compensation blocks are further divided, a new motion vector is calculated for each sub-block based on reliability criteria, motion vectors are re-estimated using phase-shifted blocks, and frames are interpolated based on these new motion vectors. (A basic motion-compensated interpolation sketch appears after this entry.)
    Download PDF (1485K)
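    The baseline being improved is motion-compensated frame interpolation. The sketch below shows only the basic mechanism, full-search block matching followed by midpoint averaging; the paper's sub-block subdivision, reliability criteria, and phase-shifted re-estimation are deliberately omitted.

```python
# Basic motion-compensated interpolation between two grayscale frames.
# Block size and search range are illustrative; the paper's sub-block
# refinement and reliability tests are not reproduced here.
import numpy as np

def block_mv(prev, nxt, y, x, b=16, search=8):
    """Full-search block matching: best (dy, dx) for the block at (y, x)."""
    ref = prev[y:y + b, x:x + b].astype(float)
    best, mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + b > nxt.shape[0] or xx + b > nxt.shape[1]:
                continue
            sad = np.abs(ref - nxt[yy:yy + b, xx:xx + b]).sum()
            if sad < best:
                best, mv = sad, (dy, dx)
    return mv

def interpolate_midframe(prev, nxt, b=16):
    """Place the averaged block halfway along each motion vector."""
    out = np.zeros_like(prev, dtype=float)
    for y in range(0, prev.shape[0] - b + 1, b):
        for x in range(0, prev.shape[1] - b + 1, b):
            dy, dx = block_mv(prev, nxt, y, x, b)
            hy = min(max(y + dy // 2, 0), prev.shape[0] - b)
            hx = min(max(x + dx // 2, 0), prev.shape[1] - b)
            out[hy:hy + b, hx:hx + b] = 0.5 * (
                prev[y:y + b, x:x + b] + nxt[y + dy:y + dy + b, x + dx:x + dx + b])
    return out
```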
  • Yuya Yamazaki, Toshiyuki Yoshida
    Article type: Article
    Session ID: ME2010-111/CE2010-34
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    To obtain the highest possible quality in video transmission under a bit-rate constraint, the authors have proposed a coding technique that adaptively controls the number of frames per second and the amount of bits for a single frame, maximizing an estimated mean opinion score (MOS) in the spatio-temporal domain. However, this technique assumed SIF-size videos as its targets, and its application to high-definition (HD) videos had not been investigated. This paper therefore extends the technique to HD videos and realizes the encoding scheme on an H.264 encoder. Experimental results for several test images are given to show the effectiveness of our approach. (A hypothetical sketch of the rate/MOS trade-off appears after this entry.)
    Download PDF (742K)
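    The control idea can be summarized as a constrained search: under a fixed bit-rate budget, a higher frame rate leaves fewer bits per frame, so the encoder picks the pair with the highest estimated MOS. The sketch below is hypothetical; `toy_mos` merely stands in for the authors' MOS model, which is not given here.

```python
# Hypothetical rate-control sketch: choose the frame rate (and hence the
# bits per frame) that maximizes an estimated MOS. 'toy_mos' is a
# stand-in for the authors' MOS model.
import math

def toy_mos(fps, bits_per_frame):
    """Toy model: spatial term grows with bits/frame, temporal with fps."""
    spatial = 1.0 - math.exp(-bits_per_frame / 2e5)
    temporal = 1.0 - math.exp(-fps / 12.0)
    return 1.0 + 4.0 * spatial * temporal      # MOS on a 1-5 scale

def best_operating_point(rate_bps, frame_rates=(10, 15, 30, 60)):
    candidates = ((toy_mos(fps, rate_bps / fps), fps) for fps in frame_rates)
    mos, fps = max(candidates)
    return fps, rate_bps / fps, mos

fps, bpf, mos = best_operating_point(4_000_000)   # 4 Mbps budget
```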
  • Kyota Aoki, Hideyuki Uchiumi, Masaki Kimura
    Article type: Article
    Session ID: IE2010-54
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    There are barrier detection systems for automobiles, but we cannot use the same systems for pedestrians because of limitations on camera placement, size, weight, and power consumption. We have therefore developed a barrier detection system for pedestrians. This paper proposes a single-camera barrier detection system for pedestrians and discusses its implementation and experiments in real environments. The proposed system is an extension of a classical single-camera barrier detection system for cars. Our experiments show the performance of, and the remaining problems in, barrier detection using images.
    Download PDF (1248K)
  • Shinjiro MURAYAMA, Kyota AOKI, Hideyuki UCHIUMI, Masaki KIMURA
    Article type: Article
    Session ID: IE2010-55
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Using the SURF technique, we examined the considerations necessary for estimating a position by comparing landmark images with a scene image. The scene image and the landmark images are taken with different devices: the scene image is photographed with the navigation system's camera, while the landmark images are photographed with another camera or obtained from the Web. GPS has several shortcomings: it cannot estimate heading while the user is standing still; multipath propagation can cause large positioning errors; and when the receiver moves a long distance after a power interruption, re-acquiring a position fix takes a long time. These problems are especially serious for visually impaired pedestrians. We evaluated techniques for improving the precision, and the capability in real environments, of a system that estimates direction and position by comparing images with SURF. (A feature-matching sketch appears after this entry.)
    Download PDF (1664K)
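    A minimal sketch of the feature-matching step. The paper uses SURF; in OpenCV, SURF lives in the opencv-contrib `cv2.xfeatures2d` module and is patent-encumbered, so this sketch substitutes ORB, which follows the same detect-describe-match workflow. The file paths are placeholders.

```python
# Feature matching between a scene image and a landmark image.
# ORB is substituted for SURF (same workflow); paths are placeholders.
import cv2

def match_landmark(scene_path, landmark_path, min_matches=10):
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    landmark = cv2.imread(landmark_path, cv2.IMREAD_GRAYSCALE)
    detector = cv2.ORB_create(nfeatures=1000)
    k1, d1 = detector.detectAndCompute(scene, None)
    k2, d2 = detector.detectAndCompute(landmark, None)
    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    return matches if len(matches) >= min_matches else []
```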
  • Seishi TAKAMURA, Masaaki MATSUMURA, Hirohisa JOZAWA
    Article type: Article
    Session ID: IE2010-56
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Evolutive methods based on genetic programming (GP) enable dynamic algorithm generation and have been successfully applied to many areas such as plant control, robot control, and stock market prediction. However, conventional image/video coding methods such as JPEG and H.264/AVC all use fixed (non-dynamic) algorithms without exception. Evolutive coding enables the automatic generation of pixel prediction algorithms; it is a radical departure from conventional fixed, man-made, hand-programmed algorithms toward a new paradigm. In this report, we introduce a GP-based image predictor that is evolved specifically for each input image and improves day by day, and we demonstrate its prediction performance. We also report on speeding up the evolution process by a factor of 100-1,500 through parallelization, as well as improving the prediction efficiency by about 2%. (A toy predictor-selection sketch appears after this entry.)
    Download PDF (840K)
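    Real GP evolves whole expression trees; as a much-reduced illustration of "pick the pixel predictor that fits this particular image best", the toy below scores a few hand-written causal predictors by total absolute error. Everything here is an illustrative assumption, not the authors' evolved predictors.

```python
# Toy stand-in for per-image predictor selection. Real GP evolves
# expression trees; here we only score fixed causal predictors.
import numpy as np

CANDIDATES = {
    "west":  lambda p, y, x: int(p[y, x - 1]),
    "north": lambda p, y, x: int(p[y - 1, x]),
    "mean":  lambda p, y, x: (int(p[y, x - 1]) + int(p[y - 1, x])) // 2,
    "grad":  lambda p, y, x: int(p[y, x - 1]) + int(p[y - 1, x]) - int(p[y - 1, x - 1]),
}

def best_predictor(img):
    """Return the candidate with the lowest total absolute error."""
    def cost(fn):
        return sum(abs(int(img[y, x]) - fn(img, y, x))
                   for y in range(1, img.shape[0])
                   for x in range(1, img.shape[1]))
    return min(CANDIDATES, key=lambda name: cost(CANDIDATES[name]))
```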
  • Kyohei UNNO, Hisashi AOMORI, Ichiro MATSUDA, Susumu ITOH
    Article type: Article
    Session ID: IE2010-57
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The H.264/AVC video coding standard alternately switches between motion-compensated prediction and intra-frame prediction on a block-by-block basis to achieve high coding performance. However, it does not allow joint use of spatial and temporal prediction within the same block. This paper describes a block-adaptive spatio-temporal prediction method that can exploit spatial and temporal correlations of video signals at the same time. In this method, a predicted value at each pel is generated by a linear 3D predictor that uses a causal neighborhood in both the current frame and the motion-compensated previous frame. When the causal neighborhood falls within the block to be predicted, previously predicted values are used recursively instead of the reconstructed ones. To minimize the sum of squared prediction errors, a set of 3D predictors is iteratively optimized using the quasi-Newton method. Simulation results indicate that, within the framework of the proposed method, joint use of spatio-temporal prediction attains higher SNR than exclusive use of spatial or temporal prediction. (A sketch of the least-squares fitting step appears after this entry.)
    Download PDF (1654K)
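    The optimization itself is a least-squares fit solved with a quasi-Newton method, as the abstract states. A minimal sketch, assuming the causal-neighbourhood samples from the current and motion-compensated frames have already been gathered into a design matrix; the recursive use of predicted values inside the block is omitted.

```python
# Fit one linear 3-D predictor by minimizing the sum of squared errors
# with BFGS (a quasi-Newton method). Building 'neighbours' from causal
# pels of the current and motion-compensated frames is assumed done.
import numpy as np
from scipy.optimize import minimize

def fit_predictor(neighbours, targets):
    """neighbours: (n_pels, k) causal samples; targets: (n_pels,) true pels."""
    def sse(w):
        return ((neighbours @ w - targets) ** 2).sum()
    w0 = np.zeros(neighbours.shape[1])
    return minimize(sse, w0, method="BFGS").x
```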
  • Keigo MUTO, Masaru TAKEUCHI, Jiro KATTO, Shinichi SAKAIDA, Kazuhisa IG ...
    Article type: Article
    Session ID: IE2010-58
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a no-reference PSNR/SSIM estimation method for compressed image sequences. The proposed method transforms the SSIM formula into a form without reference signals (i.e., original images), which depends on two parameters estimated from the bitstream: the quantization error variance, which already contributes to existing no-reference PSNR estimation, and the signal energy reduction caused by quantization, which is unique to SSIM estimation. Experiments using actual images are carried out, and possible improvements are discussed. (The full-reference SSIM formula that serves as the starting point is sketched after this entry.)
    Download PDF (938K)
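    For reference, the standard full-reference SSIM that the method starts from is sketched below, with the usual constants for 8-bit images; the paper's contribution is rewriting this so the original block x is replaced by the two bitstream-estimated parameters.

```python
# Standard full-reference SSIM of two 8-bit image blocks. The paper
# removes the dependence on the original block x by estimating the
# quantization error variance and energy reduction from the bitstream.
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```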
  • Tomoyuki WAKABAYASHI, Yuichi TANAKA, Madoka HASEGAWA, Shigeo KATO
    Article type: Article
    Session ID: IE2010-59
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    To handle and manage digital video data easily, it is desirable to embed attribute information, such as copyright notices or IDs, into the video itself. In this report, the intra-frame prediction mode of H.264/AVC is controlled to embed the attribute data. The file size of compressed video encoded with our technique is the same as that of the original H.264/AVC. Simulation results show that the proposed method can embed and extract binary data. (A hypothetical mode-parity embedding sketch appears after this entry.)
    Download PDF (1391K)
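    One common way to realize mode-controlled embedding is to restrict the encoder's intra-mode choice so that the chosen mode's index parity carries one data bit. The sketch below is a hypothetical illustration of that idea, not the authors' exact mapping; `mode_costs` is an assumed rate-distortion cost table.

```python
# Hypothetical parity-based embedding: the encoder picks the cheapest
# intra mode whose index parity equals the data bit; the decoder reads
# the bit back from the mode index. Not the authors' exact mapping.
def embed_bit(mode_costs, bit):
    """mode_costs: {mode_index: RD cost}. Returns the mode to signal."""
    return min((m for m in mode_costs if m % 2 == bit),
               key=lambda m: mode_costs[m])

def extract_bit(chosen_mode):
    return chosen_mode % 2

modes = {0: 101.0, 1: 98.5, 2: 120.3, 3: 99.1}   # toy RD costs
assert extract_bit(embed_bit(modes, 0)) == 0
```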
  • Article type: Appendix
    Pages App1-
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (86K)
  • Article type: Appendix
    Pages App2-
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (86K)
  • Article type: Appendix
    Pages App3-
    Published: July 26, 2010
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (86K)