ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Volume 25, Issue 85
Showing 1-13 articles out of 13 articles from the selected issue
  • Type: Cover
    Pages Cover1-
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (12K)
  • Type: Index
    Pages Toc1-
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (45K)
  • Takayuki ONISHI, Ryota TANIU, Jiro NAGANUMA, Makoto ENDO
    Type: Article
    Session ID: VIS2001-96
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We have developed an MPEG-2 encoding PC card for notebook PCs that can be applied to high-quality mobile MPEG-2 applications. In this paper, we introduce a bi-directional video conferencing experiment between Malaysia and Japan using our portable IP video transmission system with this PC card. The experiment shows that the system is feasible over long-distance connections at around 1.2 Mbps.
    Download PDF (856K)
  • Daewoo Kim, Hiroki Takahashi, Masayuki Nakajima
    Type: Article
    Session ID: VIR2001-97
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Korean text regions are extracted from scenery images using two kinds of features: contour and color. First, a contour image is obtained with the Canny edge detector, and candidate character regions are estimated from several edge-related conditions. Further candidate character regions are obtained by clustering the image in RGB color space. Finally, Korean text regions are extracted from each cluster image using the relative positions of the regions within it.
    Download PDF (722K)
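The color-clustering stage of the pipeline above can be sketched as a toy k-means over RGB pixels (the Canny edge step and the geometric filtering of candidate regions are omitted; the function and its deterministic initialization are illustrative, not taken from the paper):

```python
import numpy as np

def kmeans_rgb(image, k=2, iters=10):
    """Cluster the pixels of an HxWx3 RGB image in color space (toy k-means)."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    # deterministic init: pick k pixels evenly spaced through the image
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].copy()
    for _ in range(iters):
        # assign every pixel to its nearest color center (Euclidean in RGB)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean color of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

Each connected component within one cluster's label image would then become a candidate character region.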
  • Tatsuya Shimbo, Naoki Hashimoto, Hiroki Takahashi, Masayuki Nakajima
    Type: Article
    Session ID: VIS2001-98
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A collision detection algorithm for three-dimensional objects has been proposed. It first reduces the number of surfaces that may collide, and then detects collisions between the remaining surfaces. This surface collision detection takes most of the time in the whole collision detection process. This paper describes a parallel collision detection algorithm using a distributed-memory PC cluster, in which the surface collision detection in particular is parallelized: a master node sends surface pairs to all client nodes. Our experiments show the efficiency of the parallel processing.
    Download PDF (531K)
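The master/client division of labor described above can be sketched in miniature; here threads stand in for the cluster nodes, surfaces are reduced to axis-aligned bounding boxes, and all names are illustrative rather than taken from the paper:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def aabb_overlap(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)) axis-aligned boxes."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def parallel_collisions(boxes, workers=4):
    """The master enumerates surface pairs and farms chunks of them out to
    worker nodes (threads stand in for the PC-cluster clients here)."""
    pairs = list(combinations(range(len(boxes)), 2))
    chunk = max(1, len(pairs) // workers)
    tasks = [pairs[i:i + chunk] for i in range(0, len(pairs), chunk)]
    def check(task):
        # each "client" tests only its assigned surface pairs
        return [(i, j) for i, j in task if aabb_overlap(boxes[i], boxes[j])]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(check, tasks)
    return [hit for part in results for hit in part]
```

In the real system each client would run the exact surface-surface intersection test; the bounding-box test stands in for it to keep the sketch short.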
  • Kagenori KAJIHARA, Hiroki TAKAHASHI, Masayuki NAKAJIMA
    Type: Article
    Session ID: VIS2001-99
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper describes a rendering method that simultaneously renders volumetric and polygonal models. It can also render volumetric models that interact with polygonal models under the same paradigm. A ray-volume buffer, which stores the color and transmittance of a ray at each voxel, is proposed. The ray-volume buffer treats volumetric objects as textures, so the proposed method can run on a conventional graphics pipeline by generating a ray-volume in advance. In contrast to the Marching Cubes algorithm, the proposed method can generate scenes with the feel of volumetric material using a smaller number of surfaces, because it represents a volume as volume-boundary surfaces with textures made from the whole set of voxels.
    Download PDF (1049K)
  • Romanos Piperakis, Hiroki Takahashi, Masayuki Nakajima
    Type: Article
    Session ID: VIS2001-100
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents an efficient and realistic motion planning strategy for navigating through a virtual reality world. The collision avoidance scheme employed is based on the vector field paradigm. By using depth information from the rendering pipeline of the agent's viewpoint as sensor data, we can accurately and efficiently approximate a force field around obstacles. This field exerts a repelling force on the agent that depends on the distance, shape, and size of the obstacles in its path. This field, in conjunction with an attracting force exerted by the goal position, enables us to calculate a collision-free path between any two arbitrary locations. Since our navigation strategy is based on local, viewpoint-specific sensor information, the algorithm is suitable for autonomous agents in either static or dynamic, unstructured environments. Furthermore, the algorithm is simple and fast, and could easily be extended to real-world robots equipped with depth sensors.
    Download PDF (788K)
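One step of the attract/repel vector-field scheme described above might look like the following minimal 2D sketch. The actual system derives obstacle distances from the rendering pipeline's depth buffer, which is not modeled here, and the gains and repulsion fall-off are illustrative assumptions:

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                   radius=2.0, step=0.1):
    """One gradient step of the vector-field planner: the goal attracts,
    and obstacles within `radius` repel with a force growing as 1/d^2."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            # repulsion magnitude, pointing away from the obstacle
            m = k_rep * (1.0 / d - 1.0 / radius) / d ** 2
            fx += m * dx / d
            fy += m * dy / d
    n = math.hypot(fx, fy) or 1.0
    # move a fixed step along the normalized net force
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)
```

Iterating this step from the start position traces a collision-free path toward the goal, provided the agent does not get trapped in a local minimum of the field (a known limitation of potential-field planners).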
  • Akira Hiraiwa
    Type: Article
    Session ID: VIS2001-101
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We propose an advanced tele-robotics service over wireless 4G-5G networks using keitai (mobile phones). The service changes the ordinary style of human communication. We built a mono-display terminal, a dual-display terminal, and a panoramic-display terminal, and also experimented with a new interactive communication style that uses real robots as avatars. We describe a bright future in which EMG and EEG signal recognition methods will enable human motion sensing.
    Download PDF (3505K)
  • Takekazu Kato, Takeshi Kurata, Katsuhiko Sakaue
    Type: Article
    Session ID: VIS2001-102
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we discuss a wearable assistant with a wearable active camera. This system, called VizWear-Active, can actively and robustly understand the wearer and his or her environment by controlling the camera according to image processing and motion sensors. We propose face tracking for VizWear-Active based on the ConDensation algorithm, which achieves real-time tracking through client-server distributed sampling. Furthermore, the tracking becomes stable against camera motion by using the motion sensors. We confirmed the face tracking experimentally by implementing it on a prototype VizWear-Active system.
    Download PDF (1108K)
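The core of the ConDensation (particle-filter) tracker mentioned above is a resample-diffuse-reweight loop. Below is a scalar-state sketch: the real tracker uses image-based face likelihoods and distributes the sampling across client and server, both of which are replaced here by illustrative stand-ins:

```python
import random

def condensation_step(particles, weights, measure, spread=1.0, rng=random):
    """One CONDENSATION iteration over scalar states: resample by weight,
    diffuse with process noise, then re-weight by the measurement model."""
    n = len(particles)
    # factored sampling: draw particles proportionally to their weights
    resampled = rng.choices(particles, weights=weights, k=n)
    # stochastic diffusion (the motion model is a random walk in this sketch)
    predicted = [p + rng.gauss(0.0, spread) for p in resampled]
    # measurement update: weight each prediction by observation likelihood
    new_w = [measure(p) for p in predicted]
    s = sum(new_w) or 1.0
    return predicted, [w / s for w in new_w]
```

The weighted mean of the particle set serves as the state estimate at each frame; in the paper's system the state would be the face position and the likelihood would come from the camera image.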
  • Takeshi Kurata, Masakatsu Kourogi, Takekazu Kato, Takashi Okuma, Katsu ...
    Type: Article
    Session ID: VIS2001-103
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we describe a novel user interface for wearable systems called "HandMouse" that enables us to use the wearer's hand as a pointing device by means of detecting and tracking the hand in live video sequences taken with a wearable camera. We propose a method to detect and track a hand robustly and in real-time by generating hand- and background-color models dynamically and by tracking the hand contour statistically. Then we briefly introduce two promising applications of the HandMouse interface: namely, a virtual universal remote control and OCR in real scene images.
    Download PDF (1043K)
  • Hideaki Maehara, Koji Wakimoto
    Type: Article
    Session ID: VIS2001-104
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Mobile phones' functions and performance, such as graphics and application interfaces, are continuously improving as mobile computing tools; mobile phones are no longer merely communication devices. Meanwhile, a recent survey of user needs shows that many users expect mobile phones to serve as pedestrian navigation devices. Based on these facts, we have designed a 3-dimensional (3D) computer-graphics pedestrian navigation system, "Mobile 3D Map", which uses 3D maps instead of conventional 2D maps. We show that Mobile 3D Map is superior to conventional 2D-map navigation on two points: intuitively understandable information display, and effective navigation based on easy direction identification.
    Download PDF (885K)
  • Masaji KATAGIRI, Toshiaki SUGIMURA
    Type: Article
    Session ID: VIS2001-105
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The authors propose a new personal authentication method especially suitable for mobile environments. Signatures are written in the air and captured by a video camera. Since video cameras are smaller than tablet devices, the method can help reduce terminal sizes. We built a prototype system and evaluated the mechanism in a near-ideal environment. In addition to fundamental feasibility studies, the effect of camera angle offset was evaluated. The results confirmed the method's feasibility and showed that it is promising. The experiments also revealed a unique property: signatures written in the air are hard to imitate.
    Download PDF (666K)
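The abstract does not say how captured air-signature trajectories are compared; a standard choice for matching such variable-speed point sequences (not necessarily the authors' method) is dynamic time warping:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 2D point sequences.
    Smaller values mean the trajectories are more alike, regardless of
    how fast each one was traced."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two matched points
            cost = ((a[i - 1][0] - b[j - 1][0]) ** 2 +
                    (a[i - 1][1] - b[j - 1][1]) ** 2) ** 0.5
            # best of match, insertion, deletion
            d[i][j] = cost + min(d[i - 1][j - 1], d[i - 1][j], d[i][j - 1])
    return d[n][m]
```

A genuine signature replayed at a different speed scores near zero against its template, while a different shape scores high; thresholding this distance gives a simple accept/reject rule.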
  • Type: Appendix
    Pages App1-
    Published: December 14, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (73K)