ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Vol. 36, No. 16
Displaying 1-50 of 64 articles from this issue
  • Article type: Cover
    Pages Cover1-
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (13K)
  • Article type: Index
    Pages Toc1-
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (207K)
  • Hiroshi AOKI, Jun MITANI, Yoshihiro KANAMORI, Yukio FUKUI
    Article type: Article
    Session ID: AIT2012-32
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Although a large number of studies have been made on 3D modeling in AR (Augmented Reality) space, the lack of tactile feedback when a user touches a model remains a problem, causing a discrepancy between the real world and the AR space. In this paper, to sidestep this problem, we propose an airbrush-metaphor interface for 3D modeling. The user can edit a 3D model without noticing the discrepancy, since a real airbrush provides no tactile feedback in the first place. The user constructs a model by spraying small virtual cubes into the AR space; the target area and the amount of sprayed cubes are adjusted by the distance from the airbrush-style device (a simple sketch of this distance-dependent spraying follows this entry). Coloring and erasing features are also implemented in our system. We show several models constructed with our system.
    Download PDF (999K)
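    The following Python sketch, not the authors' implementation, illustrates one plausible reading of the distance-dependent spraying described above: the spray radius widens with nozzle distance while the number of deposited voxels falls off. All names and parameters (spray_cubes, base_rate, cone_angle_deg) are hypothetical.

      import numpy as np

      def spray_cubes(grid, nozzle_pos, direction, distance, rng,
                      base_rate=200, cone_angle_deg=15.0):
          """Deposit unit cubes (voxels) in a cone in front of an airbrush nozzle.

          grid       : 3D boolean numpy array of occupied voxels
          nozzle_pos : (x, y, z) nozzle position in voxel coordinates
          direction  : unit vector the nozzle points along
          distance   : nozzle-to-surface distance; larger -> wider, sparser spray
          """
          # Spray radius grows linearly with distance (cone geometry).
          radius = distance * np.tan(np.radians(cone_angle_deg))
          # Deposited amount falls off with distance (spread over a larger area).
          n_cubes = int(base_rate / (1.0 + distance ** 2))
          target = np.asarray(nozzle_pos, dtype=float) + distance * np.asarray(direction, dtype=float)
          for _ in range(n_cubes):
              # Random offset inside the spray disc around the target point.
              offset = rng.normal(scale=radius / 2.0, size=3)
              voxel = np.round(target + offset).astype(int)
              if np.all(voxel >= 0) and np.all(voxel < grid.shape):
                  grid[tuple(voxel)] = True
          return grid

      # Example: spray toward +z from 5 voxels away.
      rng = np.random.default_rng(0)
      grid = np.zeros((64, 64, 64), dtype=bool)
      spray_cubes(grid, nozzle_pos=(32, 32, 10), direction=(0, 0, 1), distance=5.0, rng=rng)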
  • Shingo SAKAZAKI, Tomoya ITOH, Kouki ITOH, Tsukasa KIKUCHI
    Article type: Article
    Session ID: AIT2012-33
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we enable intuitive operation of the ARDrone using the Kinect. Furthermore, we propose recreational content that anyone can easily enjoy with this system: the operator uses the ARDrone and the Kinect to lift, with a hook, a title card placed on the ground, and must then perform the task written on the card.
    Download PDF (682K)
  • Taiki WATAI, Yoichi MATSUYAMA, Machiko KUSAHARA
    Article type: Article
    Session ID: AIT2012-34
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we introduce Capbot, a small robot built from a PET bottle cap. The system shows that people can come to identify with and feel affection for artificial objects when familiar everyday materials, which would otherwise be thrown away, are turned into robots. We also propose the possibility of an interactive application that fits naturally into everyday scenes without evoking the sense of strangeness often associated with robots.
    Download PDF (853K)
  • Yukua KOIDE, Haruka OGUMA, Takuya SAKAI, Akihiko SHIRAI
    Article type: Article
    Session ID: AIT2012-35
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This article shares new concepts and prototypes for the karaoke entertainment systems enjoyed widely around the world. Current karaoke systems have three big problems, which we address using the multiplex-hidden digital signage "ScritterH": (1) a hidden-multiplexed ordering terminal on the main screen, (2) hidden-multiplexed user comments drawn from network-shared content, and (3) a dance music game system harmonized with the karaoke music.
    Download PDF (447K)
  • Youhei YAMAMOTO, Jun MITANI, Yukio FUKUI, Yoshihiro KANAMORI
    Article type: Article
    Session ID: AIT2012-36
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Dissection puzzles are a kind of puzzle in which different figures are assembled from the same pieces by rearranging them. In this paper, we present a system for creating colored dissection puzzles. The input is a pair of figures with equal area. The figures are represented on a discrete square lattice and may have different colors, making them more expressive. The output is a set of pieces from which one can assemble, using all of them, either of the original figures. In addition, an algorithm to assist input is presented, allowing users to input polygons that are then converted to pixelated figures (a sketch of such a conversion follows this entry). As an option, part of the original polygon may be retained in the output figure, increasing the resemblance between output and input. Finally, some examples of puzzles generated by our system are shown.
    Download PDF (1046K)
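    As a minimal sketch of converting an input polygon into a pixelated figure on a square lattice (a generic rasterization, not the paper's algorithm), each cell is marked filled if its center lies inside the polygon; point_in_polygon and pixelate are hypothetical names.

      def point_in_polygon(x, y, poly):
          """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
          inside = False
          n = len(poly)
          for i in range(n):
              x1, y1 = poly[i]
              x2, y2 = poly[(i + 1) % n]
              if (y1 > y) != (y2 > y):
                  # x-coordinate where the edge crosses the horizontal line at y
                  x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                  if x < x_cross:
                      inside = not inside
          return inside

      def pixelate(poly, width, height):
          """Convert a polygon to a grid of filled cells (True = inside)."""
          return [[point_in_polygon(cx + 0.5, cy + 0.5, poly)
                   for cx in range(width)]
                  for cy in range(height)]

      # Example: a right triangle rasterized on an 8x8 lattice.
      cells = pixelate([(0, 0), (8, 0), (0, 8)], 8, 8)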
  • Kenji Asami, Nobuhiko Takada
    Article type: Article
    Session ID: AIT2012-37
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
  • Toshiki NAGATOMO, Kouki ITOH, Tsukasa KIKUCHI
    Article type: Article
    Session ID: AIT2012-38
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we studied presentation techniques for projection mapping, investigating what kinds of imagery and light to project onto the elements that make up an existing object (e.g., curved surfaces, planes, and ridgelines), and whether an expression that fuses reality and imagination is possible. We also produced an actual projection mapping work.
    Download PDF (507K)
  • Sumiaki Ono, Andre Alexis, Youngha Chang, Masayuki Nakajima
    Article type: Article
    Session ID: AIT2012-39
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Block toys such as LEGO stimulate people's creativity and are enjoyed across a broad range of generations. However, assembling complicated models requires many techniques and careful planning, so brick sets are often sold with assembly manuals created by professional modelers. In this work, we propose an automatic method to generate the assembly procedure of a block artwork from a 3D polygonal model.
    Download PDF (660K)
  • Satoshi NAKAJIMA, Mikio SHINYA, Michio SHIRAISHI, Hidetsugu KATSURAGAW ...
    Article type: Article
    Session ID: AIT2012-40
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Tai Chi Chuan is a popular Chinese martial art practiced all over the world. Its performance consists of complicated movements, and most beginners learn them from videotexts. However, existing videotexts are not ideal: some motion components can be hard to learn because of limited camera angles and occlusion. In this study, we created a CG videotext that allows users to select the camera angles and scene arrangements so that they can observe any motion components they want to see. The CG videotext was evaluated through user tests, which found its usability to be improved over existing videotexts.
    Download PDF (1222K)
  • Akihiro INOUE, Kouki ITOH, Tsukasa KIKUCHI
    Article type: Article
    Session ID: AIT2012-41
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we used social networking services (SNS) to examine the production of an indie movie and the possibilities of SNS as a promotion technique, and verified them in practice. As a result, the strengths and weaknesses of using SNS, along with points requiring attention, became clear.
    Download PDF (429K)
  • Masato SHODA, Kosuke KABATANI, Tokiichiro TAKAHASHI, Tomoaki MORIYA
    Article type: Article
    Session ID: AIT2012-42
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We have developed a system that visualizes the results of terraforming simulations of Mars. Terraforming aims to artificially change and improve a planet's environment so that people can live there as they do on Earth. Our system can visualize the results of Mars terraforming simulations for people who design new SF game plots but are not experts in terraforming. Real-time visualization can show them very realistic images of the changes on Mars caused by terraforming.
    Download PDF (1837K)
  • Takanobu MITANI, Issei FUJISHIRO
    Article type: Article
    Session ID: AIT2012-43
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    There exist two known methods for generating sky images without shooting or painting: one relies on image search, while the other makes use of three-dimensional computer graphics software. Both, however, are inefficient in that editing takes much time before a desirable sky image is obtained. This article presents a system called CosmicAI (COntent-based Search and Myriad Image Composition with Acquired Information for sky background generation), which allows users to start their editing work from sky images that have been registered by other users. This kind of information sharing and cooperative imaging enables us to quickly generate purposive sky images and to refine them effectively.
    Download PDF (1250K)
  • Daisuke Takamizawa, Kazunori Miyata
    Article type: Article
    Session ID: AIT2012-44
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Surface fur is an important element of the appearance of many fruits, yet few researchers have focused on it. Previous research realized fur using the L-system, a parallel rewriting system; while powerful and flexible, it is difficult to use. This paper proposes a new technique for expressing fruit fur.
    Download PDF (866K)
  • Tomoya OSUGI, Ryota OKANISHI, Hiroki YAMAMOTO, Tomoyuki TAKAMI
    Article type: Article
    Session ID: AIT2012-45
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We have developed a floor exertainment system with a SOKUIKI sensor. In the system, a projector creates a large screen image on the floor and a SOKUIKI sensor adds a touch function to it. Users can move fast and jump high on the game application screen created on the floor. We also used a heart rate monitor to adjust the exercise to match the user's fitness level or physical training.
    Download PDF (460K)
  • Hiroki YAMADA, Yoshihiro KANAMORI, Yukio FUKUI, Jun MITANI
    Article type: Article
    Session ID: AIT2012-46
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Relighting has been used in many image productions. Relighting requires object shape and reflectance information, which previous work obtained through manual retouching or special equipment. We therefore present a novel system for relighting using the Kinect, a relatively low-priced depth camera. We capture target objects and compute their shapes, then estimate reflectance from the illumination derived from an environment map taken at the same place. Finally, we relight the object under new illumination; the direction and intensity of the light can be adjusted interactively (a simplified Lambertian sketch of this idea follows this entry). This system enables the user to relight objects without complex operations or expensive equipment.
    Download PDF (679K)
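    The sketch below shows a simplified Lambertian relighting step under strong assumptions (a single directional light and known per-pixel normals); it only illustrates the general idea and is not the paper's estimation pipeline. All names are hypothetical.

      import numpy as np

      def relight_lambertian(image, normals, old_light, new_light, eps=1e-3):
          """Relight an image under a Lambertian assumption.

          image     : HxWx3 float array, observed colors under the old lighting
          normals   : HxWx3 unit surface normals (e.g., derived from a depth camera)
          old_light : direction of the original dominant light (unit vector)
          new_light : direction of the desired light (unit vector)
          """
          shading_old = np.clip(normals @ np.asarray(old_light, dtype=float), eps, None)
          shading_new = np.clip(normals @ np.asarray(new_light, dtype=float), 0.0, None)
          albedo = image / shading_old[..., None]          # divide out the old shading
          return np.clip(albedo * shading_new[..., None], 0.0, 1.0)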
  • Makoto KANAKUBO, Kouki ITOH, Tsukasa KIKUCHI
    Article type: Article
    Session ID: AIT2012-47
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose capturing motion data with the Kinect and blending in noise, so that motion patterns can be generated automatically and files (lines) and dances performed by a crowd can be suggested.
    Download PDF (441K)
  • Shoko KAYAMA, Yoshimasa TOKUYAMA, R. P. C Janaka, Kouichi KONNO
    Article type: Article
    Session ID: AIT2012-48
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Current modeling methods, which use two-dimensional coordinate input devices (e.g., mouse, pen tablet) to modify 3D shapes, lack intuitiveness and interactivity. In our research project, an intuitive and interactive shape modification system is constructed using a three-dimensional coordinate input device: a haptic display. By adding force feedback when modifying shapes in three dimensions, we intend to integrate the visual and haptic senses and thereby make the system more intuitive to control. In this research, we construct a bending and twisting deformation system for 3D shapes with two devices operated by both hands. We place great importance on the feedback of torque (the tendency of a force to rotate an object about an axis) in constructing the system.
    Download PDF (719K)
  • Rianti HIDAYAT, Kunio KONDO, Koji MIKAMI, Akinori ITO, Kengo WATANABE
    Article type: Article
    Session ID: AIT2012-49
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Character making using silhouettes has lately come into practical use in the entertainment industry. A silhouette contains many elements, such as shape, proportion, size, and pose, and is well suited to creating stylized characters. However, using the silhouette technique requires some training and experience. This research focuses on assisting ideation in the character design process by studying the relation between a character's silhouette shape and its role. First, we collected 100 characters from an animation box-office list, converted them into silhouettes, and analyzed their basic shapes. We found basic shape patterns and body-part ratios for different roles. We also plan to expand the analysis to find the relation between shape and personality. With these data, we will provide design guides and balance ratios for creating stylized characters, and help expand designers' range of designs.
    Download PDF (776K)
  • Nozomi TANAKA, Ryuta MOTEGI, Koji MIKAMI, Kunio KONDO
    Article type: Article
    Session ID: AIT2012-50
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Human characters portrayed in animation have different proportions compared to real-life humans. These proportion balances are important elements of an animation concept and become each animation's special characteristic. The common character-making process starts with the producer creating background documents about the content for discussion, after which the designer creates the character's appearance based on those text documents. It has proven hard, however, to realize the image in the producer's mind, especially in terms of body proportion. This communication gap becomes a problem and slows the whole production process. This research focuses on translating the producer's vision into visual documents for character making. We provide a design support system to help draft deformed human characters for animation production. The research aims to (1) find trends in character proportions, (2) apply them to a simple template in a 3D software system, and (3) use the template interactively in design discussions between producer and designer.
    Download PDF (962K)
  • Koyuki Hayakawa, Hiroki Takahashi
    Article type: Article
    Session ID: AIT2012-51
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper aims to swap face images with other persons' face images, based on temporally coherent facial expression transformations, in order to protect portrait rights and privacy. Facial feature points are extracted by an ASM (Active Shape Model) to obtain face parts, namely the face base, eyes, and mouth. The extracted parts are classified by AUs (Action Units) corresponding to "Happiness" and "Sadness." A slope and a distance are calculated from the facial feature points to search for the parts corresponding to the facial expression. The part positions are estimated by summing the errors at all feature points belonging to each part, and the parts are placed where the errors are minimized. As a result, face-swapped images based on temporally coherent facial expression transformations are generated.
    Download PDF (443K)
  • Yuta Imai, Yoshihiro Kanamori, Yukio Fukui, Jun Mitani
    Article type: Article
    Session ID: AIT2012-52
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a system for generating inbetween frames semi-automatically. Inbetween frames are generated by establishing stroke correspondences and interpolating between strokes (a minimal interpolation sketch follows this entry). If an object rotates between key frames, strokes without counterparts appear, and it is difficult to build correct correspondences between strokes automatically. We therefore have the user input an "Atari", consisting of a cross-hair and an ellipse, as a correspondence hint. To build the correspondences, we parameterize the position and shape of the strokes by the "Atari". We show that the user can create inbetween frames with simple inputs using our system.
    Download PDF (681K)
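    Once two strokes have been put into correspondence, an inbetween stroke can be produced by interpolation. The sketch below shows the simplest linear case, assuming both strokes are resampled to the same number of points; the paper's "Atari"-based parameterization is not reproduced here.

      def interpolate_stroke(stroke_a, stroke_b, t):
          """Linearly interpolate two corresponding strokes (equal-length lists of
          (x, y) points) to produce an inbetween stroke at parameter t in [0, 1]."""
          return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
                  for (xa, ya), (xb, yb) in zip(stroke_a, stroke_b)]

      # Example: the halfway inbetween of two 3-point strokes.
      mid = interpolate_stroke([(0, 0), (1, 0), (2, 0)], [(0, 2), (1, 2), (2, 2)], 0.5)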
  • Takaaki HARA, Tatsuo UNEMI
    Article type: Article
    Session ID: AIT2012-53
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes efficient methods for drawing flocking behavior as a visual effect in video editing. The computational cost of simulating flocking behavior is usually too high for the real-time rendering required in practical video editing. The proposed methods include space division, group organization, and interpolation between key frames (a sketch of grid-based space division follows this entry). Experimental trials across a variety of settings showed that the methods effectively reduce both cache size and computation time.
    Download PDF (813K)
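    Space division for flocking usually means bucketing boids into a uniform grid so that neighbor queries touch only nearby cells. The following sketch shows this generic idea (cell_size is assumed to be at least the query radius); it is not the authors' specific method and all names are hypothetical.

      from collections import defaultdict

      def build_grid(positions, cell_size):
          """Hash each boid index into a uniform grid cell keyed by integer coordinates."""
          grid = defaultdict(list)
          for i, (x, y, z) in enumerate(positions):
              key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
              grid[key].append(i)
          return grid

      def neighbors(i, positions, grid, cell_size, radius):
          """Return indices of boids within `radius` of boid i, checking only the
          27 grid cells around it instead of every boid."""
          x, y, z = positions[i]
          cx, cy, cz = int(x // cell_size), int(y // cell_size), int(z // cell_size)
          found = []
          for dx in (-1, 0, 1):
              for dy in (-1, 0, 1):
                  for dz in (-1, 0, 1):
                      for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                          if j == i:
                              continue
                          px, py, pz = positions[j]
                          if (px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 <= radius ** 2:
                              found.append(j)
          return found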
  • Yusuke HOUKI, Tomokazu ISHIKAWA, Yonghao YUE, Tomoyuki NISHITA
    Article type: Article
    Session ID: AIT2012-54
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a method for smoke animation based on an input image, such as a photograph. From the input image, we calculate the values required to characterize its smoke, such as position, shape, and density. Because the parameters required to create a gaseous fluid animation resembling the input image cannot be obtained directly from the input, we calculate and automatically adjust them. With our method, users can obtain an animation closer to the intended result, and more intuitively, than by manually tuning all of the simulation parameters.
    Download PDF (790K)
  • Pornprom MANEELOK, Akinori ITO, Koji MIKAMI, Kunio KONDO
    Article type: Article
    Session ID: AIT2012-55
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Brand extension strategy is not only changing consumer product marketing but also influencing the movie industry. The number of movie franchises at the box office is proof of how popular it is. Moreover, from a marketing point of view, making sequels to a successful movie is less risky than starting from scratch. However, keeping new entries at least as good as the previous product is a tough problem. The purpose of this study is to find a proper screenplay-writing technique for movie franchises. We focus on finding similarities in the duration of each scene between the first movie and its sequels in order to identify rules in each franchise. We go through movie franchises and their screenplays and extract essential components, for instance scene properties and characters' appearances. The representative samples were selected from IMDB's top 30 worldwide box-office titles. We then compared characters' dialogue and scene length in each selected franchise, and found that scenes with two or more characters show a reasonable relation between dialogue and scene length, with correlation coefficients between 0.6 and 0.8 (a sketch of such a correlation computation follows this entry). This result can help predict how long each scene should be, calculated from the characters' dialogue.
    Download PDF (675K)
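    The reported values of 0.6-0.8 are presumably standard correlation coefficients between per-scene dialogue amount and scene length. The sketch below computes the Pearson coefficient on made-up toy data purely as a worked illustration; the variable names and numbers are not from the study.

      import math

      def pearson_r(xs, ys):
          """Pearson correlation coefficient between two equal-length lists."""
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
          sy = math.sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy)

      # Hypothetical example: words of dialogue vs. scene length in seconds.
      dialogue_words = [120, 45, 300, 80, 200]
      scene_seconds = [95, 40, 210, 70, 160]
      print(pearson_r(dialogue_words, scene_seconds))   # correlation of this toy data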
  • Yuki OKURA
    Article type: Article
    Session ID: AIT2012-56
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A 2D animation originally composed of cel-based 2DCG was reproduced in 3DCG. Based on the footage, including the cels, the work was readjusted for stereoscopic 3D (S3D) without creating new objects or animation, and almost the entire video was converted into S3D animation. We describe the adjustments made to the material processing and the image-conversion method for S3D, point out the problems that prevent most of the material from being rebuilt in S3D, and consider which kinds of direction are unsuited to S3D animation.
    Download PDF (812K)
  • Koichi TAKEUCHI, Takaaki YOSHIOKA
    Article type: Article
    Session ID: AIT2012-57
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Generally, 3D stereoscopic pictures are made with two 3D cameras. But if we look at a 2D photograph with a single eye, we can perceive a naturally realistic 2.5D image; this is a remarkable effect of the human brain. Likewise, if we look at a famous oil painting with only one eye, we can perceive its beauty with natural realism. We have made special low-cost 2.5D natural-reality viewing glasses for viewing with both eyes.
    Download PDF (296K)
  • Koki Sato
    Article type: Article
    Session ID: AIT2012-58
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This is an image work composed of a 3D image and video footage. Realistic old memories of a stone are generated from the reconstructed image of the hologram "a stone" and the video footage.
    Download PDF (479K)
  • Naoto YOSHIDA, Yusuke HOSOI, Tokiichiro TAKAHASHI
    Article type: Article
    Session ID: AIT2012-59
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In order to reconstruct urban landscapes, we propose an efficient method for matching feature points for three-dimensional reconstruction from multi-view images. Three-dimensional reconstruction from images is one of the central topics of computer vision, and many studies have reported methods for 3D reconstruction from single perspective images, stereo and multi-view images, video clips, and so on. In this paper, we propose a fast method to improve feature point matching performance for three-dimensional reconstruction from multi-view images. Our method increases the number of matched feature points across multi-view images, even when only a couple of images are available and their viewpoints are widely separated (a generic matching sketch follows this entry).
    Download PDF (1225K)
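    The abstract does not describe the specific matching improvement, so as background the sketch below shows a common baseline for wide-baseline feature matching: ORB keypoints, brute-force Hamming matching, and Lowe's ratio test (OpenCV 4 API assumed; this is not the authors' method).

      import cv2

      def match_features(img1, img2, ratio=0.75):
          """Baseline feature matching between two grayscale images."""
          orb = cv2.ORB_create(nfeatures=2000)
          kp1, des1 = orb.detectAndCompute(img1, None)
          kp2, des2 = orb.detectAndCompute(img2, None)
          if des1 is None or des2 is None:
              return []
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
          good = []
          for pair in matcher.knnMatch(des1, des2, k=2):
              if len(pair) < 2:
                  continue
              m, n = pair
              if m.distance < ratio * n.distance:   # keep only distinctive matches
                  good.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
          return good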
  • Jing Geng, Hirokatsu Shimizu, Naokazu Aoki, Hiroyuki Kobayashi
    Article type: Article
    Session ID: AIT2012-60
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Tone reproduction, the input/output relation of light-dark information, is one of the most important image qualities of an imaging system, and optimum tone reproduction curves have been proposed for such systems. For paintings, however, tone reproduction itself has not been discussed because it is difficult to quantify. In the present paper, we attempted to obtain optimum tone reproduction curves for oil paintings by producing oil-painting-like images with non-photorealistic rendering, which are widely acceptable as oil paintings, and by asking viewers to evaluate their tone reproduction subjectively (a sketch of a parametric tone curve follows this entry).
    Download PDF (477K)
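    A tone reproduction curve maps input lightness to output lightness. The sketch below applies a simple two-parameter curve (gamma plus contrast) to a normalized image, purely to illustrate the kind of curve being optimized; it is not one of the curves derived in the paper, and the parameter values are arbitrary.

      import numpy as np

      def apply_tone_curve(image, gamma=0.8, contrast=1.2):
          """Apply a simple parametric tone-reproduction curve to a float image
          in [0, 1]: a gamma lift followed by an S-like contrast adjustment."""
          x = np.clip(image, 0.0, 1.0) ** gamma                   # lighten or darken midtones
          return np.clip(0.5 + contrast * (x - 0.5), 0.0, 1.0)    # expand/compress around mid-gray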
  • Atsushi MIYAZAWA
    Article type: Article
    Session ID: AIT2012-61
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (104K)
  • Masayuki Nakajima
    Article type: Article
    Session ID: AIT2012-62
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this invited talk, I introduce my 40-year history of image processing research and my future plans. I started image processing research in 1971, just after joining the Agui Laboratory of the Imaging Science and Engineering Laboratory as a doctoral student. However, I must end my research work at the Tokyo Institute of Technology by the end of March 2012 because of my retirement, which makes it a 40-year history. I introduce my image processing research in computer image processing, computer animation, computer graphics, image coding, human interfaces, multimedia, virtual reality technology, the intelligent agents project, and so on. Finally, I introduce my future plans for the new Nakajima laboratory.
    Download PDF (2726K)
  • Shigeyuki Wachi, Takeshi Yamaguchi, Hiroshi Yoshikawa
    Article type: Article
    Session ID: AIT2012-63
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A hologram records three-dimensional information as interference fringes and can reproduce an optical image completely. Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. As a solution, the multiplex hologram (MH), or cylindrical holographic stereogram, has been reported. An MH is made from parallax images captured by a camera and can record objects that are difficult to record as a general hologram. However, its production flexibility is low, because the size of an MH depends on the optics. In this paper, we produce a computer-generated multiplex hologram. As a result, we have improved the production flexibility and investigated the optimization of the reconstructed images.
    Download PDF (553K)
  • Hijiri Sueishi, Takeshi Yamaguchi, Hiroshi Yoshikawa
    Article type: Article
    Session ID: AIT2012-64
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A computer-generated hologram (CGH) of a real object can be fabricated if 3D object data can be obtained from the real object. A published paper reports that a portrait CGH can be produced from object data acquired with a 3D scanner; however, making object data with a laser 3D scanner requires much time because of the need for manual correction. The Kinect, an economical 3D measuring device, has also been employed for real-time CGH, but such research has only displayed small reconstructed images. This paper proposes fabricating a hologram of a real object that reconstructs a large image with a wide viewing angle, using a computer-generated image hologram. The object data converted from the Kinect are used for the CGH calculation.
    Download PDF (639K)
  • Hiroyuki OZAWA, Takeshi YAMAGUCHI, Hiroshi YOSHIKAWA
    Article type: Article
    Session ID: AIT2012-65
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We have been investigating real image reconstruction from computer-generated holograms (CGH). A real image gives the viewer a strong impression of depth. We previously produced a half-cylindrical CGH that offers real images with a horizontal viewing angle of nearly 180 degrees, but its vertical viewing angle is not sufficient for a practical display. In this report, we improve the pixel pitch of the fringe printer from 0.44 μm to 0.35 μm, which increases the vertical viewing angle from 9.2 degrees to 28.8 degrees and also makes larger holograms possible. Furthermore, we improve the reconstructed image by changing the mirror material from stainless steel (SUS) to acrylic.
    Download PDF (550K)
  • Tatsuto KIMURA, Shigeo HIYAMA
    Article type: Article
    Session ID: AIT2012-66
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This is a stereogram produced from 3DCG, which I developed at the Lithuanian company Geola.
    Download PDF (502K)
  • Mika HIRANO, Ryota TAKEUCHI, Taichi WATANABE
    Article type: Article
    Session ID: AIT2012-67
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    "AVATAR" and more S3D contents, S3DThechnology somebody's attention, but there have many problems. The main system is use to bionics parallax. A little player can't watch S3D although their visual function are clean. There don't try to find the cause of by S3D contents or personal ability. I watch to S3D ability by personal and difference factor by players. I Think assume need to absorb personal equation and stereoscopic effect emphasizing and turning down. Now investigate into relation vision in both eyes to add focus effect on S3D contents. Then propose needing revision and there guide to S3D contents.
    Download PDF (540K)
  • Yutaka NAGAMI, Kouki ITOH, Hirosato SEMBO
    Article type: Article
    Session ID: AIT2012-68
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Traffic accidents caused by drowsiness often result from a deterioration of drivers' concentration; drowsiness in turn results from a monotonous driving environment, and long tunnels are problematic in that they feature long stretches of monotony. We generally feel rhythm when a certain element is repeated under the same conditions, and monotony when there are few changes. We therefore clarified the relation between patterns and rhythm as seen from a moving viewpoint: the interval at which a figure is repeated, the size of the figure, and changes in its vertical position all influence the feeling of rhythm. Based on these results, we designed a pattern for the tunnel wall surface that produces a sense of rhythm.
    Download PDF (865K)
  • Atsushi KOKUBO, Hitoshi TSUNODA, Shigeru Wajima
    Article type: Article
    Session ID: AIT2012-69
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Modern museums (art and science) are designed to be accessible to both able-bodied and handicapped people. This "universal design", however, does not address the problem blind people face when appreciating images and pictures. In this paper, we present our methods for creating "tactile pictures": semi-three-dimensional representations of two-dimensional images using image processing, CAD technology, 3D printing, and 3D cutting machinery, developed in collaboration with the Aomori Museum of Art. The methods presented are: 1) translating the brightness of paintings into heights for engraving (a minimal brightness-to-height sketch follows this entry), 2) creating carved blocks from woodcut prints, and 3) relief representations of theatrical stage backgrounds. Finally, we obtained feedback from blind volunteers to assess and improve the quality of our methods. The resulting pieces were displayed in the "Touch the Art!" exhibition at the Aomori Museum of Art.
    Download PDF (588K)
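    Method 1) above amounts to a brightness-to-height mapping. The sketch below shows a minimal version in which darker (or brighter) pixels are raised by up to a few millimetres; the function name, the maximum height, and the inversion choice are all hypothetical, not values from the paper.

      import numpy as np

      def brightness_to_height(gray, max_height_mm=3.0, invert=True):
          """Map image brightness to engraving height for a tactile relief.

          gray          : 2D float array in [0, 1] (0 = black, 1 = white)
          max_height_mm : relief depth assigned to the darkest (or brightest) areas
          invert        : if True, darker pixels are raised higher
          """
          g = np.clip(gray, 0.0, 1.0)
          if invert:
              g = 1.0 - g
          return g * max_height_mm   # per-pixel height map for CAD / 3D cutting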
  • Touhei SHINOMURA, Genki UMIBAYASHI, Takashi NISHIDE, Tomoyuki TAKAMI
    Article type: Article
    Session ID: AIT2012-70
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We produced a wall amusement system with a projector and SOKUIKI sensors. The projector creates a large screen image and the SOKUIKI sensors add a touch function to it. Two mirrors are set up on both sides of the display screen so that the radar beams of a SOKUIKI sensor are reflected inward, which enhances the detection accuracy of touch positions on the wall. We used a ball in the present amusement.
    Download PDF (371K)
  • Noriyuki ABE, Masahiro TOYOURA, Xiaoyang MAO
    Article type: Article
    Session ID: AIT2012-71
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We present a new technique for artistic style transfer that considers perceptual structure matching between the source image and target images. Given a photograph, our technique first searches for an existing artwork depicting a similar scene, using a computational model of GIST perception, and then transfers both the color and the brush texture from it, taking the structural matching between the two images into account. The proposed method is fully automatic and can be used for transferring arbitrary styles.
    Download PDF (1165K)
  • Shota Kamimura, Hiroki Takahashi
    Article type: Article
    Session ID: AIT2012-72
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a relief observation method as a digital archive activity. Relief images are photographs taken from several angles corresponding to viewpoints. The Face API and a single web camera are used to obtain a two-dimensional head position, and the relief image corresponding to that position is shown (a minimal view-selection sketch follows this entry). A fixed relief can thus be observed through a monitor by head motions, as an interactive human interface.
    Download PDF (359K)
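    Showing "a relief image depending on the position" can be as simple as mapping the tracked horizontal head coordinate to one of the pre-captured viewpoints. The sketch below illustrates only that mapping; the actual Face API calls are omitted and all names are hypothetical.

      def select_relief_image(head_x, frame_width, n_views):
          """Pick which pre-captured relief photograph to display.

          head_x      : horizontal head position in camera pixels (from a face tracker)
          frame_width : width of the webcam frame in pixels
          n_views     : number of photographs taken around the relief
          """
          t = min(max(head_x / frame_width, 0.0), 1.0)   # normalize to [0, 1]
          return min(int(t * n_views), n_views - 1)      # index of the view to show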
  • Tomoya SAIDA, Keita USHIDA
    Article type: Article
    Session ID: AIT2012-73
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper the authors report on a shader which generates images with the characteristics of silhouette cutting (kamikiri) from 3D models. Since a silhouette cutting consists of a contour and slits, it is important to find appropriate slits to reproduce these characteristics. To do this, the shader employs the depth map and the normal map of the scene (a sketch of edge detection from these maps follows this entry). Real silhouette cuttings are also produced from the generated images using cutting plotters.
    Download PDF (616K)
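    One simple way to derive contour and crease candidates from a depth map and a normal map is a thresholded gradient magnitude, as sketched below. This is a generic image-space edge test, not necessarily the slit-finding rule used by the authors' shader; the thresholds are arbitrary.

      import numpy as np

      def silhouette_mask(depth, normals, depth_thresh=0.05, normal_thresh=0.3):
          """Binary mask of contour-like pixels from a depth map and a normal map.

          depth   : HxW depth buffer
          normals : HxWx3 unit normals
          A pixel is marked where depth jumps (object contour) or the normal
          changes sharply (crease), approximating the cuts of a paper silhouette.
          """
          dzdy, dzdx = np.gradient(depth)
          depth_edge = np.hypot(dzdx, dzdy) > depth_thresh
          # Normal discontinuity: gradient magnitude summed over the 3 components.
          ngrad = sum(np.hypot(*np.gradient(normals[..., c])) for c in range(3))
          normal_edge = ngrad > normal_thresh
          return depth_edge | normal_edge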
  • Taiki KITADA, Akihiko SHIRAI, Takayuki SUZUKI
    Article type: Article
    Session ID: AIT2012-74
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This article reports on a new system for visualizing radiation in a cloud chamber using video processing. Radioactive pollution from the Fukushima plant is an issue that will have to be shared socially and globally for a long time. A visualization solution has value in conveying the risks and mechanism of radiation, and if it can be built from everyday materials, it can help a wider range of people avoid radiation exposure, much like a science workshop tool.
    Download PDF (639K)
  • Ryuichiro Takashima, Hiroaki Kozuka, Nagatoshi Hayabusa, Hiroki Takaha ...
    Article type: Article
    Session ID: AIT2012-75
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper aims to prevent the oversight of incorrectly oriented electrolytic capacitors in order to assist visual inspections of electronic boards. Contours are extracted from a captured image. Because electrolytic capacitors appear as circles, they are detected by calculating a degree of circularity for the extracted contours (a sketch of this circularity test follows this entry). The upper parts of the detected capacitors are then colored as a visual stimulus, which helps prevent overlooking capacitors mounted in the wrong direction.
    Download PDF (579K)
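    A degree of circularity is commonly computed as 4*pi*area / perimeter^2 on extracted contours, which equals 1 for a perfect circle. The sketch below shows this generic test with OpenCV (OpenCV 4 return signature assumed; the thresholds are hypothetical, not the paper's values).

      import cv2
      import numpy as np

      def find_circular_parts(binary_image, min_circularity=0.8, min_area=100.0):
          """Detect roughly circular contours (e.g., capacitor tops) in a binary image."""
          contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          hits = []
          for c in contours:
              area = cv2.contourArea(c)
              perimeter = cv2.arcLength(c, True)
              if area < min_area or perimeter == 0:
                  continue
              circularity = 4.0 * np.pi * area / (perimeter ** 2)
              if circularity >= min_circularity:
                  hits.append(c)
          return hits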
  • Genki Momose, Hiroki Takahashi
    Article type: Article
    Session ID: AIT2012-76
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper analyzes gait to investigate feature values related to gait transformation, using three-dimensional human body information, in order to conceal personal identity. Gait is a kind of biometric information and is an effective method of individual identification because it can be recognized from far away. The Kinect® is used to obtain gait information. Individual motion sequences are transformed into the frequency domain by the Fourier transform; motion periods are calculated from the top four power spectrum components, and the obtained periods are taken as walking cycles (a sketch of this period estimation follows this entry). The thirty-one sets of gait information obtained are analyzed by principal component analysis. As a result, the maximum right elbow angle is shown to be the most effective factor in the human gait feature.
    Download PDF (398K)
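    Estimating a walking cycle from a joint-angle time series via the Fourier transform can be sketched as below: take the power spectrum and read the period off the strongest non-DC component. This is a single-peak simplification of the top-four-components rule described above; the frame rate and names are assumptions.

      import numpy as np

      def walking_cycle_seconds(angle_series, fps=30.0):
          """Estimate the gait period from a joint-angle time series.

          angle_series : 1D array of an angle (e.g., right-elbow angle) per frame
          fps          : capture rate of the sensor
          Returns the period (seconds) of the strongest non-DC frequency component.
          """
          x = np.asarray(angle_series, dtype=float)
          x = x - x.mean()                        # remove the DC component
          spectrum = np.abs(np.fft.rfft(x)) ** 2  # power spectrum
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          k = 1 + np.argmax(spectrum[1:])         # skip the zero-frequency bin
          return 1.0 / freqs[k]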
  • Masateru OIKAWA, Yoshihiro KANAMORI, Yukio FUKUI, Jun MITANI
    Article type: Article
    Session ID: AIT2012-77
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    When creating 2D illustrations, painting is quite important because different paintings of the same line drawing can give completely different impressions. To assist in painting line drawings, we propose two methods that transfer the painting information of a reference illustration to an unpainted line drawing. Both input images are first segmented into closed regions, and the rough shape of each region is estimated as a normal map. The user can select corresponding regions between the two input images and transfer painting information based on the normal maps, just like using a color palette.
    Download PDF (948K)
  • Tomoya SAWADA, Masahiro TOYOURA, Xiaoyang MAO
    Article type: Article
    Session ID: AIT2012-78
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A film comic is created from an animated movie by selecting important frames, trimming the frame images to fit the panels of the comic, and placing speech balloons at appropriate positions on the panels. Conventional technologies that aim to complete these tasks automatically based only on low-level image features fail to produce good results, because detecting important frames and the important areas within each frame also requires understanding the movie and image content. We propose detecting important frames and their important areas from viewers' eye-tracking data for the automatic creation of film comics. The results of subject studies show that the proposed method selects important frames that convey the story well, preserves the structure of the original frame images, and arranges balloons without occluding important areas.
    Download PDF (630K)
  • Shoutaro SAKAMOTO, Yuhi NAKAZAWA, Makoto J. HIRAYAMA
    Article type: Article
    Session ID: AIT2012-79
    Published: March 09, 2012
    Released on J-STAGE: September 21, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We attempted to create realistic stereoscopic 3D images by combining 3D computer graphics with edited photographic images.
    Download PDF (1520K)