-
Yamato MIYASHITA, Yasuhito SAWAHATA, Toshiya MORITA, Kazuteru KOMINE
Pages
12A-1-
Published: 2018
Released on J-STAGE: November 04, 2023
CONFERENCE PROCEEDINGS
FREE ACCESS
A depth compression method is a promising way to achieve all-in-focus presentation of deep 3D scenes on an integral photography display. In this study, we propose an omni-directional depth compression method that presents 3D scenes within a smaller depth range while maintaining subjective quality comparable to a conventional uni-directional method.
-
Miwa KATAYAMA, Yasuhito SAWAHATA, Kazuteru KOMINE, Tomoyuki MISHINA
Pages
12A-2-
We are investigating integral imaging systems with high resolution. Using a simulator of integral photography with stereoscopic images, subjective evaluation experiments showed the relationship between the pixel pitch and the blur of integral photography.
-
Kensuke IKEYA, Hayato WATANABE, Naoto OKAICHI, Hisayuki SASAKI, Kensuk ...
Pages
12A-3-
We propose a 3-D modeling method adapted to non-linear depth compression for integral 3-D imaging. The method generates 3-D models by efficient spatial sampling based on the inverse of the non-linear depth-compression function in the depth estimation process. In an experiment, we reconstructed a 3-D model and integral 3-D images and confirmed that the depth of the 3-D images was effectively compressed with less distortion by using the method.
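The inverse-function sampling idea above can be illustrated with a toy non-linear compression function (the power-law form and all parameter names below are assumptions for illustration, not the paper's actual function):

```python
import numpy as np

def compress_depth(z, z_max, gamma=0.5):
    """Toy non-linear depth-compression function (power law): compresses
    far depths more strongly than near ones."""
    return z_max * (z / z_max) ** gamma

def sampling_depths(n, z_max, gamma=0.5):
    """Choose scene-space sampling depths via the inverse compression
    function, so the samples become uniform AFTER compression."""
    u = np.linspace(0.0, z_max, n)               # uniform grid in compressed space
    return z_max * (u / z_max) ** (1.0 / gamma)  # inverse of compress_depth
```

Sampling this way concentrates depth samples where the compressed display space actually needs them.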
-
Seiya SUDA, Kazuya KODAMA, Takayuki HAMAMOTO
Pages
12A-4-
In this paper, we first show ideal appearances of our proposed dense multi-view systems transmitting light fields beyond obstructions, where some scenes include various depths. Then, the reconstruction quality of our practical GPGPU implementation for real-time interpolation of light fields from inexpensive sparse camera arrays is evaluated in detail.
-
Motohiro MAKIGUCHI, Hideaki TAKADA, Toru KAWAKAMI, Mutsumi SASAI
Pages
12A-5-
We propose a tabletop display system that projects 3D content without special 3D glasses. In the proposed system, we reduce the number of projectors compared with conventional systems by using linear blending. We implemented a prototype that enables observers to view 3D content from 360 degrees using 60 projectors.
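The linear-blending idea — synthesizing views between two adjacent projector images so fewer physical projectors are needed — can be sketched as follows (a minimal illustration; the blending weights and image shapes are assumptions):

```python
import numpy as np

def linear_blend(img_a, img_b, t):
    """Linearly blend two adjacent projector images; t in [0, 1] is the
    normalized angular position between the two projectors."""
    return (1.0 - t) * img_a + t * img_b

def intermediate_views(img_a, img_b, n):
    """Synthesize n evenly spaced views between two projector images."""
    return [linear_blend(img_a, img_b, t) for t in np.linspace(0.0, 1.0, n)]
```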
-
Atsushi MATSUI, Naotsuna FUJIMORI, Takahiro MOCHIZUKI
Pages
12B-1-
A combined geo-search method to estimate characteristic locations appearing in given video content is proposed. The combined approach is composed of two stages: a geo-coding stage based on named entity extraction of each place appearing in the closed captions, and a scenery image matching stage using Google Street View panorama images of those places. In the second stage, the cosine similarity of GIST descriptors contributes to reducing estimation errors compared to the reciprocal mean distance of matched pairs of AKAZE descriptors and color histogram correlation.
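The second-stage matching can be sketched as a cosine-similarity ranking over GIST descriptors (a minimal illustration; the descriptor extraction pipeline and place names are assumptions, not taken from the paper):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(query_gist, candidate_gists):
    """Rank candidate places by descriptor similarity to the query frame.

    query_gist: 1-D descriptor of the video frame.
    candidate_gists: dict mapping place name -> 1-D descriptor of its
    Street View panorama (names here are illustrative).
    """
    scores = {name: cosine_similarity(query_gist, g)
              for name, g in candidate_gists.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```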
-
Hideaki Yanagisawa, Takuro YAMASHITA, Hiroshi Watanabe
Pages
12B-2-
Studies on recognizing the contents of comic books have been conducted to generate comic metadata. In particular, characters are one of the important factors for understanding. In this research, to improve the clustering accuracy of character face images, we examine the effect of background area removal.
-
Saya NISHIMINE, Hiroshi IKEOKA, Takayuki HAMAMOTO
Pages
12B-3-
We propose a depth estimation method based on the blur ratio of monocular images. It can realize wide-range depth estimation using tilted lens optics. Moreover, the estimation accuracy is improved by our proposal, which utilizes multiple blurred images from each color channel with a chromatic lens.
-
Tatsuya NAKAMURA, Katsuto NAKAJIMA
Pages
12B-4-
In this paper, we propose a method combining a particle filter and YOLOv3 for player tracking to cope with illuminance fluctuation. The colors of a specific player to be tracked in a soccer broadcast are dynamically determined in calculating the likelihood of the particles according to the variation of brightness around or at the player's location detected by YOLOv3.
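The dynamic color model above can be sketched as re-estimating a reference histogram from the latest detection and scoring particles against it (a simplified grayscale sketch; the function names, histogram bins, and Bhattacharyya similarity are assumptions, not the paper's exact formulation):

```python
import numpy as np

def _hist(patch, bins=8):
    """Normalized intensity histogram of an image patch."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def update_reference(detection_patch, bins=8):
    """Re-estimate the reference color model from the latest detector
    output, so the model follows brightness changes at the player."""
    return _hist(detection_patch, bins)

def particle_likelihood(particle_patch, ref_hist, bins=8):
    """Score a particle by the Bhattacharyya coefficient between its
    histogram and the reference model (1.0 = identical distribution)."""
    return float(np.sum(np.sqrt(_hist(particle_patch, bins) * ref_hist)))
```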
-
Hiroyuki YAMAGUCHI, Toshinori OTAKA, Takayuki HAMAMOTO
Pages
12C-1-
In recent years, global shutter CMOS image sensors have been used in various applications. For these sensors, various noise reduction methods have been proposed, but most of them target fixed pattern noise. Therefore, we propose a method to reduce random noise by using multiple sampling.
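As a rough illustration of why multiple sampling suppresses random (temporal) noise: averaging N uncorrelated readouts reduces the noise standard deviation by a factor of sqrt(N). This is a generic model, not the sensor's actual circuit-level method:

```python
import numpy as np

def multiple_sampling_average(samples):
    """Average N readout samples of the same pixel values along axis 0.
    For uncorrelated random noise of std sigma, the averaged noise has
    std of roughly sigma / sqrt(N)."""
    return np.asarray(samples).mean(axis=0)
```

For example, averaging 16 noisy readouts should shrink a unit-variance noise floor to a standard deviation of about 0.25.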
-
Takuya NABESHIMA, Keita Yasutomi, Keiichiro KAGAWA, Taishi TAKASAWA, S ...
Pages
12C-2-
In this research, low-noise nondestructive readout pixels are developed. In the conventional pixel structure, since reading is performed via the floating diffusion, the charge is lost once it is read. In the proposed pixel structure, a signal is read nondestructively using a floating gate. Measured characteristics and pixel simulation are described.
-
Syunta NAKANISHI, Keita YASUTOMI, Nobukazu TERANISHI, Shoji KAWAHITO
Pages
12C-3-
A fully depleted type SOI pixel image sensor having a thick depletion layer (≧ 150μm) is suitable for x-ray and near infrared sensing. In this research, to reduce reset noise, we studied the pixel structure of a charge transfer type SOI pixel which can store and transfer charges inside a pixel.
-
Shigeyuki IMURA, Keitada MINEO, Yuki HONDA, Toshiki ARAI, Toshihisa WA ...
Pages
12C-4-
We fabricated an 8K image sensor overlaid with a crystalline selenium (c-Se)-based photoconversion layer. Two-step annealing was adopted to reduce the stress generated when the selenium film is crystallized, leading to a reduction of pixel defects in the captured image.
-
Toshiki Arai, Hiroshi Ohtake
Pages
12C-5-
The dark current of a floating diffusion in three-transistor pixel circuits was evaluated for 8K image sensors with a photoconductive layer. A dark current of 510 electrons/s/pixel was obtained after measuring the test element group using cell p-well and FLD layers.
-
Masaki TAKAHASHI, Shinsuke YOKOZAWA, Hideki MITSUMINE, Tetsuya ITSUKI, ...
Pages
12D-1-
We developed a system that visualizes sword-tip trajectories in fencing matches. The system tracks the tips of the swords in an infrared image and composites their trajectory CG on the broadcast image in real-time. The system was operated and broadcast at the All Japan Fencing Championships 2017 for the first time.
-
Yasuaki KAMEDA, Yasuaki TANAKA, Megumi ISOGAI, Daisuke OCHI, Hideaki K ...
Pages
12D-2-
We are developing a technique that hides everything other than the players or athletes viewers are most interested in, to enhance their excitement as they view the players. Based on this technique, we created an experimental field reconstruction.
-
Yasuaki Tanaka, Megumi Isogai, Akio Kameda, Hideaki Kimata
Pages
12D-3-
As a fundamental study of cable-less synchronous multi-camera shooting, we made a prototype wireless multi-camera system in which GPS signals are used for synchronization. We evaluated how many cameras are supported in terms of video frame rate and encoding bit rate.
-
Kohei TAHATA
Pages
12E-1-
-
Tsubasa UCHIDA, Hideki SUMIYOSHI, Taro MIYAZAKI, Makiko AZUMA, Shuichi ...
Pages
12E-2-
We developed a prototype system that automatically generates Japanese sign language (JSL) computer graphics (CG) animation based on metadata distributed during sports games. Through demonstration experiments, it was confirmed that, except for the delay, CG animation can be automatically generated and displayed in real time as the game progresses.
-
Sayaka TERANAKA, Masahiro WATANABE, Yuji MORINISHI, Kiyoshi NAKAHAMA, ...
Pages
12E-3-
We are developing the WEB Design Converter system, which converts Web content for PC into content for digital signage. It can lay out content automatically according to the amount of information and the display size. With this system, easy-to-read content can be easily created and displayed.
-
Masahiro WATANABE, Junichi SAWASE, Sayaka TERANAKA, Yuji MORINISHI, Ki ...
Pages
12E-4-
We are developing the WEB Design Converter system, which automatically adjusts the layout of Web content for digital signage. With the system, we found that content can be created more efficiently than with the conventional method. Moreover, many viewers responded that the content was easy to read, so we were able to confirm the effectiveness of the system.
-
Yuka SHIMMURA, Katsuto NAKAJIMA
Pages
13B-1-
We recognize the string fingerings of an expert string player with an RGBD camera and attach them to each musical note. The data are used as training data for a system that can automatically determine proper string fingerings for a musical score.
-
Yukito WATANABE, Jun SHIMAMURA, Atsushi SAGATA
Pages
13B-2-
Image descriptors based on CNN activations have become dominant in object retrieval. However, the problem becomes significantly more challenging if details are missing due to low resolution. In this paper, we propose an end-to-end network that combines convolutional image super-resolution and image representation.
-
Atsushi NAKADAIRA, Shigeru FUJIMURA, Shigenori OHASHI, Hiroki WATANABE ...
Pages
13B-3-
We have proposed a digital media content management system based on blockchain technology. In this system, metadata such as copyright information or the identification of a media content, not the content itself, is registered. It is also important to manage the content entity. Adding the rights information registered in the blockchain to the content itself is useful for confirming legitimate use of the content. We propose a method of managing content entities by adding a video watermark to them.
-
Shinya TAKEUCHI, Shuhei TAGUCHI, Hiroki ENDO, Kazuya FUJISAWA, Kenjiro ...
Pages
13B-4-
We proposed a media-unifying platform capable of automatically selecting an appropriate medium to facilitate a consistent end-user experience, irrespective of the content distribution medium. In this paper, we report experimental results on users' behavioral patterns to validate that the proposed platform is efficient for viewers to find broadcast programs from shared information.
-
Taro MIYAZAKI, Kiminobu MAKINO, Yuka TAKEI, Hideki SUMIYOSHI, Jun GOTO
Pages
13B-5-
We developed a TV program retrieval system that takes Tweets as input queries. TV summaries, which are used as clues for TV program retrieval, and Tweets are both limited in length, so it is difficult to retrieve TV programs from Tweets. Therefore, we developed a novel retrieval system with query expansion using a semantic relations dictionary. As a result, our retrieval system can output a broad variety of TV programs.
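The query-expansion step can be sketched as looking up each query term in a semantic-relations dictionary and appending related words (the dictionary contents here are invented for illustration):

```python
def expand_query(terms, semantic_dict):
    """Expand short query terms with related words from a semantic-relations
    dictionary, bridging the gap between terse Tweets and terse TV summaries."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(semantic_dict.get(t, []))
    # de-duplicate while preserving order
    seen = set()
    return [w for w in expanded if not (w in seen or seen.add(w))]
```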
-
Yosuke HORI, Tomomi TAKAGI, Toshikatsu SAKAI, Mitsuru Nakata, Hiroto S ...
Pages
13C-1-
We are developing a vertically stacked organic image sensor that is overlaid with three different organic photoconductive films, each of which is sensitive to only one of the three primary colors (red, green, and blue). We use a transparent TFT (thin-film transistor) circuit for signal readout. In this study, we fabricated a 20-μm-pixel-pitch TFT array for the readout circuit. Consequently, sufficient transfer characteristics of the TFTs were obtained.
-
Aiki KAMEYAMA, W. S. Hee, Ayaka KIMURA, Makito HARUTA, Toshihiko NODA ...
Pages
13C-2-
We developed a lensless CMOS imaging device to observe two types of fluorescent proteins for FRET imaging. To achieve ultra-high excitation light rejection performance, we fabricated a hybrid filter composed of an interference filter and an absorption filter. The fluorescence colors are separated by a two-color pattern under the hybrid filter.
-
Jiro NAGAO, Mariko YAMAGUCHI, Hidenobu NAGATA, Kota HIDAKA
Pages
13C-3-
A real-time and robust object extraction method using an RGB-infrared (IR) camera is proposed. An IR camera and IR lights are used to help separate the foreground and the background even if they have similar colors. A real-time extraction system was developed, and the experiment proved the proposed method feasible.
-
Toshiki FUJINO, Takashi ITO, Koichi YAMASHITA, Daisuke SUZUKI
Pages
13C-4-
We developed a line flicker noise reduction method for two-dimensional infrared imaging. In our method, the correction value is calculated for each pixel of a line. The proposed method can suppress line fluctuation noise even if the image contains moving subjects.
-
Yoshitaka TOYODA, Daisuke SUZUKI
Pages
13C-5-
We propose a fast registration method for HDR image fusion when multi-exposure images are locally misaligned. The proposed method is based on pixel-wise block matching that abbreviates its search process according to a misalignment saliency map on the fused image. The proposed method can reproduce a comfortable HDR image while suppressing artifacts.
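The abbreviated search can be sketched as pixel-wise block matching that simply skips pixels whose misalignment saliency is low (a minimal sketch; the block size, search range, SAD cost, and saliency threshold are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def register_blocks(ref, mov, saliency, block=3, search=2, thresh=0.1):
    """Pixel-wise block matching that skips low-saliency pixels,
    abbreviating the search. Returns a per-pixel (dy, dx) field."""
    h, w = ref.shape
    r = block // 2
    disp = np.zeros((h, w, 2), dtype=int)
    for y in range(r + search, h - r - search):
        for x in range(r + search, w - r - search):
            if saliency[y, x] < thresh:
                continue  # assumed already aligned: skip the costly search
            patch = ref[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = mov[y + dy - r:y + dy + r + 1,
                               x + dx - r:x + dx + r + 1]
                    cost = np.abs(patch - cand).sum()  # SAD matching cost
                    if cost < best:
                        best, best_d = cost, (dy, dx)
            disp[y, x] = best_d
    return disp
```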
-
Kazuyuki ARAI, Akira KIKKAWA, Kenichiro MASAOKA, Masahiro OSAWA, Hirok ...
Pages
13D-1-
Image resolution is one of the most important characteristics of our 4K/8K satellite broadcasting channel. The real-time, edge-based MTF measurement system elucidates the spatial resolution characteristics of UHDTV cameras.
-
Tetsuya HAYASHIDA, Daiichi KOIDE, Jun YONAI, Atsushi ARAI, Shuichi AOK ...
Pages
13D-2-
We developed a compact dense wavelength division multiplexing transceiver that can send uncompressed (144Gbps) full-featured 8K video remotely (up to 70 km) using optical fiber. This can be easily installed in a vehicle and enables 8K signal transmission without image quality degradation and delay even when transmitting from a venue far from the broadcast station or to a public viewing venue.
-
Atsushi ARAI, Tetsuya HAYASHIDA, Daiichi KOIDE, Jun YONAI, Shuichi AOK ...
Pages
13D-3-
We achieved a full-featured 8K production system outside a broadcasting station by mounting a dense wavelength division multiplexing (DWDM) transceiver that could transmit an uncompressed 8K video signal over a long distance and other pieces of 8K/120Hz production equipment in an outside broadcasting van.
-
Kodai KIKUCHI, Takeshi KAJIYAMA, Eiichi MIYASHITA
Pages
13D-4-
We newly developed an 8K/120Hz compression recorder supporting a generic video codec (Apple ProRes) that is widely used in non-linear editing. Using the recorded footage, an 8K/120Hz post-production experiment with common editing equipment was performed. The experiment using the single codec throughout the workflow demonstrated practical and cost-efficient 8K/120Hz editing.
-
Nobuaki HIROSHIMA, Osamu MATSUDA, Mitsuru MOCHIZUKI, Hitoshi SESHIMO
Pages
13E-1-
We propose a method to identify a person's position and direction from POIs specified by expressions such as “shop A is in front of me” or “shop B is on my right” by restricting the search area using pedestrian network data and calculating compatibility scores for each pair of position and direction.
-
Takahiro MIYAMOTO, Kohei KAJIWARA, Nobuji TETSUTANI
Pages
13E-2-
This paper describes the Troxler effect as a function of gazing time for contrast images. The Troxler effect appears remarkably when the contrast is 10% or less and the gazing time is 10 seconds or less.
-
Seiya IWASAKI, Hikaru SHIBATA, Nobuji TETSUTANI
Pages
13E-3-
3D perspective images are evaluated on three evaluation scales (depth feeling, stereoscopic effect, and incongruity). Interesting results were obtained for the incongruity scale.
-
Masanao NAKANO, Tadashi NUNOBIKI
Pages
13E-4-
We propose a presence technology combining multi-layer aerial images and tactile vibration. With this technology, we aim to present the sensation that the aerial image of a living thing moves left and right and the feeling that it is localized at a specific place.
-
Shuuhei OKABE, Tetsuo YOSHIMOTO, Haruo NOMA
Pages
13E-5-
In the social implementation of science and technology innovation, the positioning of beneficiaries is important. This paper aims to show a direction by combining the issue of the social implementation of science and technology with the perspective of regional revitalization.
-
Tetsuo YOSHIMOTO
Pages
13E-6-
In this paper, considering the improvement of productivity in the Japanese manufacturing industry, I discuss the problem of using digital technology in the construction industry (e.g., i-Construction).
-
Yutaka KANEKO, Yuta HOSHI, Yasuhiro MURASAKI, Michihiro UEHARA
Pages
14B-1-
In this paper, we propose and evaluate two methods for estimating alternative words for unknown keywords in sentences. The proposed methods are based on CBOW (Continuous Bag-of-Words) and an RNN (Recurrent Neural Network).
-
Yuta HOSHI, Yutaka KANEKO, Yasuhiro MURASAKI, Michihiro UEHARA
Pages
14B-2-
In this paper, we describe a robot control system for a TV watching experiment. We also report the results of experiments with our proposed system for the basic survey.
-
Amato TSUJI, Keita USHIDA, Qiu CHEN
Pages
14B-3-
In this paper, the authors propose a platform for manipulating a robot with finger-play or hand-shadow motions. By associating the robot's joints with the bones of the user's fingers, the motion of the hand and fingers is reflected in the robot.
-
Ryohei SAIJO, Takahiro KUSABUKA, Yuichi MAKI, Takuya GODA, Shin-ichiro ...
Pages
14B-4-
We proposed a translation display system using a portable transparent display, which enables speakers to see each other's faces directly while receiving translated information. In this system, the greater part of the processing is executed in a cloud environment. We developed a prototype system and investigated the feasibility of the proposed system.
-
Ryuichiro Higashinaka, Ryo Ishii, Taichi Katayama, Ryo Masumura, Junji ...
Pages
14B-5-
This paper proposes to collect emotion-annotated utterances by using role play-based question-answering in which multiple users play the roles of certain characters and respond to questions by online users. We describe the data collection process by role play-based question-answering as well as the experiment we performed to automatically detect emotion tags from utterances.
-
Kazuhiro MORIMOTO, Goji Takeharu ETOH, Kazuhiro SHIMONOMURA
Pages
14C-1-
A couple of ultra-high-speed image sensors were developed and evaluated: (1) NanoSIS (Nano-second Image Sensor), operating at 100 Mfps (t = 10 ns) with about 600 kpixels, each with 5 in-pixel memory frames (or 10 frames for 300 kpixels), and (2) ISAS (Image Signal Accumulation Sensor), operating at 50 Mfps (t = 20 ns; the design target) with 32x32 pixels, each with 1,220 in-pixel memory frames. Design modification based on the evaluation is ongoing. Examples of the problems and countermeasures are reported: for the NanoSIS, the charge handling capacity Qmax was lower than the design target; for the ISAS, the frame rate did not reach the target value. The causes and countermeasures are explained.
-
Kohei YAMADA, Keita KONDO, Yushi OKURA, Keita YASUTOMI, Tomoya MICHIBA ...
Pages
14C-2-
-
Yushi OKURA, Keita YASUTOMI, Taishi TAKASAWA, Keiichiro KAGAWA, Shoji ...
Pages
14C-3-
In this research, we propose a reference plane sampling method for the TOF system, which captures a fixed plane using part of the pixel array as a reference pixel array. By using this method, correlated jitter is canceled, making it possible to obtain even higher range resolution.
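The reference-plane idea can be sketched as follows: since the reference pixels always observe a plane at a known distance, any deviation in their measurement is correlated jitter that can be subtracted from the signal pixels (the variable names and scalar-mean correction are illustrative assumptions, not the paper's circuit-level method):

```python
import numpy as np

def jitter_corrected_depth(signal_tof, reference_tof, reference_true):
    """Cancel correlated timing jitter with a reference pixel array that
    always observes a fixed plane at known distance `reference_true`.
    Jitter common to both arrays shows up in `reference_tof` and is
    subtracted out of the signal-array depths."""
    jitter = np.mean(reference_tof) - reference_true
    return signal_tof - jitter
```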
-
Hiroki OKAMOTO, Ryohei FUNATSU, Hiroshi SHIMAMOTO
Pages
14C-4-
We propose a distance estimation method based on the complex wavelet transform. Typically, estimating distance requires many calculations and much processing. In contrast, our method can estimate distance accurately and at low cost. Therefore, it can easily be applied to high-resolution or high-frame-rate video.