-
Article type: Cover
Pages
Cover1-
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Index
Pages
Toc1-
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Sho TAKAHASHI, Miki HASEYAMA
Article type: Article
Session ID: ME2014-104
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a link-analysis-based method for detecting important players and similar scenes in soccer videos. We define important players as follows: 1) the attacking player most relevant to a scoring play, 2) the defending player on the opposing team, and 3) the players who assist them. Since soccer tactical analysis focuses not only on individual skill but also on the relationships between players, this paper expresses those relationships as a network constructed from player positions in the soccer video. The proposed method analyzes the constructed network to detect important players and similar scenes.
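The network-based importance idea can be sketched with a simple centrality score. This is a minimal illustration with hypothetical data, not the authors' implementation: the player labels and co-occurrence weights below are invented, and weighted-degree centrality stands in for whatever link-analysis measure the paper actually uses.

```python
from collections import defaultdict

# Hypothetical co-occurrence weights between tracked players, derived
# in principle from their positions in the video (invented data).
player_links = {
    ("A1", "A2"): 5, ("A2", "A3"): 8, ("A3", "A4"): 2,
    ("A2", "A4"): 6, ("A1", "A4"): 1,
}

# Weighted-degree centrality as a simple importance score: a player
# strongly connected to many others scores high, matching the idea
# that the attacker and supporting players form a dense subnetwork.
score = defaultdict(float)
for (u, v), w in player_links.items():
    score[u] += w
    score[v] += w

important = max(score, key=score.get)  # highest-centrality player
```

A real system would replace the toy weights with pass or proximity statistics and could swap in PageRank or another link-analysis measure without changing the surrounding logic.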
-
Masaki TAKAHASHI, Toshiyuki NAKAMURA, Tomoyuki MISHINA
Article type: Article
Session ID: ME2014-105
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Tracking the ball in football video sequences is a difficult problem because of its unpredictable motion and the small area it occupies in image coordinates. We propose a novel ball tracking method for football videos that uses a machine learning algorithm. Tracking results from several viewpoints are collected, and the method accurately estimates the final ball position in real 3D coordinates. Experimental results showed that the method can robustly track the ball in real time.
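The multi-viewpoint fusion step can be sketched as a robust combination of per-camera estimates. This is a hedged simplification: the coordinates below are invented, and a coordinate-wise median stands in for the paper's actual (learned) fusion, illustrating only why collecting several viewpoints helps when one tracker fails.

```python
from statistics import median

# Hypothetical per-camera ball position estimates in world
# coordinates (metres); each tracker may fail or drift independently.
estimates = [
    (10.2, 5.1, 0.4),
    (10.3, 5.0, 0.5),
    (10.1, 5.2, 0.4),
    (42.0, -3.0, 9.9),  # one camera has lost the ball
]

# Coordinate-wise median as a robust fusion of the viewpoints: a
# single outlying tracker does not pull the final 3D estimate away.
ball_3d = tuple(median(axis) for axis in zip(*estimates))
```

With four estimates the median averages the two central values per axis, so the outlier camera is ignored entirely.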
-
Ayumi MATSUMOTO, Dan MIKAMI, Harumi KAWAMURA, Akira KOJIMA
Article type: Article
Session ID: ME2014-106
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a framework for classifying motion forms based on image features. Our aim is to support motor-skill learning for trainees who lack expert knowledge and for the instructors who guide them. In a typical video feedback system for motor learning, specialized knowledge is required to derive a concrete practice policy from the presented information. We therefore automatically classify motion forms (e.g., batting form, hurdling form) into several classes based on image features and propose a system that presents appropriate exercise and teaching methods for each class. As a first step toward realizing this system, we propose a framework for form classification based on image features, carry out a form classification experiment using four kinds of image features, and experimentally confirm which features are effective.
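Feature-based form classification of this kind can be sketched with a nearest-centroid classifier. Everything below is hypothetical: the two-dimensional feature vectors and class labels are invented stand-ins for the paper's image features, and nearest-centroid is just one simple classifier consistent with grouping forms into classes.

```python
import math

# Hypothetical feature vectors (e.g. statistics extracted from
# batting-form frames); the real system uses image features.
training = {
    "class_a": [[1.0, 0.2], [0.9, 0.3]],
    "class_b": [[0.1, 0.9], [0.2, 1.0]],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(x) / len(vectors) for x in zip(*vectors)]

centroids = {label: centroid(v) for label, v in training.items()}

def classify(features):
    """Assign a motion form to the class with the nearest centroid."""
    return min(
        centroids,
        key=lambda c: math.dist(features, centroids[c]),
    )
```

Once forms are grouped this way, each class can be mapped to the appropriate exercise or teaching advice, which is the system's stated goal.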
-
Mitsugu KAKUTA, Itaru KITAHARA, Tetsunari NISHIYAMA
Article type: Article
Session ID: ME2014-107
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Shinji OZAWA
Article type: Article
Session ID: ME2014-108
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In the field of sports, both form analysis and tactical analysis are required. Until recently these relied on visual observation and manual analysis of video, but image analysis techniques have now come into wide use. In video content production as well, results obtained by processing live-action footage are widely superimposed on broadcast images. This paper describes several cases of image processing technology aimed at sports video production, tactical analysis, and form analysis.
-
Kensuke HISATOMI, Masanori KANO, Kensuke IKEYA, Miwa KATAYAMA, Tomoyuk ...
Article type: Article
Session ID: ME2014-109
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper proposes a depth estimation method using a camera array that consists of an infrared projector and a pair of infrared color cameras. The projector projects an infrared dot pattern, and the infrared color cameras simultaneously capture infrared images with the pattern and color images without it. A depth map is estimated after cost-volume filtering with the Cross-based Local Multipoint Filter (CLMF) is applied to the cost volume computed from the pair of infrared images; the color images serve as guide images in this filtering. A graph cut is also applied to each scan line when deciding disparities, which improves the accuracy of depth estimation from wide-baseline stereo images. We estimated depth maps from images captured in the real world and demonstrated the effectiveness of the proposed method.
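The raw matching cost that feeds such a pipeline can be sketched for a single scan line. This is a minimal sum-of-absolute-differences (SAD) sketch with invented pixel values; it shows only the cost-minimization step that, in the paper's pipeline, would then be refined by CLMF cost-volume filtering and a per-scan-line graph cut.

```python
def sad_disparity(left_row, right_row, max_disp=4, win=1):
    """Per-pixel disparity along one scan line by minimizing the sum
    of absolute differences over a small window (the unrefined
    matching cost, before any cost-volume filtering)."""
    n = len(left_row)
    disparities = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = sum(
                abs(left_row[x + k] - right_row[x - d + k])
                for k in range(-win, win + 1)
                if 0 <= x + k < n and 0 <= x - d + k < n
            )
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# A toy scene with uniform disparity 2: right_row[i] == left_row[i+2].
left = [1, 2, 3, 4, 5, 6, 7, 8]
right = [3, 4, 5, 6, 7, 8, 0, 0]
```

In the interior of the row the minimizing disparity recovers the true shift; the filtering and graph-cut stages in the paper exist precisely to clean up the ambiguous and boundary pixels this naive cost leaves behind.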
-
Wenjuan Wang, Yihsin Ho, Kan Okubo, Norio Tagawa, Takao Nishitani
Article type: Article
Session ID: ME2014-110
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In foreground segmentation based on the Gaussian mixture model, the target to be detected is assumed to keep moving through the image sequence. Hence, when the target stands still, it is learned as background; when it begins to move again, the region where it previously stood tends to be misclassified as foreground. In the proposed method, we examine the movement of regions detected as foreground between successive frames and remove, as afterimages, the regions judged to have no movement. Its effectiveness is confirmed through experiments on real image sequences.
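The afterimage-removal idea can be sketched at the pixel level. This is a simplified stand-in for the paper's region-level check: frames are plain nested lists of intensities, the threshold is invented, and a per-pixel frame difference replaces whatever motion test the authors apply per region.

```python
def remove_afterimages(fg_mask, prev_frame, curr_frame, thresh=10):
    """Keep only foreground pixels whose intensity actually changed
    between successive frames; static detections are treated as
    afterimages left behind by the background model and dropped."""
    cleaned = []
    for row_m, row_p, row_c in zip(fg_mask, prev_frame, curr_frame):
        cleaned.append([
            m and abs(c - p) > thresh
            for m, p, c in zip(row_m, row_p, row_c)
        ])
    return cleaned
```

A foreground pixel whose intensity is unchanged between frames is exactly the "target stood here before" residue described above, so filtering it out removes the false detection while keeping genuinely moving regions.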
-
Article type: Appendix
Pages
App1-
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Appendix
Pages
App2-
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Appendix
Pages
App3-
Published: December 05, 2014
Released on J-STAGE: September 22, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS