ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
41.05 Multi-media Storage (MMS) / Consumer Electronics (CE) / Human Information (HI) / Media Engineering (ME) / Artistic Image Technology (AIT)
Session ID : MMS2017-10
A Note on Accurate Extraction of Concept Subsumption Relationships Using Tagged Images
*Shota HAMANO, Takahiro OGAWA, Miki HASEYAMA
Abstract
This paper presents a method for the accurate extraction of concept relationships using tagged images. Previous methods extract concept relationships from visual features, textual features, or both, extracted from tagged images. In our previously proposed method, visual similarity and textual similarity are calculated based on kernel density estimation and word2vec, respectively. Although kernel density estimation considers the distributions of the visual features, there is still room to improve the accuracy of concept relationship extraction. In this paper, we utilize locality-constrained linear coding (LLC), which is robust to visual variations, to achieve accurate extraction of concept relationships. The proposed method also utilizes GloVe, which reportedly represents concepts more effectively than word2vec in the field of natural language processing. Experimental results show that LLC and GloVe contribute to effective representation of concepts and improve the accuracy of the subsequent extraction of concept relationships.
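As context for readers unfamiliar with LLC: the standard locality-constrained linear coding step (Wang et al.'s approximate solver, not necessarily the authors' exact implementation) encodes a feature vector using only its k nearest codebook entries, solving a small regularized least-squares problem with a sum-to-one constraint. A minimal NumPy sketch, assuming a Euclidean nearest-neighbor search and the usual trace-based regularizer:

```python
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """Approximate LLC code for feature x (shape D) over codebook (shape M x D)."""
    # Select the k codewords nearest to x (locality constraint).
    d = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(d)[:k]
    B = codebook[idx]

    # Solve the local least-squares system for the k codewords.
    z = B - x                              # shift codewords to the origin at x
    C = z @ z.T                            # local covariance (k x k)
    C += beta * np.trace(C) * np.eye(k)    # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                           # enforce the sum-to-one constraint

    # Scatter the k weights back into a sparse full-length code.
    codes = np.zeros(len(codebook))
    codes[idx] = w
    return codes
```

Because each code has at most k nonzero entries and the weights sum to one, nearby features receive similar codes, which is the property the abstract credits for robustness to visual variations.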
© 2017 The Institute of Image Information and Television Engineers