Article ID: 2024EDP7234
The pervasive dissemination of multimodal fake news, which combines textual and visual elements, significantly misleads the public. Previous studies have addressed this issue through multimodal fusion for fake news detection, but they focus on fusion while neglecting fine-grained data analysis: the effective information contained in unimodal data is not fully mined, leaving the data underutilized. Moreover, as fake news becomes increasingly difficult to distinguish from genuine news, the lack of background knowledge further hinders the detection task. To address these limitations, we introduce FKGFND (Fine-grained Knowledge Graph enhanced Multi-modal Fake News Detection), a novel framework that makes full use of unimodal data through detailed data modeling and introduces external knowledge to provide the model with background knowledge and a basis for discrimination, thereby realizing fine-grained fake news detection. First, we model image information at a fine-grained level, extracting embedded text and character details and integrating background knowledge about the characters. Concurrently, we construct a ternary knowledge graph to optimize the use of the extracted data, with nodes representing embedded text, character names, and background information. We then strengthen multimodal data integration by enriching it with this refined unimodal information. To support accurate multimodal fake news detection, we also construct FKGFND-data, a dataset built on the fine-grained knowledge graph. Experimental evaluations indicate that FKGFND outperforms existing approaches in both multimodal and unimodal fake news detection tasks.