Although Large Language Models (LLMs) are powerful for information extraction, their outputs often contain errors and noise, hindering their use in professional applications that demand high reliability. We are developing a knowledge extraction support tool that integrates human oversight with LLM capabilities. The core of our approach is to separate and then coordinate the LLM's flexible "semantic interpretation" with a rigid "syntactic analysis." First, the LLM interprets the text to generate candidate "extraction patterns" capturing key semantic relationships. Second, the user interactively selects the optimal patterns. Finally, a syntactic parser extracts information with high precision by strictly adhering to the selected patterns and the text's syntactic structure. This tool facilitates the extraction of verifiable and strictly text-grounded knowledge.
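As an illustrative sketch of this three-stage flow (not the authors' implementation), the snippet below stubs the LLM's pattern proposal and the user's selection step, and uses a regex matcher as a stand-in for a real syntactic parser; the function names and the pattern format are hypothetical.

```python
# Minimal sketch of the three-stage pipeline described above.
# `propose_patterns` stands in for an LLM call; the SUBJ-REL-OBJ
# pattern format is a simplifying assumption.
import re
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    regex: str  # stand-in for a dependency-tree pattern

def propose_patterns(text: str) -> list[Pattern]:
    """Stage 1 (stub): an LLM would read `text` and return
    candidate extraction patterns; here we return fixed examples."""
    return [
        Pattern("acquisition", r"(\w+) acquired (\w+)"),
        Pattern("founding", r"(\w+) founded (\w+)"),
    ]

def select_patterns(candidates: list[Pattern]) -> list[Pattern]:
    """Stage 2 (stub): the user would review candidates in a UI;
    here we keep them all."""
    return candidates

def extract(text: str, patterns: list[Pattern]) -> list[tuple]:
    """Stage 3: deterministic matching. A real system would walk a
    syntactic parse; a regex keeps the sketch self-contained."""
    triples = []
    for p in patterns:
        for m in re.finditer(p.regex, text):
            triples.append((m.group(1), p.name, m.group(2)))
    return triples

text = "Alpha acquired Beta. Gamma founded Delta."
print(extract(text, select_patterns(propose_patterns(text))))
# [('Alpha', 'acquisition', 'Beta'), ('Gamma', 'founding', 'Delta')]
```

Because the final stage is deterministic, every extracted triple can be traced back to a specific pattern match in the source text, which is what makes the output verifiable.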
There have been attempts such as CodeWorkout and CodeNet to assign semantic labels to program code. In this study, inspired by these works, we propose a knowledge graph that helps characterize the achievement level of learners' programs within a programming learning tool. Using this knowledge graph, we attempt to organize the difficulty of problems posed in programming classes and the stumbling points of learners, based on the problems and their model answers.
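To make the idea concrete, here is a minimal, hypothetical sketch of such a knowledge graph as a set of triples; the node labels, predicates, and query are invented for illustration and do not come from the study.

```python
# Illustrative sketch: a tiny triple-based KG linking problems,
# the concepts their model answers exercise, and learner outcomes.
triples = {
    ("problem:fizzbuzz", "tests_concept", "loops"),
    ("problem:fizzbuzz", "tests_concept", "conditionals"),
    ("problem:fib_memo", "tests_concept", "recursion"),
    ("problem:fib_memo", "harder_than", "problem:fizzbuzz"),
    ("learner:42", "failed", "problem:fib_memo"),
}

def objects(subj: str, pred: str) -> set[str]:
    """All o such that (subj, pred, o) is in the graph."""
    return {o for s, p, o in triples if s == subj and p == pred}

# A stumbling point can be read off as the concepts behind a
# failed problem:
for prob in objects("learner:42", "failed"):
    print(prob, "->", objects(prob, "tests_concept"))
# problem:fib_memo -> {'recursion'}
```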
Many entity linking methods rely on relationships and contextual information beyond the class hierarchy. However, because the quantity and quality of such information vary significantly across knowledge bases, it is not always readily available. In contrast, the class hierarchy structure is core information within knowledge bases, particularly knowledge graphs. As a result, the variation in its quantity and quality across different knowledge bases is relatively small. This research proposes an entity linking method that leverages class hierarchies, aiming to provide a baseline approach for cases where contextual information is limited. The evaluation was conducted using the DaMuEL Japanese dataset. The results confirmed that the proposed method improves linking accuracy compared to conventional approaches, especially in cases where multiple candidate entities are present.
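The following is a minimal sketch of how a class hierarchy alone can rank candidate entities. The toy hierarchy and the ancestor-overlap score are our assumptions; the paper's actual scoring function may differ.

```python
# Sketch of class-hierarchy-based candidate ranking.
parent = {  # child -> parent in a toy class hierarchy
    "City": "Place", "Country": "Place", "Place": "Entity",
    "Person": "Entity", "Politician": "Person",
}

def ancestors(cls: str) -> set[str]:
    """The class itself plus all of its superclasses."""
    out = {cls}
    while cls in parent:
        cls = parent[cls]
        out.add(cls)
    return out

def score(mention_cls: str, candidate_cls: str) -> float:
    """Jaccard overlap of ancestor sets: higher means the candidate's
    class sits closer to the class expected from the mention."""
    a, b = ancestors(mention_cls), ancestors(candidate_cls)
    return len(a & b) / len(a | b)

# Disambiguating a mention expected to denote a City:
candidates = {"Paris (city)": "City", "Paris (person)": "Person"}
best = max(candidates, key=lambda e: score("City", candidates[e]))
print(best)  # Paris (city)
```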
As LLM-based agents become more integrated into real-world workflows, ensuring accurate task execution through collaboration is a key challenge. To address this, applying Retrieval-Augmented Generation (RAG) with knowledge graphs (KGs) is particularly promising. However, existing KGs are often incomplete or unreliable in specific task domains, limiting their effectiveness in supporting accurate decision-making. We tackle this limitation with a method built on three key components: (1) autonomous extraction of structured, domain-specific knowledge from inter-agent discussion logs; (2) integration of this knowledge into a shared, evolving KG; and (3) autonomous refinement of the KG by LLM agents during task execution, ensuring consistency and enabling real-time decision-making. Preliminary results show that our method improves relation accuracy from 79% to 97%, highlighting its effectiveness. Future work will explore how refined knowledge enhances task performance.
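The sketch below illustrates the three components with stubbed agents; the log format, predicate names, and the "flag conflicting objects" consistency rule are our assumptions, not the paper's method.

```python
# Sketch of the three-step loop: extract triples from discussion
# logs, merge them into a shared KG, then let a verifier agent
# flag inconsistencies. Both agents are stand-ins for LLM calls.
from collections import defaultdict

def extraction_agent(log_line: str) -> list[tuple]:
    """Stub for an LLM agent turning free text into triples."""
    if " depends on " in log_line:
        s, o = log_line.split(" depends on ")
        return [(s.strip(), "depends_on", o.strip().rstrip("."))]
    return []

kg = defaultdict(set)  # (subject, predicate) -> set of objects

def integrate(triples):
    """Merge newly extracted triples into the shared, evolving KG."""
    for s, p, o in triples:
        kg[(s, p)].add(o)

def refinement_agent():
    """Stub verifier: flags (s, p) pairs with multiple objects for
    re-discussion instead of silently keeping all of them."""
    return [key for key, objs in kg.items() if len(objs) > 1]

logs = ["deploy depends on build.", "deploy depends on tests."]
for line in logs:
    integrate(extraction_agent(line))
print(dict(kg), refinement_agent())
```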
In this paper, we propose a causal relationship extraction method that uses multiple AI agents and large language models (LLMs) as the underlying technology for causal knowledge graph generation. Conventional rule-based and statistical methods struggle to extract causal relationships accurately due to the inherent ambiguity and complexity of natural language sentence structures. The system proposed in this study employs a multi-agent architecture that integrates multiple LLMs to address this issue. Specifically, it improves the reliability of extraction through a multi-stage process in which one LLM verifies the causal relationships extracted by another.
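Below is a minimal sketch of the extract-then-verify stage, with both LLMs replaced by keyword-based stubs so the pipeline shape is visible; real calls to language models would take their place.

```python
# One (stubbed) LLM proposes cause-effect pairs; a second
# independently accepts or rejects each before it enters the KG.
def extractor_llm(sentence: str) -> list[tuple[str, str]]:
    """Stub: propose (cause, effect) pairs from a cue word."""
    if " causes " in sentence:
        cause, effect = sentence.split(" causes ", 1)
        return [(cause.strip(), effect.strip().rstrip("."))]
    return []

def verifier_llm(sentence: str, pair: tuple[str, str]) -> bool:
    """Stub: a second model re-reads the sentence and confirms
    the pair is actually asserted as causal."""
    cause, effect = pair
    return cause in sentence and effect in sentence

def extract_causal(sentence: str) -> list[tuple[str, str]]:
    """Keep only pairs that the verifier confirms."""
    return [p for p in extractor_llm(sentence)
            if verifier_llm(sentence, p)]

print(extract_causal("Smoking causes lung disease."))
# [('Smoking', 'lung disease')]
```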