Organizer: The Japanese Society for Artificial Intelligence (JSAI)
Meeting: The 102nd Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Session number: 102
Venue: Auditorium, National Institute for Japanese Language and Linguistics
Dates: 2024/11/28 - 2024/11/29
pp. 212-217
Recent advances in large language models (LLMs) have demonstrated promising capabilities in simulating and supporting human decision-making. In this study, "simulating a decision" means obtaining, in advance, the response that a specific target person would be expected to give to a decision-seeking question. Current approaches to simulating a target often require the user to supply extensive personal information, which is then combined with situational context (a scenario description or conversational context) to make decisions. However, for most targets (excluding well-known figures and fictional characters), users are rarely able to provide sufficient personal information for simulation. If we could identify the most critical information (the relevant information) required to answer each decision-seeking utterance (e.g., "Would you like to form a band with me?"), the user would only need to supply a smaller, dynamically focused subset of information for each request.

In this paper, we investigate the ability of LLMs to recognize the relevant information necessary for decision-making in conversational contexts. Using daily drama scripts as our initial dialogue dataset, we first extract pairs of decision-seeking utterances and their responses (decisions) as target utterances for evaluation. We then ask the LLM to generate requirements for the relevant information needed to respond to each decision-seeking utterance. The generated requirements are evaluated by (1) comparison with human annotation and (2) indirect evaluation through decision-making response simulation, using relevant information collected manually according to the requirements.
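The pipeline summarized above can be illustrated with a minimal sketch. Note that all function names, the pairing heuristic, and the prompt wording below are hypothetical illustrations, not the paper's actual implementation; the real extraction from drama scripts and the LLM call are replaced with simple placeholders.

```python
# Illustrative sketch of the abstract's pipeline (not the paper's code).
# Step 1: extract (decision-seeking utterance, response) pairs from a script.
# Step 2: build a prompt asking an LLM what relevant information is required.

def extract_target_pairs(script_lines):
    """Hypothetical heuristic: treat an utterance ending in '?' as
    decision-seeking and pair it with the next utterance as the decision."""
    pairs = []
    for i in range(len(script_lines) - 1):
        if script_lines[i].endswith("?"):
            pairs.append((script_lines[i], script_lines[i + 1]))
    return pairs

def build_requirement_prompt(utterance):
    """Hypothetical prompt asking an LLM to generate the requirement
    for relevant information about the target person."""
    return (
        "What personal information about the addressee is required "
        f'to decide how they would respond to: "{utterance}"?'
    )

script = [
    "Would you like to form a band with me?",
    "Sure, as long as I can play drums.",
]
pairs = extract_target_pairs(script)
prompt = build_requirement_prompt(pairs[0][0])
```

In the evaluation described above, the generated requirements would then be compared against human annotations, and used to collect relevant information for a response-simulation check.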