JSAI Technical Report, Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Online ISSN : 2436-4576
Print ISSN : 0918-5682
102nd Meeting (November 2024)

Can LLMs Identify Information Required to Respond to Decision-Seeking Utterances in Dialogue?
Su Weiwen, Naoki Yoshinaga, Masashi Toyoda

p. 212-217

Abstract

Recent advances in large language models (LLMs) have demonstrated promising capabilities in simulating and supporting human decision-making; in this study, "simulating one's decision" means obtaining, in advance, the response that a specific target would be expected to give to a question seeking a decision. Current approaches to simulating a target (a specific person) often request extensive personal information from the user and use it together with situational context (a scenario description or conversational context) to make decisions. However, for most targets (excluding well-known figures and fictional characters), users are often unable to provide sufficient personal information for simulation. If we could identify the most critical information (hereafter, relevant information) required to answer each decision-seeking utterance (e.g., "Would you like to form a band with me?"), the user would only need to dynamically provide a smaller, more focused subset of information for each request. In this paper, we investigate the ability of LLMs to recognize the relevant information necessary for decision-making in conversational contexts. Using daily-life drama scripts as our initial dialogue dataset, we first extract pairs of decision-seeking utterances and their responses (decisions) as target utterances for evaluation. We then ask the LLM to generate a requirement specifying the relevant information needed to respond to each decision-seeking utterance. The generated requirements are evaluated by (1) comparing them with human annotations and (2) indirectly, through decision-making response simulations that use relevant information collected manually according to the requirements.
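The two-step use of the LLM described in the abstract (eliciting an information requirement, then simulating the decision) can be pictured with a minimal sketch. The prompt wording, the function names, and the `llm` callable below are illustrative assumptions, not the authors' actual implementation; any chat-style LLM backend could stand in for `llm`.

# Minimal, hypothetical sketch of the two prompting steps described in the abstract:
# (1) ask an LLM which information about the target is required to respond to a
# decision-seeking utterance, and (2) simulate the decision once that information
# is supplied. The prompt wording and example data are assumptions.
from typing import Callable, List

LLM = Callable[[str], str]  # any function mapping a prompt string to a completion


def generate_information_requirement(llm: LLM, context: List[str], utterance: str) -> str:
    """Step 1: elicit the requirement for relevant information."""
    prompt = (
        "Dialogue context:\n" + "\n".join(context) + "\n\n"
        f"Decision-seeking utterance: {utterance}\n\n"
        "List the pieces of personal information about the addressee that would be "
        "required to decide how to respond. Output one item per line."
    )
    return llm(prompt)


def simulate_decision(llm: LLM, context: List[str], utterance: str, relevant_info: str) -> str:
    """Step 2: simulate the target's response given manually collected relevant information."""
    prompt = (
        "Dialogue context:\n" + "\n".join(context) + "\n\n"
        f"Decision-seeking utterance: {utterance}\n\n"
        f"Known information about the addressee:\n{relevant_info}\n\n"
        "Respond as the addressee would (accept or decline, with a brief reply)."
    )
    return llm(prompt)


# Example usage (substituting a real LLM client for `my_llm`):
#   context = ["A: I finally fixed my old guitar.", "B: Nice, it sounds great."]
#   requirement = generate_information_requirement(
#       my_llm, context, "A: Would you like to form a band with me?")
#   decision = simulate_decision(
#       my_llm, context, "A: Would you like to form a band with me?",
#       "B plays drums but is busy with exams until March.")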

© 2024 The Japanese Society for Artificial Intelligence