Sponsor: The Japanese Society for Artificial Intelligence (JSAI)
Meeting: 100th Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Session no.: 100
Venue: Auditorium, National Institute for Japanese Language and Linguistics
Dates: 2024/02/29 - 2024/03/01
pp. 13-19
Large language models (LLMs), such as ChatGPT, have risen to prominence in text summarization tasks, primarily due to the advent of in-context learning. This paper examines how in-context learning steers the outputs of LLMs under different demonstration configurations. Our pivotal findings reveal that ChatGPT adapts to target summarization tasks better when demonstrations provide paired texts and summaries than when either is provided in isolation. Furthermore, the structured presentation of these pairs proves more influential than their precise content alignment. However, there are observable limitations: increasing the number of demonstrations yields diminishing returns, and the adaptability gains shrink on more intricate news texts compared with simpler dialogues. This study comprehensively explains the nuances of in-context learning in text summarization, highlighting its merits and demerits for future researchers.
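As a rough illustration of the demonstration configurations contrasted above, the sketch below assembles few-shot summarization prompts in the "paired" and "isolated" conditions. The helper name, prompt wording, and toy demonstrations are hypothetical; the paper's actual prompt templates are not reproduced here.

```python
def build_prompt(demos, target_text, paired=True):
    """Assemble a few-shot summarization prompt (illustrative only).

    demos: list of (text, summary) tuples used as in-context demonstrations.
    paired: if True, each demonstration shows a text together with its
            summary; if False, texts and summaries are listed separately,
            mimicking the "provided in isolation" condition.
    """
    parts = []
    if paired:
        for text, summary in demos:
            parts.append(f"Text: {text}\nSummary: {summary}")
    else:
        parts.extend(f"Text: {text}" for text, _ in demos)
        parts.extend(f"Summary: {summary}" for _, summary in demos)
    # The target text is appended last, leaving the summary for the model.
    parts.append(f"Text: {target_text}\nSummary:")
    return "\n\n".join(parts)

demos = [("The cat sat on the mat.", "A cat sits on a mat.")]
print(build_prompt(demos, "Rain fell all day in Tokyo."))
```

The resulting string would be sent to the model as a single prompt; in the paired condition each demonstration exposes the text-to-summary mapping directly, which is the configuration the paper finds most effective.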