JSAI SIG Technical Report, Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Online ISSN: 2436-4576
Print ISSN: 0918-5682
100th Meeting (Feb. 2024)

ChatGPT Summarization: A Deep Dive into In-Context Learning Efficacy
袁 培傑, 大野 正樹, 橋本 泰一

pp. 13-19

Abstract

Large language models (LLMs), such as ChatGPT, have risen to prominence in text summarization tasks, primarily due to the advent of in-context learning. This paper examines how in-context learning steers the outputs of LLMs under different demonstration configurations. Our pivotal findings reveal that ChatGPT adapts to a target summarization task better when provided with paired texts and summaries than when texts or summaries are provided in isolation. Furthermore, the structured presentation of these pairs proves more influential than their precise content alignment. There are, however, observable limitations: increasing the number of demonstrations yields diminishing returns, and the gain in adaptability declines on more intricate news texts compared with simpler dialogues. This study comprehensively explains the nuances of in-context learning in text summarization, highlighting its merits and demerits for future researchers.
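The paired-demonstration setup the abstract contrasts with isolated inputs can be sketched as a simple few-shot prompt builder. This is a minimal illustration under assumed conventions: the `Text:`/`Summary:` template, function name, and example data are hypothetical, not the paper's actual prompts.

```python
def build_fewshot_prompt(demonstrations, target_text):
    """Concatenate (text, summary) demonstration pairs ahead of the
    target text, leaving the final summary slot empty for the model."""
    parts = []
    for text, summary in demonstrations:
        parts.append(f"Text: {text}\nSummary: {summary}\n")
    # The target document gets the same structure, minus the answer.
    parts.append(f"Text: {target_text}\nSummary:")
    return "\n".join(parts)

# One paired demonstration followed by the target document.
demos = [
    ("The cat sat on the mat all afternoon, dozing in the sun.",
     "A cat rested on a mat all afternoon."),
]
prompt = build_fewshot_prompt(demos, "Stocks rose sharply on Friday.")
```

Varying this configuration (pairs vs. isolated texts or summaries, and the number of demonstrations) corresponds to the conditions the abstract compares.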

© 2024 The Japanese Society for Artificial Intelligence (JSAI)