JSAI Type 2 SIG Technical Reports (人工知能学会第二種研究会資料)
Online ISSN : 2436-5556
Simulating Perception With LLMs as Underpinnings for More Controllable Knowledge Acquisition
Rafal Rzepka, Kei Okada
Research Report / Technical Report (Free Access)

2024, Volume 2024, Issue AGI-028, p. 07-

Abstract

In this paper we present experimental results for our idea of using Large Language Models as perception simulators. We utilize our Semantic Primes Prompts dataset, which contains 49 queries about perceptive values regarding the subject and object of simple Japanese sentences. We show that LLMs in a zero-shot scenario do not yield satisfactory results, but that after fine-tuning, scores improve, often approaching the level of human annotators, depending on the perception category. For example, we discover that the tested models, both proprietary (gpt-4-mini) and open-source (OpenCALM-8B), struggle with estimating motion, touch, frequency of events, and quantifiers. After reporting our findings, we discuss the potential of our approach and possible next steps for our research.
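As an illustration only, and not the authors' actual prompts or code, the following minimal Python sketch shows what a zero-shot perception query of the kind the abstract describes might look like. The prompt wording, the 1-5 rating scale, the example sentence, and the API model identifier are all assumptions; the abstract names the proprietary model "gpt-4-mini", and "gpt-4o-mini" is used here only so the call is runnable.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical query in the spirit of the Semantic Primes Prompts
    # dataset: ask the model to rate one perceptive value (here, MOTION)
    # for the subject of a simple Japanese sentence. The wording and the
    # 1-5 scale are assumptions, not taken from the paper.
    sentence = "猫が魚を食べる"  # "A cat eats a fish"
    prompt = (
        f"文: {sentence}\n"
        "この文の主語はどの程度の動き(MOTION)を示しますか。"
        "1(全くない)から5(非常に強い)の数字一つで答えてください。"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed identifier; see note above
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for evaluation
    )
    print(response.choices[0].message.content)  # e.g. "4"

In a setup like this, the numeric answer could then be compared against human annotations per perception category, which is the kind of comparison the abstract reports for both zero-shot and fine-tuned models.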

© 2024 The Authors