JSAI Technical Report, Type 2 SIG
Online ISSN : 2436-5556
Simulating Perception With LLMs as Underpinnings for More Controllable Knowledge Acquisition
Rafal RZEPKA, Kei OKADA
Research Report / Technical Report

2024, Volume 2024, Issue AGI-028, Pages 07-

Abstract

In this paper we present experimental results for our idea of using Large Language Models as perception simulators. We utilize our Semantic Primes Prompts dataset, containing 49 queries about perceptive values regarding the subject and object in simple Japanese sentences. We show that LLMs in a zero-shot scenario do not yield satisfactory results, but after finetuning, scores improve, often approaching the level of human annotators, depending on the perception category. For example, we find that the tested models, both proprietary (gpt-4-mini) and open-source (OpenCALM-8B), struggle with estimating motion, touch, frequency of events, and quantifiers. After reporting our findings, we discuss the possibilities of our approach and possible next steps for our research.
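As an illustration of the zero-shot setup the abstract describes, the sketch below composes a perception query about a simple Japanese sentence. The prompt wording, the perception categories, and the `query_llm` stub are illustrative assumptions, not the actual Semantic Primes Prompts dataset or the prompts used in the paper.

```python
# Minimal sketch of a zero-shot perceptive-value query (illustrative only;
# the real Semantic Primes Prompts wording is not reproduced here).

# Hypothetical perception categories, loosely following those named in the abstract.
CATEGORIES = ["motion", "touch", "frequency", "quantifier"]

def build_prompt(sentence: str, category: str) -> str:
    """Compose a zero-shot query about one perceptive value of a sentence."""
    return (
        f"Sentence: {sentence}\n"
        f"On a scale of 0-5, how strongly does this sentence involve "
        f"the perception category '{category}'? Answer with a single number."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for an actual model call (e.g. a finetuned OpenCALM-8B)."""
    raise NotImplementedError

# Example: ask about motion for the sentence "A cat runs."
prompt = build_prompt("猫が走る。", "motion")
print(prompt)
```

In a zero-shot run the prompt would be sent to the model as-is; finetuning, as reported in the paper, pairs such queries with human-annotated scores instead.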

© 2024 Authors