The success of Language Models (LMs) on a variety of NLP tasks has prompted the design and analysis of natural language benchmarks to evaluate their fitness for particular applications. In this work, we focus on the NLP task of acceptability rating, in which a model must rate the ‘goodness’ of a sequence of tokens. We find that the datasets currently used to benchmark LM sentence acceptability fail to capture the distribution of naturally occurring written data. Moreover, we find that their bias toward short (5-8 word) sentences is a strong confounding factor that inflates LMs' measured performance. We then introduce seven datasets collected from the NLP literature that closely follow the sentence length distribution of naturally occurring written text. In our experiments, when sentence length is controlled by adjusting each benchmark's length distribution to match naturally occurring data, performance on the currently used datasets drops by up to 48 points in MCC. We conclude with a discussion of the implications for current applications and recommendations for improving the commonly used acceptability benchmarking datasets.
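As a rough illustration of the length-controlled evaluation described above, the sketch below resamples an acceptability benchmark so that its sentence-length distribution matches a reference corpus, and scores predictions with the Matthews Correlation Coefficient (MCC). This is a minimal sketch, not the paper's implementation: the function names, the whitespace-token length proxy, and the bucketed resampling-with-replacement scheme are assumptions made for illustration; only the MCC metric itself (computed here with scikit-learn's `matthews_corrcoef`) is taken from the abstract.

```python
# Illustrative sketch only (not the paper's code): control for sentence length
# by resampling a benchmark to follow a reference corpus's length distribution,
# then evaluate binary acceptability predictions with MCC.
import random
from collections import Counter

from sklearn.metrics import matthews_corrcoef


def length_bucket(sentence: str) -> int:
    """Proxy for sentence length: whitespace token count (an assumption)."""
    return len(sentence.split())


def resample_to_match(benchmark, reference_corpus, sample_size, seed=0):
    """Draw benchmark items so length-bucket frequencies follow the reference corpus.

    benchmark: list of (sentence, label) pairs with 0/1 acceptability labels
    reference_corpus: list of sentences whose length distribution we target
    """
    rng = random.Random(seed)
    ref_counts = Counter(length_bucket(s) for s in reference_corpus)
    total = sum(ref_counts.values())

    # Group benchmark items by their length bucket.
    by_bucket = {}
    for sent, label in benchmark:
        by_bucket.setdefault(length_bucket(sent), []).append((sent, label))

    resampled = []
    for bucket, count in ref_counts.items():
        pool = by_bucket.get(bucket, [])
        if not pool:
            continue  # the benchmark has no sentences of this length
        k = round(sample_size * count / total)
        # Sample with replacement so sparse buckets can still be filled.
        resampled.extend(rng.choices(pool, k=k))
    return resampled


def evaluate(gold_labels, predicted_labels):
    """MCC lies in [-1, 1]; a '48 point' drop corresponds to a 0.48 decrease."""
    return matthews_corrcoef(gold_labels, predicted_labels)
```

In this reading, the gap between MCC on the original benchmark and MCC on the length-matched resample is what the reported drop of up to 48 points quantifies.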