Journal of Natural Language Processing
Online ISSN : 2185-8314
Print ISSN : 1340-7619
ISSN-L : 1340-7619
General Paper (Peer-Reviewed)
Toward Enhancing Reasoning Capabilities of LLMs: An Approach via Synthetic Logic Corpus
Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa

2025 Volume 32 Issue 2 Pages 520-571

Abstract

Large language models (LLMs) can solve a wide range of tasks, yet they still struggle with reasoning. To address this, we propose Additional Logic Training (ALT), which aims to enhance LLMs’ reasoning capabilities using program-generated logical reasoning samples. We first establish principles for designing high-quality samples by integrating symbolic logic theory and previous empirical insights. Then, based on these principles, we construct a synthetic corpus named Formal Logic Deduction Diverse (FLD×2). Finally, we empirically show that ALT on FLD×2 substantially enhances the reasoning capabilities of state-of-the-art LLMs, including LLaMA-3.1-70B. Improvements include gains of up to 30 points on logical reasoning benchmarks, up to 10 points on math and coding benchmarks, and 5 points on the benchmark suite BBH.
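To illustrate the idea of program-generated logical reasoning samples, here is a minimal, hypothetical sketch: it produces a single multi-step modus-ponens sample (premises, hypothesis, and proof chain) from randomly chosen predicate symbols. The predicate names and sample schema are invented for illustration; the actual FLD×2 corpus is built from a far richer space of formal-logic deduction rules and formula structures.

```python
import random

def generate_deduction_sample(depth=3, seed=0):
    """Generate one toy deduction sample: a fact, a chain of implication
    rules, the entailed hypothesis, and the step-by-step proof.

    NOTE: illustrative only -- not the authors' FLD x 2 generator.
    """
    rng = random.Random(seed)
    # Pick depth+1 distinct (invented) predicate symbols, e.g. P3, P7, ...
    preds = rng.sample([f"P{i}" for i in range(10)], depth + 1)
    fact = f"{preds[0]}(a)"                      # base fact about constant a
    rules = [f"{preds[i]}(x) -> {preds[i + 1]}(x)"
             for i in range(depth)]              # implication chain
    hypothesis = f"{preds[depth]}(a)"            # follows by repeated modus ponens
    proof = [fact] + [f"{preds[i + 1]}(a)" for i in range(depth)]
    return {"premises": [fact] + rules,
            "hypothesis": hypothesis,
            "proof": proof}
```

Because the generator controls the proof depth and the symbols are meaningless, such samples force a model to rely on deduction steps rather than memorized world knowledge, which is one motivation for training on synthetic logic corpora.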

© 2025 The Association for Natural Language Processing