Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 3Xin2-57

Data extraction method from patents with small amount of training data for data-driven materials design
*Masafumi TSUYUKI, Shotaro AGATSUMA, Kazuo MUTO
Abstract

For data-driven materials design, it is important to construct databases by extracting experimental results from the literature. The challenge is to speed up the customization of machine learning models for information extraction. In this study, we focused on large language models (LLMs) such as GPT-4, which can perform various tasks without additional training data. For evaluation, we used the ChEMU2020 dataset for extracting information from patents related to chemical experiments. GPT-4 achieved a high F1 score of 0.61 even in a zero-shot setting, but it struggled with extractions requiring domain knowledge, such as "catalyst." Fine-tuning SciBERT, a model specialized for scientific text, with low-rank adaptation (LoRA) improved the F1 score to 0.71 even with a small amount of training data. These results suggest that fine-tuning domain-specific models on a small training set produced by correcting LLM outputs is an effective approach to speeding up model development.
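As a rough illustration of the fine-tuning step, the sketch below shows how SciBERT could be adapted with LoRA for token-level extraction using the Hugging Face transformers and peft libraries. This is a minimal example under assumed settings; the label set, model identifier, and hyperparameters are placeholders, not the configuration used in this work.

# Minimal sketch: LoRA fine-tuning of SciBERT for token-level extraction.
# Labels and hyperparameters are illustrative, not the authors' settings.
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import LoraConfig, TaskType, get_peft_model

# Example BIO-style entity tags (a placeholder subset inspired by ChEMU-style labels)
labels = ["O", "B-REACTION_PRODUCT", "I-REACTION_PRODUCT", "B-CATALYST", "I-CATALYST"]

model_name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))

# LoRA: train small low-rank adapters on the attention projections
# while keeping the original SciBERT weights frozen.
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters is trainable

With only the adapters (and the classification head) trainable, a small corrected set of LLM outputs can serve as the training data for a standard token-classification training loop.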

© 2024 The Japanese Society for Artificial Intelligence