Host: The Japanese Society for Artificial Intelligence
Name : The 38th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 38
Location : [in Japanese]
Date : May 28, 2024 - May 31, 2024
For data-driven materials design, it is important to construct databases by extracting experimental results from the literature. The challenge is to speed up the customization of machine learning models for information extraction. In this study, we focused on large language models (LLMs) such as GPT-4, which can perform various tasks without additional training data. For evaluation, we used the ChEMU2020 dataset for extracting information from patents related to chemical experiments. GPT-4 achieved a high F1 score of 0.61 even in a zero-shot setting, but extraction that requires domain knowledge, such as "catalyst", remained difficult. Fine-tuning SciBERT, a model specialized for scientific papers, with low-rank adaptation improved the F1 score to 0.71 even with a small amount of training data. These results suggest that fine-tuning domain-specific models on a small amount of training data produced by correcting LLM outputs is an effective approach to speeding up model development.
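The low-rank adaptation step described above can be sketched roughly as follows. This is an illustrative Python example, not the authors' code: it assumes the Hugging Face transformers and peft libraries, the public SciBERT checkpoint allenai/scibert_scivocab_uncased, and a placeholder BIO label set rather than the full ChEMU2020 annotation scheme.

# Minimal sketch of LoRA fine-tuning SciBERT for token-level extraction.
# Assumptions: transformers + peft are installed; the label list below is
# a placeholder, not the actual ChEMU2020 label scheme.
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

labels = ["O", "B-REAGENT_CATALYST", "I-REAGENT_CATALYST",
          "B-REACTION_PRODUCT", "I-REACTION_PRODUCT"]  # illustrative subset

model_name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels)
)

# LoRA inserts small low-rank adapter matrices into the attention projections
# and trains only those, keeping the SciBERT backbone frozen, so a small
# amount of corrected training data is enough to adapt the model.
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,                # rank of the low-rank update
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

The wrapped model can then be trained with a standard token-classification loop or the transformers Trainer on the small, manually corrected dataset derived from the LLM outputs.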