Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
39th (2025)
Session ID : 3H4-OS-10b-04

Fine-tuning Large Language Model with Epilepsy Medical Knowledge
*Xuyang ZHAO, Qibin ZHAO, Toshihisa TANAKA
Abstract

Large language models (LLMs) have demonstrated powerful performance across a variety of fields. Fine-tuning is a common method to further improve an LLM's performance in a specific field. In the medical domain, LLMs are often fine-tuned on general medical knowledge to improve performance, but when such a model is faced with a specific disease, its responses are not always accurate and can sometimes be completely irrelevant. In this work, we focus on a specific disease, epilepsy, and fine-tune a pre-trained model on data from the epilepsy field. The epilepsy data include basic knowledge of the disease, conventional treatment plans, commonly used drugs, and precautions for daily life. In the experiments, a variety of evaluation methods are used to compare the fine-tuned model with the pre-trained model; the results show that the performance of the fine-tuned model is greatly improved.
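The domain fine-tuning the abstract describes amounts to continuing next-token-prediction training on a specialized corpus. The following is a minimal, illustrative sketch of that idea: a tiny NumPy bigram language model and a random token stream stand in for the pre-trained LLM and the epilepsy corpus. All names, sizes, and hyperparameters here are assumptions for illustration, not the authors' actual setup.

```python
import numpy as np

# Illustrative stand-ins (assumptions, not the paper's model or data):
# a random token stream plays the role of the domain (epilepsy) corpus,
# and a bigram logit table plays the role of the pre-trained model.
rng = np.random.default_rng(0)
VOCAB = 20
corpus = rng.integers(0, VOCAB, size=500)      # stand-in domain token stream
W = rng.normal(0.0, 0.1, (VOCAB, VOCAB))       # logits: row = current token

def loss_and_grad(W, tokens):
    """Mean next-token cross-entropy and its gradient w.r.t. W."""
    x, y = tokens[:-1], tokens[1:]
    logits = W[x]                               # (N, VOCAB), fancy-index copy
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)           # softmax probabilities
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    p[np.arange(len(y)), y] -= 1.0              # dL/dlogits per position
    grad = np.zeros_like(W)
    np.add.at(grad, x, p / len(y))              # scatter-add rows by token id
    return loss, grad

losses = []
for _ in range(200):                            # "fine-tuning" loop: plain GD
    loss, grad = loss_and_grad(W, corpus)
    W -= 1.0 * grad
    losses.append(loss)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In practice the same loop shape appears in LLM fine-tuning, with the bigram table replaced by transformer weights, the token stream by the curated disease corpus, and plain gradient descent by an optimizer such as AdamW.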

© 2025 The Japanese Society for Artificial Intelligence