Recently, large language models have been explored for a wide range of applications.
In this study, we examined the suitability of the GPT (Generative Pre-trained Transformer) models as dental education tools by measuring their knowledge of dentistry through the Japanese National Dental Examination. Questions from the 114th through 116th national examinations were given to GPT-3.5 and GPT-4, and the percentage of correct answers was compared against the passing standard; the percentage of correct answers was also calculated by field. GPT-3.5 failed to reach the passing standard in any area, while GPT-4 reached it in the required and A areas but not in areas B and C. In addition, both models scored well on general medicine questions but poorly on dentistry-specific questions. These results suggest that GPT-3.5 and GPT-4 are not yet suitable as dental education models.