Host: The Japanese Society for Artificial Intelligence
Name: 34th Annual Conference, 2020
Number: 34
Location: Online
Date: June 09, 2020 - June 12, 2020
In this paper, we point out that BERT is domain dependent and propose constructing a domain-specific pre-trained model by fine-tuning. Specifically, the parameters of a DistilBERT model are initialized from a trained BERT model and then tuned on a domain-specific corpus, which allows a domain-specific DistilBERT model to be constructed efficiently. In the experiments, we build a test set for each domain in which the task is to predict a masked word in a sentence. Using these test sets, we evaluate the domain-specific DistilBERT model against the general BERT model and show the superiority of the proposed model.
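As a rough illustration of the described procedure (continuing masked-language-model training of a BERT-initialized DistilBERT on a domain corpus), the following is a minimal sketch assuming the Hugging Face transformers and datasets libraries; the checkpoint name, the corpus file "domain_corpus.txt", and all hyperparameters are illustrative placeholders rather than the authors' actual settings.

```python
# Sketch: domain-adaptive masked-LM fine-tuning of DistilBERT (assumed setup,
# not the authors' exact configuration).
from transformers import (
    AutoTokenizer,
    DistilBertForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# DistilBERT checkpoint whose weights were originally initialized from a trained BERT model.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")

# Domain-specific corpus: assumed to be a plain-text file, one sentence per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, as in the original BERT pre-training objective.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="domain-distilbert",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()  # tunes the BERT-initialized DistilBERT on the domain corpus
```

The resulting model could then be evaluated on masked-word prediction for each domain, in the spirit of the test sets described above.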