JSAI Technical Report, SIG on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Online ISSN : 2436-4576
Print ISSN : 0918-5682
93rd Meeting (November 2021)

Release of Pre-trained Models for Japanese Natural Language Processing
趙 天雨, 沢田 慶

pp. 169-170

Abstract

We have developed two types of pre-trained models, GPT-2 and RoBERTa, trained on a public corpus of about 75 GB of text. The models and their training code have been released under licenses that permit commercial use. By fine-tuning the released models, users can accomplish a variety of Japanese natural language processing tasks with high accuracy.
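As a concrete illustration of the fine-tuning workflow mentioned in the abstract, the following minimal sketch loads one of the released GPT-2 models with the Hugging Face transformers library and runs a quick generation check before any downstream fine-tuning. The Hub identifier "rinna/japanese-gpt2-medium" is an assumption about where the model is published; it is not stated in the abstract, so substitute the identifier given in the actual release.

```python
# Minimal sketch: load a released Japanese GPT-2 model and generate text.
# The model identifier below is an assumption, not taken from the abstract.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "rinna/japanese-gpt2-medium"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Quick sanity check: continue a Japanese prompt with sampling.
inputs = tokenizer("日本語の自然言語処理は", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

From here, the same model object can be passed to a standard fine-tuning loop (for example, the transformers Trainer) on task-specific Japanese data.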

© 2021 The Japanese Society for Artificial Intelligence