Proceedings of the Annual Conference of JSAI
Online ISSN: 2758-7347
36th (2022)
Session ID: 2S5-IS-2c-02

Symbolic piano music understanding from large-scale pre-training
*Yingfeng FU, Yusuke TANIMURA, Hidemoto NAKADA
Keywords: pre-training, music understanding, NLP

Abstract

Pre-training driven by vast amounts of data has shown great power in natural language understanding. However, existing pre-training approaches for symbolic music are not general enough to tackle all tasks in music information retrieval. To address this insufficiency and enable comparison with existing work, we employed a BERT-like masked language pre-training approach to train a stacked Music Transformer on polyphonic piano MIDI files from the MAESTRO dataset. We then fine-tuned the pre-trained model on several symbolic music understanding tasks. In our current work in progress, we have completed several note-level tasks, including next-token prediction, melody extraction, velocity prediction, and chord recognition, and compared our model with previous work.
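The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the BERT-like masked-token pre-training objective it describes. Every concrete choice here (the MusicEncoder class, vocabulary size, mask id, model dimensions, the random batch) is an illustrative assumption, not the authors' Music Transformer implementation.

import torch
import torch.nn as nn

VOCAB_SIZE = 512  # assumed size of the MIDI event-token vocabulary
MASK_ID = 1       # assumed id of the [MASK] token
SEQ_LEN = 128     # assumed training sequence length

class MusicEncoder(nn.Module):
    """Toy stand-in for a stacked Transformer encoder over music tokens."""
    def __init__(self, d_model=256, n_heads=4, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        self.pos = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)  # per-position token logits

    def forward(self, tokens):
        pos = torch.arange(tokens.size(1), device=tokens.device)
        h = self.embed(tokens) + self.pos(pos)
        return self.head(self.encoder(h))

def mask_tokens(tokens, p=0.15):
    """Corrupt a random 15% of positions with [MASK]; only those
    positions contribute to the loss (ignore_index=-100), as in BERT."""
    mask = torch.rand(tokens.shape) < p
    labels = tokens.clone()
    labels[~mask] = -100
    corrupted = tokens.clone()
    corrupted[mask] = MASK_ID
    return corrupted, labels

model = MusicEncoder()
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)
tokens = torch.randint(2, VOCAB_SIZE, (8, SEQ_LEN))  # stand-in batch
inp, labels = mask_tokens(tokens)
logits = model(inp)
loss = loss_fn(logits.view(-1, VOCAB_SIZE), labels.view(-1))
loss.backward()

For fine-tuning on the note-level tasks listed above, one would replace the token-prediction head with a task-specific head (e.g., per-note velocity or chord labels) and continue training from the pre-trained encoder weights.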

© 2022 The Japanese Society for Artificial Intelligence