Multi-Instrument Automatic Music Transcription with Self-Attention-Based Instance Segmentation

Yu Te Wu, Berlin Chen, Li Su*

*Corresponding author of this work

Research output: Contribution to journal › Journal article › peer-review

41 citations (Scopus)

Abstract

Multi-instrument automatic music transcription (AMT) is a critical but less investigated problem in the field of music information retrieval (MIR). In addition to the difficulties faced by traditional AMT research, multi-instrument AMT requires further investigation into high-level music semantic modeling, efficient training methods for multiple attributes, and a clear problem scenario for system performance evaluation. In this article, we propose a multi-instrument AMT method that combines signal processing techniques specifying pitch saliency with novel deep learning techniques, drawing on concepts partly inspired by multi-object recognition, instance segmentation, and image-to-image translation in computer vision. The proposed method is flexible for all the sub-tasks in multi-instrument AMT, including multi-instrument note tracking, a task that has rarely been investigated before. State-of-the-art performance is also reported on the sub-task of multi-pitch streaming.

Original language: English
Article number: 9222310
Pages (from-to): 2796-2809
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio Speech and Language Processing
Volume: 28
DOIs
Publication status: Published - 2020

ASJC Scopus subject areas

  • Computer Science (Miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering
