An efficient emotion detection scheme for popular music

Chia Hung Yeh*, Hung Hsuan Lin, Hsuan Ting Chang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

18 Citations (Scopus)

Abstract

With the rapid growth of multimedia information, efficiently managing data in large multimedia databases has become a crucial issue. In this paper, a framework for music emotion detection is proposed. First, Thayer's two-dimensional model of the music emotion space is employed as the emotion model. Second, three features, namely intensity, rhythm regularity, and tempo, are extracted to describe each music clip. These features are then used to train Gaussian Mixture Models (GMMs). Finally, the likelihood ratios of test music clips against the GMMs are computed for emotion identification. Experimental results show that both the average recall and precision reach 80% on a database of 145 music clips.
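A minimal sketch, not the authors' implementation, of the GMM-based classification step described in the abstract. It assumes each music clip has already been summarized as a 3-D feature vector (intensity, rhythm regularity, tempo) and labeled with one of four Thayer-model quadrants; the quadrant names, component count, and helper names below are illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

# Assumed quadrant labels for Thayer's 2-D (valence/arousal) emotion model.
QUADRANTS = ["exuberance", "anxiety", "contentment", "depression"]

def train_emotion_gmms(features, labels, n_components=2, seed=0):
    """Fit one GMM per emotion class on its training feature vectors.

    features: (N, 3) array of per-clip features; labels: (N,) array of quadrant names.
    """
    models = {}
    for q in QUADRANTS:
        X = features[labels == q]
        models[q] = GaussianMixture(
            n_components=n_components, covariance_type="full", random_state=seed
        ).fit(X)
    return models

def classify_clip(models, x):
    """Assign the emotion whose GMM yields the highest log-likelihood for clip features x."""
    x = np.asarray(x).reshape(1, -1)
    scores = {q: gmm.score_samples(x)[0] for q, gmm in models.items()}
    return max(scores, key=scores.get)

Comparing per-class log-likelihoods in this way is equivalent to taking likelihood ratios between the class models, which is the decision rule the abstract describes.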

Original language: English
Title of host publication: 2009 IEEE International Symposium on Circuits and Systems, ISCAS 2009
Pages: 1799-1802
Number of pages: 4
DOIs
Publication status: Published - 2009
Externally published: Yes
Event: 2009 IEEE International Symposium on Circuits and Systems, ISCAS 2009 - Taipei, Taiwan
Duration: 2009 May 24 - 2009 May 27

Publication series

Name: Proceedings - IEEE International Symposium on Circuits and Systems
ISSN (Print): 0271-4310

Other

Other: 2009 IEEE International Symposium on Circuits and Systems, ISCAS 2009
Country/Territory: Taiwan
City: Taipei
Period: 2009/05/24 - 2009/05/27

Keywords

  • Gaussian mixture model
  • Music emotion detection
  • Thayer's model

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
