With the rapid growth of multimedia information, the ability to efficiently manage large multimedia databases has become a crucial issue. In this paper, a framework for music emotion detection is proposed. First, Thayer's two-dimensional model of the music emotion space is employed as our emotion model. Second, three features, namely intensity, rhythm regularity, and tempo, are extracted to describe each music clip. Then the extracted features are used to train Gaussian Mixture Models (GMMs). Finally, the likelihood ratios of test music clips against the GMMs are computed for emotion identification. Experimental results show that both the average recall and precision reach 80% on a database comprising 145 music clips.
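The train-and-classify pipeline described above can be sketched in simplified form. The following is a minimal illustration, not the authors' implementation: each emotion class is modeled by a single full-covariance Gaussian (a one-component GMM) fitted to synthetic three-dimensional feature vectors standing in for the extracted intensity, rhythm regularity, and tempo, and a test clip is assigned to the class whose model yields the highest log-likelihood. The class names and feature values are hypothetical.

```python
import numpy as np

def fit_gaussian(X):
    # One-component "GMM": mean and (regularized) covariance of the features
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    # Log-density of x under a multivariate Gaussian N(mu, cov)
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(cov)
    logdet = np.linalg.slogdet(cov)[1]
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ inv @ diff)

def classify(x, models):
    # Pick the emotion class whose model assigns the highest likelihood
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

# Synthetic stand-ins for (intensity, rhythm regularity, tempo) features
rng = np.random.default_rng(0)
calm = rng.normal([0.2, 0.8, 0.3], 0.05, size=(50, 3))
excited = rng.normal([0.9, 0.4, 0.9], 0.05, size=(50, 3))
models = {"calm": fit_gaussian(calm), "excited": fit_gaussian(excited)}

print(classify(np.array([0.85, 0.45, 0.88]), models))  # excited
```

In practice a multi-component GMM (e.g. fitted by expectation-maximization) would replace the single Gaussian per class, but the decision rule, comparing likelihoods of a clip's features under each class model, is the same.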