TY - GEN
T1 - Novel word embedding and translation-based language modeling for extractive speech summarization
AU - Chen, Kuan-Yu
AU - Liu, Shih-Hung
AU - Chen, Berlin
AU - Wang, Hsin-Min
AU - Chen, Hsin-Hsi
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/10/1
Y1 - 2016/10/1
N2 - Word embedding methods learn continuous distributed vector representations of words with neural networks; these representations can capture semantic and/or syntactic cues and, in turn, be used to induce similarity measures among words, sentences, and documents in context. Well-known methods can be categorized as prediction-based or count-based according to their training objectives and model architectures. Their pros and cons have been analyzed and evaluated extensively in recent studies, but relatively little work has continued this line of research to develop an enhanced learning method that brings together the advantages of the two model families. In addition, the interpretation of the learned word representations remains somewhat opaque. Motivated by these observations, this paper presents a novel method for learning word representations that not only inherits the advantages of classic word embedding methods but also offers a clearer and more rigorous interpretation of the learned representations. Building on the proposed word embedding method, we further formulate a translation-based language modeling framework for the extractive speech summarization task. A series of empirical evaluations demonstrates the effectiveness of the proposed word representation learning and language modeling techniques in extractive speech summarization.
AB - Word embedding methods learn continuous distributed vector representations of words with neural networks; these representations can capture semantic and/or syntactic cues and, in turn, be used to induce similarity measures among words, sentences, and documents in context. Well-known methods can be categorized as prediction-based or count-based according to their training objectives and model architectures. Their pros and cons have been analyzed and evaluated extensively in recent studies, but relatively little work has continued this line of research to develop an enhanced learning method that brings together the advantages of the two model families. In addition, the interpretation of the learned word representations remains somewhat opaque. Motivated by these observations, this paper presents a novel method for learning word representations that not only inherits the advantages of classic word embedding methods but also offers a clearer and more rigorous interpretation of the learned representations. Building on the proposed word embedding method, we further formulate a translation-based language modeling framework for the extractive speech summarization task. A series of empirical evaluations demonstrates the effectiveness of the proposed word representation learning and language modeling techniques in extractive speech summarization.
KW - Interpretation
KW - Language model
KW - Representation
KW - Speech summarization
KW - Word embedding
UR - http://www.scopus.com/inward/record.url?scp=84994618663&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84994618663&partnerID=8YFLogxK
U2 - 10.1145/2964284.2967246
DO - 10.1145/2964284.2967246
M3 - Conference contribution
AN - SCOPUS:84994618663
T3 - MM 2016 - Proceedings of the 2016 ACM Multimedia Conference
SP - 377
EP - 381
BT - MM 2016 - Proceedings of the 2016 ACM Multimedia Conference
PB - Association for Computing Machinery, Inc.
T2 - 24th ACM Multimedia Conference, MM 2016
Y2 - 15 October 2016 through 19 October 2016
ER -