Effective pseudo-relevance feedback for language modeling in extractive speech summarization

Shih Hung Liu, Kuan Yu Chen, Yu Lun Hsieh, Berlin Chen, Hsin Min Wang, Hsu Chun Yen, Wen Lian Hsu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Extractive speech summarization, aiming to automatically select an indicative set of sentences from a spoken document so as to concisely represent the most important aspects of the document, has become an active area for research and experimentation. An emerging stream of work is to employ the language modeling (LM) framework along with the Kullback-Leibler divergence measure for extractive speech summarization, which can perform important sentence selection in an unsupervised manner and has shown preliminary success. This paper presents a continuation of such a general line of research and its main contribution is two-fold. First, by virtue of pseudo-relevance feedback, we explore several effective sentence modeling formulations to enhance the sentence models involved in the LM-based summarization framework. Second, the utilities of our summarization methods and several widely-used methods are analyzed and compared extensively, which demonstrates the effectiveness of our methods.
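
To make the approach described in the abstract concrete, below is a minimal, illustrative sketch of KL-divergence sentence ranking with a simple pseudo-relevance feedback step. It is not the paper's actual formulation: the maximum-likelihood unigram models, the epsilon smoothing, the interpolation weight alpha, the top_k feedback depth, and the generic text collection used for feedback are all assumptions made for illustration only.

    import math
    from collections import Counter

    def unigram_lm(text):
        # Maximum-likelihood unigram language model over whitespace tokens.
        counts = Counter(text.split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    def kl_divergence(p, q, eps=1e-10):
        # KL(p || q); eps stands in for words unseen in q (a crude smoothing choice).
        return sum(pw * math.log(pw / q.get(w, eps)) for w, pw in p.items())

    def prf_sentence_model(sentence, collection, top_k=5, alpha=0.5):
        # Pseudo-relevance feedback: treat the sentence as a query, take the top_k
        # collection texts with the lowest divergence from it, and interpolate
        # their pooled model with the original sentence model.
        s_model = unigram_lm(sentence)
        ranked = sorted(collection, key=lambda d: kl_divergence(s_model, unigram_lm(d)))
        fb_model = unigram_lm(" ".join(ranked[:top_k]))
        vocab = set(s_model) | set(fb_model)
        return {w: alpha * s_model.get(w, 0.0) + (1.0 - alpha) * fb_model.get(w, 0.0)
                for w in vocab}

    def extractive_summary(sentences, collection, ratio=0.3):
        # Rank sentences by KL(document model || feedback-enhanced sentence model)
        # and keep the lowest-divergence ones as the extractive summary.
        doc_model = unigram_lm(" ".join(sentences))
        scored = sorted(sentences,
                        key=lambda s: kl_divergence(doc_model, prf_sentence_model(s, collection)))
        n = max(1, int(round(len(sentences) * ratio)))
        return scored[:n]

In this reading, sentence selection is unsupervised: a sentence scores well when its (feedback-enhanced) model is close, in the KL sense, to the whole-document model. The paper itself explores several sentence modeling formulations; the interpolation above is only one plausible instantiation.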

Original language: English
Title of host publication: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3226-3230
Number of pages: 5
ISBN (Print): 9781479928927
DOIs: https://doi.org/10.1109/ICASSP.2014.6854196
Publication status: Published - 2014 Jan 1
Event: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014 - Florence, Italy
Duration: 2014 May 4 - 2014 May 9

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Other

Other: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
Country: Italy
City: Florence
Period: 14/5/4 - 14/5/9

Keywords

  • Kullback-Leibler divergence
  • Speech summarization
  • language modeling
  • pseudo-relevance feedback

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

Cite this

Liu, S. H., Chen, K. Y., Hsieh, Y. L., Chen, B., Wang, H. M., Yen, H. C., & Hsu, W. L. (2014). Effective pseudo-relevance feedback for language modeling in extractive speech summarization. In 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014 (pp. 3226-3230). [6854196] (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP.2014.6854196
