Improved speech summarization with multiple-hypothesis representations and Kullback-Leibler divergence measures

Shih Hsiang Lin*, Berlin Chen

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

23 Citations (Scopus)

Abstract

Imperfect speech recognition often leads to degraded performance when leveraging existing text-based methods for speech summarization. To alleviate this problem, this paper investigates various ways to robustly represent the recognition hypotheses of spoken documents beyond the top scoring ones. Moreover, a new summarization method stemming from the Kullback-Leibler (KL) divergence measure and exploring both the sentence and document relevance information is proposed to work with such robust representations. Experiments on broadcast news speech summarization seem to demonstrate the utility of the presented approaches.
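To make the idea concrete, below is a minimal sketch of KL-divergence-based sentence ranking over multiple recognition hypotheses. It assumes each spoken sentence is represented by a weighted N-best list (expected word counts under hypothesis posteriors), and ranks sentences by how close their smoothed unigram distribution is to the whole document's distribution. The data format, function names, and add-alpha smoothing are illustrative assumptions, not the paper's exact formulation, which additionally exploits sentence and document relevance information.

```python
from collections import Counter
import math


def expected_counts(hypotheses):
    """Expected word counts over weighted recognition hypotheses.

    `hypotheses` is a list of (word_list, posterior_weight) pairs,
    e.g. an N-best list with normalized posteriors (assumed format).
    """
    counts = Counter()
    for words, weight in hypotheses:
        for w in words:
            counts[w] += weight
    return counts


def to_distribution(counts, vocab, alpha=0.01):
    """Add-alpha-smoothed unigram distribution over `vocab`."""
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}


def kl(p, q):
    """KL(p || q) for distributions sharing the same support."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)


def rank_sentences(sentences):
    """Rank sentences by KL(sentence || document), ascending.

    Sentences whose hypothesis-based distributions lie closest to the
    document distribution are treated as most representative and
    therefore best summary candidates.
    """
    doc_counts = Counter()
    sent_counts = []
    for hyps in sentences:
        c = expected_counts(hyps)
        sent_counts.append(c)
        doc_counts.update(c)
    vocab = set(doc_counts)
    q = to_distribution(doc_counts, vocab)
    scored = [(kl(to_distribution(c, vocab), q), i)
              for i, c in enumerate(sent_counts)]
    return [i for _, i in sorted(scored)]


# Toy spoken document: three sentences, each with weighted hypotheses.
sentences = [
    [("the market rose sharply".split(), 0.7),
     ("a market rose sharply".split(), 0.3)],
    [("weather tomorrow sunny".split(), 1.0)],
    [("the market gained points".split(), 0.9),
     ("the market gain points".split(), 0.1)],
]
order = rank_sentences(sentences)
```

Here the off-topic weather sentence should rank last, since its word distribution diverges most from the document-wide distribution built from all hypotheses.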

Original language: English
Pages (from-to): 1847-1850
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - 2009
Event10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009 - Brighton, United Kingdom
Duration: 2009 Sept 6 – 2009 Sept 10

Keywords

  • KL divergence
  • Multiple recognition hypotheses
  • Relevance information
  • Speech summarization

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Sensory Systems
