Abstract
Imperfect speech recognition often leads to degraded performance when exploiting conventional text-based methods for speech summarization. To alleviate this problem, this paper investigates various ways to robustly represent the recognition hypotheses of spoken documents beyond the top-scoring ones. Moreover, a summarization framework, building on the Kullback-Leibler (KL) divergence measure and exploring both the relevance and topical information cues of spoken documents and sentences, is presented to work with such robust representations. Experiments on broadcast news speech summarization tasks appear to demonstrate the utility of the presented approaches.
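As a rough illustration of the KL-divergence ranking idea named in the abstract, the Python sketch below scores each candidate sentence by the divergence between a document-level unigram model and a sentence-level unigram model, and ranks sentences from most to least representative. This is a minimal sketch under simplifying assumptions (unigram models, additive smoothing, token lists standing in for recognized transcripts); it is not the paper's framework, which additionally exploits multiple recognition hypotheses and relevance/topical information cues.

```python
import math
from collections import Counter


def unigram_dist(tokens):
    """Maximum-likelihood unigram distribution over a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}


def kl_divergence(p, q, vocab, smoothing=1e-6):
    """KL(P || Q) over a shared vocabulary.

    Additive smoothing avoids log(0) for unseen words; the smoothed values
    are not renormalized, which is acceptable for ranking purposes here.
    """
    div = 0.0
    for w in vocab:
        pw = p.get(w, 0.0) + smoothing
        qw = q.get(w, 0.0) + smoothing
        div += pw * math.log(pw / qw)
    return div


def rank_sentences(document_sentences):
    """Return sentence indices ordered by ascending KL(document || sentence),
    i.e. the sentences whose word distribution best matches the document first."""
    doc_tokens = [w for sent in document_sentences for w in sent]
    doc_dist = unigram_dist(doc_tokens)
    vocab = set(doc_tokens)
    scored = [(kl_divergence(doc_dist, unigram_dist(sent), vocab), i)
              for i, sent in enumerate(document_sentences)]
    return [i for _, i in sorted(scored)]


# Toy usage: pick the single most representative "sentence"
# (pre-tokenized lists stand in for ASR transcripts of a spoken document).
doc = [["weather", "report", "rain", "today"],
       ["rain", "expected", "today", "in", "the", "north"],
       ["sports", "results", "follow"]]
print(rank_sentences(doc)[:1])
```

In this sentence-selection view, a summary is built by greedily taking the top-ranked sentences until a length budget is reached; more elaborate variants re-estimate the divergence against the remaining document content after each selection.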
Original language | English |
---|---|
Article number | 5549862 |
Pages (from-to) | 871-882 |
Number of pages | 12 |
Journal | IEEE Transactions on Audio, Speech, and Language Processing |
Volume | 19 |
Issue number | 4 |
DOIs | |
Publication status | Published - 2011 Apr 6 |
Keywords
- Kullback-Leibler (KL) divergence
- multiple recognition hypotheses
- relevance information
- speech summarization
- topical information
ASJC Scopus subject areas
- Acoustics and Ultrasonics
- Electrical and Electronic Engineering