Leveraging relevance cues for language modeling in speech recognition

Berlin Chen*, Kuan Yu Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

Language modeling (LM), which provides a principled mechanism for associating quantitative scores with sequences of words or tokens, has long been an interesting yet challenging problem in the field of speech and language processing. The n-gram model is still the predominant method, although a number of disparate LM methods, exploring either lexical co-occurrence or topic cues, have been developed to complement it with some success. In this paper, we explore a novel language modeling framework for speech recognition built on the notion of relevance, where the relationship between a search history and the word being predicted is discovered through different granularities of semantic context for relevance modeling. Empirical experiments on a large vocabulary continuous speech recognition (LVCSR) task seem to demonstrate that the various language models deduced from our framework are very comparable to existing language models in terms of both perplexity and recognition error rate reductions.
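For readers unfamiliar with the n-gram baseline and the perplexity measure the abstract refers to, the sketch below shows how such a model assigns probabilities to word sequences and how perplexity is derived from them. It is a minimal illustration of the general technique only, not the authors' relevance-based models; the toy corpus, the add-one smoothing choice, and all function names here are assumptions made for this example.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over a list of tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(w_prev, w, unigrams, bigrams, vocab_size):
    """P(w | w_prev) with add-one (Laplace) smoothing."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + vocab_size)

def perplexity(sentence, unigrams, bigrams, vocab_size):
    """Perplexity = exp(-(1/N) * sum_i log P(w_i | w_{i-1}))."""
    tokens = ["<s>"] + sentence + ["</s>"]
    log_prob = sum(
        math.log(bigram_prob(p, w, unigrams, bigrams, vocab_size))
        for p, w in zip(tokens, tokens[1:])
    )
    return math.exp(-log_prob / (len(tokens) - 1))

# Toy corpus (hypothetical); a real LVCSR system trains on far more text.
corpus = [["speech", "recognition", "needs", "language", "models"],
          ["language", "models", "score", "word", "sequences"]]
unigrams, bigrams = train_bigram(corpus)
vocab_size = len(unigrams)
print(perplexity(["language", "models", "score", "sequences"],
                 unigrams, bigrams, vocab_size))
```

Lower perplexity on held-out text indicates a better model, which is why the paper reports it alongside recognition error rate.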

Original language: English
Pages (from-to): 807-816
Number of pages: 10
Journal: Information Processing and Management
Volume: 49
Issue number: 4
DOIs
Publication status: Published - 2013

ASJC Scopus subject areas

  • Information Systems
  • Media Technology
  • Computer Science Applications
  • Management Science and Operations Research
  • Library and Information Sciences
