Multi-scale audio indexing for translingual spoken document retrieval

H. Wang*, H. Meng, P. Schone, B. Chen, W. K. Lo

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review



MEI (Mandarin-English Information) is an English-Chinese crosslingual spoken document retrieval (CL-SDR) system developed during the Johns Hopkins University Summer Workshop 2000. We integrate speech recognition, machine translation, and information retrieval technologies to perform CL-SDR. MEI advocates a multi-scale paradigm, in which both Chinese words and subwords (characters and syllables) are used in retrieval. Subword units complement the word unit in handling the problems of Chinese word tokenization ambiguity, Chinese homophone ambiguity, and out-of-vocabulary words in audio indexing. This paper focuses on multi-scale audio indexing in MEI. Experiments are based on the Topic Detection and Tracking corpora (TDT-2 and TDT-3), where we indexed Voice of America Mandarin news broadcasts by speech recognition on both the word and subword scales. In this paper, we discuss the development of the MEI syllable recognizer and the representations of spoken documents using overlapping subword n-grams and lattice structures. Results show that augmenting words with subwords is beneficial to CL-SDR performance.
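The overlapping subword n-gram representation described above can be illustrated with a minimal sketch. The function name and the sample syllable sequence below are hypothetical, not taken from the MEI system; the sketch only shows how overlapping n-grams over a recognized subword stream (here, Mandarin syllables) yield indexing terms that do not depend on word tokenization:

```python
def overlapping_ngrams(units, n=2):
    """Return all overlapping n-grams over a sequence of subword units,
    e.g. Mandarin syllables or characters from a recognizer transcript."""
    return [tuple(units[i:i + n]) for i in range(len(units) - n + 1)]

# Hypothetical syllable output for a short spoken-document segment.
syllables = ["bei", "jing", "xin", "wen"]

# Overlapping syllable bigrams, usable as indexing terms alongside words.
print(overlapping_ngrams(syllables, n=2))
```

Because every adjacent pair of syllables becomes a term, a query syllable sequence can still match the document even when word segmentation differs or a word is out of vocabulary.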

Original language: English
Pages (from-to): 605-608
Number of pages: 4
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Publication status: Published - 2001
Externally published: Yes
Event: 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing - Salt Lake City, UT, United States
Duration: 7 May 2001 - 11 May 2001

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering


