Employing median filtering to enhance the complex-valued acoustic spectrograms in modulation domain for noise-robust speech recognition

Hsin Ju Hsieh, Berlin Chen, Jeih Weih Hung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

In this paper, we propose applying median filtering (MF) in the modulation domain of the complex-valued acoustic spectrogram in order to alleviate the effect of noise in speech signals and thereby improve noise robustness. Median filtering is well known for its excellent capability of removing speckle noise from data while preserving the embedded sharp contrasts. When median filtering is applied to the temporal modulation spectrum, i.e., the Fourier transform of either the real or the imaginary part of the acoustic spectrogram along the time axis, we find that the mismatch caused by noise can be significantly reduced, and the resulting speech features are more noise-robust and yield better recognition accuracy than the original unprocessed features. In particular, the proposed method possesses three explicit merits. First, the median filtering operation substantially suppresses outliers in the modulation spectrum that are likely caused by noise. Second, because the real and imaginary acoustic spectrograms are processed individually, the proposed method avoids the thorny speech-noise cross-term problem that usually arises in conventional acoustic spectral enhancement methods when noise reduction is unavoidable. Third, the median filtering process is completely unsupervised and requires no prior information about the clean speech or the noise. All evaluation experiments are conducted on two databases: the connected-digit Aurora-2 database and the medium-vocabulary Aurora-4 database. The recognition results demonstrate that the proposed MF-based method achieves performance competitive with or better than many state-of-the-art noise robustness methods, including histogram equalization (HEQ), mean and variance normalization (MVN), MVN plus ARMA filtering (MVA), temporal structure normalization (TSN) and the advanced front-end (AFE).
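The core operation described in the abstract can be sketched in a few lines of NumPy/SciPy. The following is a minimal illustration, not the authors' implementation: the function name, the kernel size, and the choice to median-filter the modulation magnitude while retaining the modulation phase are all assumptions made for the sketch.

```python
import numpy as np
from scipy.signal import medfilt


def enhance_modulation(spec, kernel_size=3):
    """Median-filter the temporal modulation spectrum of a complex-valued
    spectrogram (freq_bins x frames), processing the real and imaginary
    parts separately as the paper proposes.

    NOTE: kernel_size=3 and the magnitude-only filtering are illustrative
    assumptions, not settings taken from the paper.
    """
    def filter_part(comp):
        # Temporal modulation spectrum: Fourier transform of one real-valued
        # spectrogram component along the time (frame) axis.
        mod = np.fft.rfft(comp, axis=1)
        mag = np.abs(mod)
        phase = np.exp(1j * np.angle(mod))
        # Median-filter each frequency bin's modulation magnitude to suppress
        # speckle-like outliers while preserving sharp contrasts.
        mag_f = np.apply_along_axis(medfilt, 1, mag, kernel_size)
        # Return to the time domain, keeping the original modulation phase.
        return np.fft.irfft(mag_f * phase, n=comp.shape[1], axis=1)

    # Real and imaginary parts are enhanced individually, so no
    # speech-noise cross-term ever has to be estimated.
    return filter_part(spec.real) + 1j * filter_part(spec.imag)
```

Because each component is filtered on its own, the procedure needs no estimate of the clean speech or the noise, matching the unsupervised nature claimed in the abstract.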

Original language: English
Title of host publication: Proceedings of 2016 10th International Symposium on Chinese Spoken Language Processing, ISCSLP 2016
Editors: Hsin-Min Wang, Qingzhi Hou, Yuan Wei, Tan Lee, Jianguo Wei, Lei Xie, Hui Feng, Jianwu Dang
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781509042937
DOI: 10.1109/ISCSLP.2016.7918403
Publication status: Published - 2 May 2017
Event: 10th International Symposium on Chinese Spoken Language Processing, ISCSLP 2016 - Tianjin, China
Duration: 17 Oct 2016 - 20 Oct 2016

Publication series

Name: Proceedings of 2016 10th International Symposium on Chinese Spoken Language Processing, ISCSLP 2016



Keywords

  • Automatic speech recognition
  • Feature extraction
  • Median filter
  • Modulation spectrum
  • Noise robustness
  • Principal component analysis

ASJC Scopus subject areas

  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Linguistics and Language

Cite this

Hsieh, H. J., Chen, B., & Hung, J. W. (2017). Employing median filtering to enhance the complex-valued acoustic spectrograms in modulation domain for noise-robust speech recognition. In H.-M. Wang, Q. Hou, Y. Wei, T. Lee, J. Wei, L. Xie, H. Feng, & J. Dang (Eds.), Proceedings of 2016 10th International Symposium on Chinese Spoken Language Processing, ISCSLP 2016 [7918403] (Proceedings of 2016 10th International Symposium on Chinese Spoken Language Processing, ISCSLP 2016). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ISCSLP.2016.7918403