WHAT DO NEURAL NETWORKS LISTEN TO? EXPLORING THE CRUCIAL BANDS IN SPEECH ENHANCEMENT USING SINC-CONVOLUTION

Kuan Hsun Ho*, Jeih Weih Hung, Berlin Chen

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

This study introduces a reformed Sinc-convolution (Sinc-conv) framework tailored for the encoder component of deep networks for speech enhancement (SE). The reformed Sinc-conv, built on parametrized sinc functions acting as band-pass filters, offers notable advantages in training efficiency, filter diversity, and interpretability. It is evaluated in conjunction with various SE models, demonstrating its ability to boost SE performance. Furthermore, the reformed Sinc-conv provides valuable insight into which frequency components are prioritized in an SE scenario, opening a new direction for SE research and improving our understanding of how these models operate.
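The core idea behind a Sinc-convolution encoder is that each convolutional kernel is not learned freely but generated from two learnable cutoff frequencies, yielding an interpretable band-pass filter. A minimal sketch of such a kernel in NumPy follows; the function name, parameter values, and the Hamming windowing choice are illustrative assumptions in the style of SincNet-like encoders, not the paper's exact parametrization.

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, kernel_size=101, sample_rate=16000):
    """Build one band-pass kernel from two cutoff frequencies (Hz).

    The difference of two low-pass sinc impulse responses gives a
    band-pass filter; a Hamming window reduces spectral leakage.
    Illustrative sketch only; the paper's parametrization may differ.
    """
    # Centered, symmetric time axis in seconds
    n = np.arange(kernel_size) - (kernel_size - 1) / 2
    t = n / sample_rate
    # np.sinc(x) = sin(pi*x)/(pi*x), so 2f*sinc(2f*t) is an ideal
    # low-pass impulse response with cutoff f
    low_pass_high = 2 * f_high * np.sinc(2 * f_high * t)
    low_pass_low = 2 * f_low * np.sinc(2 * f_low * t)
    band_pass = (low_pass_high - low_pass_low) * np.hamming(kernel_size)
    # Normalize peak amplitude to 1 for comparable filter gains
    return band_pass / np.max(np.abs(band_pass))

# Example: a 300-3400 Hz telephone-band filter kernel
kernel = sinc_bandpass_kernel(300.0, 3400.0)
```

In a trainable encoder, `f_low` and the bandwidth `f_high - f_low` would be the only learnable parameters per filter, which is what makes the learned bands directly inspectable after training.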

Original language: English
Pages (from-to): 10406-10410
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
DOIs
Publication status: Published - 2024
Event: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Seoul, Korea, Republic of
Duration: 2024 Apr 14 - 2024 Apr 19

Keywords

  • Interpretability
  • Sinc-convolution
  • Speech Enhancement

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
