TY - GEN
T1 - Bi-Sep
T2 - 27th International Conference on Technologies and Applications of Artificial Intelligence, TAAI 2022
AU - Ho, Kuan Hsun
AU - Hung, Jeih Weih
AU - Chen, Berlin
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In recent years, deep neural network (DNN)-based time-domain methods for monaural speech separation have improved substantially under anechoic conditions. However, their performance degrades under harsher conditions, such as noise or reverberation. Although adopting the Short-Time Fourier Transform (STFT) for feature extraction helps stabilize the performance of these neural methods in non-anechoic situations, it inherently sacrifices the fine-grained temporal resolution that is a hallmark of time-domain methods. Therefore, this study explores combining time-domain and STFT-domain features to retain the beneficial characteristics of both. Furthermore, we leverage a Bi-Projection Fusion (BPF) mechanism to merge information between the two domains. To evaluate the effectiveness of the proposed method, we conduct experiments in an anechoic setting on the WSJ0-2mix dataset and in noisy/reverberant settings on the WHAM!/WHAMR! datasets. The experiments show that, at the cost of negligible degradation on the anechoic dataset, the proposed method improves the performance of existing neural models in more complicated environments.
AB - In recent years, deep neural network (DNN)-based time-domain methods for monaural speech separation have improved substantially under anechoic conditions. However, their performance degrades under harsher conditions, such as noise or reverberation. Although adopting the Short-Time Fourier Transform (STFT) for feature extraction helps stabilize the performance of these neural methods in non-anechoic situations, it inherently sacrifices the fine-grained temporal resolution that is a hallmark of time-domain methods. Therefore, this study explores combining time-domain and STFT-domain features to retain the beneficial characteristics of both. Furthermore, we leverage a Bi-Projection Fusion (BPF) mechanism to merge information between the two domains. To evaluate the effectiveness of the proposed method, we conduct experiments in an anechoic setting on the WSJ0-2mix dataset and in noisy/reverberant settings on the WHAM!/WHAMR! datasets. The experiments show that, at the cost of negligible degradation on the anechoic dataset, the proposed method improves the performance of existing neural models in more complicated environments.
KW - SepFormer
KW - bi-projection fusion
KW - cross-domain
KW - multi-resolution
KW - reverberation
KW - speech separation
UR - http://www.scopus.com/inward/record.url?scp=85150055248&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150055248&partnerID=8YFLogxK
U2 - 10.1109/TAAI57707.2022.00022
DO - 10.1109/TAAI57707.2022.00022
M3 - Conference contribution
AN - SCOPUS:85150055248
T3 - Proceedings - 2022 International Conference on Technologies and Applications of Artificial Intelligence, TAAI 2022
SP - 72
EP - 77
BT - Proceedings - 2022 International Conference on Technologies and Applications of Artificial Intelligence, TAAI 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 December 2022 through 3 December 2022
ER -