BP-CRN: A Lightweight Two-Stage Convolutional Recurrent Network for Multi-Channel Speech Enhancement

Bibliographic Details
Published in IEICE Transactions on Information and Systems, Vol. E108.D, No. 2, pp. 161-164
Main Authors ZHAO, Li; CHENG, Jiaming; ZHOU, Lin; PANG, Cong; NI, Ye (all: School of Information Science and Engineering, Southeast University)
Format Journal Article
Language English
Published Tokyo: The Institute of Electronics, Information and Communication Engineers; Japan Science and Technology Agency, 01.02.2025
Subjects Beamforming; complex network; convolutional recurrent network; encoding-decoding; lightweight; modules; multichannel speech enhancement; neural beamforming; spatial data; spatial filtering; speech processing
Online Access https://www.jstage.jst.go.jp/article/transinf/E108.D/2/E108.D_2024EDL8042/_article/-char/en
ISSN 0916-8532
EISSN 1745-1361
DOI 10.1587/transinf.2024EDL8042

Abstract In our work, we propose a lightweight two-stage convolutional recurrent network (BP-CRN) for multi-channel speech enhancement (MCSE), consisting of a beamforming stage and a post-filtering stage. Drawing inspiration from traditional methods, we design two core modules, named BM and PF, for spatial filtering and for post-filtering with compensation, respectively. Both modules employ a convolutional encoder-decoder structure and use complex frequency-time long short-term memory (CFT-LSTM) blocks in the middle. Furthermore, an inter-module mask is introduced to estimate and convey implicit spatial information, helping the post-filtering module refine the spatial filtering and suppress residual noise. Experimental results demonstrate that the proposed method contains only 1.27 M parameters and outperforms three other MCSE methods in terms of PESQ and STOI.
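To make the two-stage design above concrete, the following PyTorch sketch mirrors its outline: a beamforming (BM) stage and a post-filtering (PF) stage, each a convolutional encoder-decoder with a frequency-time LSTM bottleneck, linked by an inter-module mask. Everything specific here is an assumption for illustration: the layer sizes, the real-valued FT-LSTM standing in for the paper's complex CFT-LSTM, and the 1x1-convolution mask are not the authors' implementation.

```python
# Hypothetical sketch of a two-stage beamforming + post-filtering CRN,
# loosely following the abstract; all internals are assumptions.
import torch
import torch.nn as nn

class FTLSTM(nn.Module):
    """Frequency-time LSTM: one LSTM scans the frequency axis per frame,
    a second scans the time axis per frequency bin (a real-valued stand-in
    for the paper's complex CFT-LSTM block)."""
    def __init__(self, channels):
        super().__init__()
        self.f_lstm = nn.LSTM(channels, channels, batch_first=True)
        self.t_lstm = nn.LSTM(channels, channels, batch_first=True)

    def forward(self, x):                       # x: (B, C, T, F)
        b, c, t, f = x.shape
        # frequency sweep: each frame is a sequence over F
        y = x.permute(0, 2, 3, 1).reshape(b * t, f, c)
        y, _ = self.f_lstm(y)
        y = y.reshape(b, t, f, c)
        # time sweep: each bin is a sequence over T
        z = y.permute(0, 2, 1, 3).reshape(b * f, t, c)
        z, _ = self.t_lstm(z)
        return z.reshape(b, f, t, c).permute(0, 3, 2, 1)  # (B, C, T, F)

class Stage(nn.Module):
    """Conv encoder -> FT-LSTM bottleneck -> conv decoder."""
    def __init__(self, in_ch, out_ch, hid=32):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, hid, kernel_size=3, padding=1)
        self.mid = FTLSTM(hid)
        self.dec = nn.Conv2d(hid, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.dec(self.mid(torch.relu(self.enc(x))))

class BPCRNSketch(nn.Module):
    def __init__(self, mics=4):
        super().__init__()
        # BM stage consumes stacked real/imag parts of all microphone channels
        self.bm = Stage(2 * mics, 2)        # -> beamformed real/imag spectrogram
        self.mask = nn.Conv2d(2, 2, 1)      # inter-module mask (assumed 1x1 conv)
        self.pf = Stage(4, 2)               # PF sees masked beamformer output + mask

    def forward(self, x):                   # x: (B, 2*mics, T, F)
        beam = self.bm(x)
        m = torch.sigmoid(self.mask(beam))
        return self.pf(torch.cat([beam * m, m], dim=1))

model = BPCRNSketch()
x = torch.randn(1, 8, 100, 161)             # 4 mics, 100 frames, 161 freq bins
print(model(x).shape)                       # torch.Size([1, 2, 100, 161])
print(sum(p.numel() for p in model.parameters()))  # parameter count
```

The final print reports the sketch's parameter count; the paper's 1.27 M figure refers to its own architecture, which this simplified model does not reproduce.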
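The PESQ and STOI scores cited above are standard objective metrics for speech quality and intelligibility. A minimal way to compute them is sketched below with the open-source `pesq` and `pystoi` packages; these are assumed stand-ins (the paper does not name its tooling), and the file names are placeholders.

```python
# Scoring an enhanced signal against its clean reference with PESQ and STOI.
import soundfile as sf
from pesq import pesq      # pip install pesq
from pystoi import stoi    # pip install pystoi

ref, fs = sf.read("clean.wav")        # reference clean speech (placeholder path)
enh, _ = sf.read("enhanced.wav")      # model output, same length and sample rate

print("PESQ:", pesq(fs, ref, enh, "wb"))            # wide-band PESQ needs fs = 16 kHz
print("STOI:", stoi(ref, enh, fs, extended=False))  # intelligibility, roughly 0..1
```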