A Comparative Study on Multichannel Speaker-Attributed Automatic Speech Recognition in Multi-party Meetings
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 01.11.2022 |
Subjects | |
Summary | Speaker-attributed automatic speech recognition (SA-ASR) in multi-party meeting scenarios is one of the most valuable and challenging ASR tasks. It was shown that single-channel frame-level diarization with serialized output training (SC-FD-SOT), single-channel word-level diarization with SOT (SC-WD-SOT) and joint training of single-channel target-speaker separation and ASR (SC-TS-ASR) can be exploited to partially solve this problem. In this paper, we propose three corresponding multichannel (MC) SA-ASR approaches, namely MC-FD-SOT, MC-WD-SOT and MC-TS-ASR. For different tasks/models, different multichannel data fusion strategies are considered, including channel-level cross-channel attention for MC-FD-SOT, frame-level cross-channel attention for MC-WD-SOT and neural beamforming for MC-TS-ASR. Results on the AliMeeting corpus reveal that our proposed models consistently outperform their single-channel counterparts in terms of the speaker-dependent character error rate. |
DOI | 10.48550/arxiv.2211.00511 |
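
The summary names frame-level cross-channel attention as the fusion strategy for MC-WD-SOT: each channel's frame embedding attends to the same frame position in the other channels before the streams are merged. The sketch below is only a rough illustration of that general idea, not the authors' implementation; the module name, dimensions, residual/LayerNorm layout, and channel-averaging step are all assumptions.

```python
import torch
import torch.nn as nn

class FrameLevelCrossChannelAttention(nn.Module):
    """Illustrative sketch (hypothetical, not from the paper): fuse
    multichannel encoder outputs by letting each channel's frame
    embedding attend across channels at the same frame position."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, d_model) per-channel encoder outputs.
        b, c, t, d = x.shape
        # Fold frames into the batch dimension so attention runs over the
        # channel axis independently at every frame position.
        xf = x.permute(0, 2, 1, 3).reshape(b * t, c, d)
        fused, _ = self.attn(xf, xf, xf)   # self-attention across channels
        fused = self.norm(xf + fused)      # residual connection + LayerNorm
        return fused.reshape(b, t, c, d).permute(0, 2, 1, 3)

# Example: fuse a 4-channel array's encoder features, then average the
# channels into a single stream for a downstream ASR decoder (the merging
# step here is an assumption for illustration).
feats = torch.randn(2, 4, 100, 256)               # (batch, ch, frames, dim)
fused = FrameLevelCrossChannelAttention()(feats)  # same shape as feats
single_stream = fused.mean(dim=1)                 # (batch, frames, dim)
```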