Monotonic multihead attention via mutually activating heads for online automatic speech recognition
= 모노토닉 멀티헤드 어텐션의 헤드-싱크로너스 디코딩 학습을 통한 실시간 음성인식 기법

Despite decoding in real time, Monotonic Multihead Attention (MMA) achieves performance comparable to state-of-the-art offline methods in machine translation and automatic speech recognition (ASR) tasks. However, latency remains a major issue for MMA in ASR, so it must be combined with a technique that reduces latency at inference time, such as head-synchronous beam search decoding, which forces all non-activated heads to activate after a small fixed delay from the first head activation. In this paper, we remove the discrepancy between the training and test phases by considering, during MMA training, the interactions across multiple heads that will occur at test time. Specifically, we derive the expected alignments from monotonic attention while taking the boundaries of the other heads into account and reflect them in the learning process. We validate the proposed method on two standard benchmark ASR datasets and show that our consistently trained version of MMA provides a better trade-off between quality and latency.
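
The head-synchronous rule described in the abstract can be illustrated with a small sketch. The following is a minimal, illustrative PyTorch snippet, not code from the thesis; the names p_choose, threshold, and epsilon are assumptions introduced here. It shows only the test-time forcing step: once the earliest head has activated, any head that lags by more than a small fixed delay is forced to stop as well.

    import torch

    def head_synchronous_boundaries(p_choose: torch.Tensor,
                                    threshold: float = 0.5,
                                    epsilon: int = 2) -> torch.Tensor:
        """p_choose: (num_heads, src_len) selection probabilities of the monotonic
        heads for the current output step. Returns one attention boundary per head."""
        num_heads, src_len = p_choose.shape

        # Index of the first encoder frame whose selection probability exceeds the
        # threshold; equals src_len when a head never activates on its own.
        waiting = (p_choose <= threshold).long()        # 1 while a head has not activated
        first_hit = waiting.cumprod(dim=1).sum(dim=1)   # count of leading "waiting" frames
        first_hit = first_hit.clamp(max=src_len - 1)

        # Head-synchronous rule: no head may lag more than `epsilon` frames behind
        # the earliest-activating head, so laggard heads are forced to stop early.
        t_first = int(first_hit.min())
        return torch.clamp(first_hit, max=t_first + epsilon)

    # Example: if head 0 activates at frame 2 and head 1 would otherwise wait until
    # frame 9, then with epsilon = 2 head 1 is forced to activate by frame 4.

The thesis's contribution is to account for this cross-head interaction already when computing the expected alignments during training, rather than applying it only at inference.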
Advisors
Yang, Eunho (양은호)
Description
Korea Advanced Institute of Science and Technology : School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology : School of Computing, 2021.2, [iv, 25 p.]

Keywords

Online Speech Recognition; Transformer; Monotonic Multihead Attention; Head-Synchronous Beam Search Decoding; Automatic Speech Recognition

URI
http://hdl.handle.net/10203/296130
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948454&flag=dissertation
Appears in Collection
CS-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
