Multi-domain Knowledge Distillation via Uncertainty-Matching for End-to-End ASR Models

Knowledge distillation matches the predictive distributions of student and teacher networks to improve performance under model-capacity and/or data constraints. However, it is well known that the predictive distributions of neural networks tend to be overconfident and cannot properly model the various factors that contribute to uncertainty. Recently, uncertainty-based deep learning has been successful in various fields, especially in several computer vision tasks. While the prediction probability only implicitly indicates how confident the network is, explicitly modeling the network's uncertainty allows the confidence of its output to be exploited directly. In this paper, we propose a novel knowledge distillation method for automatic speech recognition that directly models and transfers the uncertainty inherent in the observed data, such as speaker variation or confusable pronunciations. Moreover, we investigate the effect of transferring knowledge more effectively using multiple teachers trained on different domains. Evaluated on WSJ, a standard benchmark dataset with a limited number of training instances, the proposed knowledge distillation method achieves significant improvements over the student baseline models.
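For readers unfamiliar with the baseline being extended, the sketch below illustrates the standard distillation objective the abstract refers to (matching temperature-softened teacher and student output distributions), together with a naive uniform average over multiple domain teachers. The function names, the temperature value, and the uniform teacher weighting are illustrative assumptions; the paper's uncertainty-matching formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    output distributions (the standard distillation objective)."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # Scaling by t^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)


def multi_teacher_kd_loss(student_logits, teacher_logits_list, temperature=2.0):
    """Naive multi-teacher variant: average the distillation loss over
    teachers trained on different domains (uniform weighting assumed)."""
    losses = [kd_loss(student_logits, tl, temperature) for tl in teacher_logits_list]
    return torch.stack(losses).mean()


# Illustrative usage with random logits (batch of 8, 30 output classes).
student_logits = torch.randn(8, 30)
teachers = [torch.randn(8, 30) for _ in range(3)]  # e.g. three domain teachers
loss = multi_teacher_kd_loss(student_logits, teachers)
```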
Publisher
International Speech Communication Association
Issue Date
2021-09-01
Language
English
Citation
INTERSPEECH 2021, pp.1311 - 1315
ISSN
2308-457X
DOI
10.21437/Interspeech.2021-1169
URI
http://hdl.handle.net/10203/291819
Appears in Collection
RIMS Conference Papers
