Calibration of few-shot classification tasks: mitigating misconfidence from distribution mismatch

As numerous meta-learning algorithms improve performance on few-shot classification problems in practical applications, accurate prediction of uncertainty has become essential. In meta-training, the algorithm treats all generated tasks equally and updates the model to perform well on the training tasks. Some of these tasks, however, can be hard for the model: inferring the query examples from the support examples is difficult, especially when a large mismatch exists between the support set and the query set. This distribution mismatch leads the model to assign wrong confidences, causing a calibration problem. In this study, we propose a novel meta-training method that measures the distribution mismatch and lets the model predict with more careful confidence. Moreover, our method is algorithm-agnostic and readily extends to a range of meta-learning models. Through extensive experiments, including dataset shift, we show that our training strategy helps the model avoid indiscriminate confidence and thereby produce calibrated classification results without loss of accuracy.
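To make the abstract's mechanism concrete, the sketch below illustrates one plausible way to quantify support–query distribution mismatch and soften predictions accordingly. This is an illustrative assumption, not the thesis's actual method: it uses a (biased) RBF-kernel MMD estimate between support and query embeddings, and maps the mismatch score to a softmax temperature so that a larger mismatch yields less confident predictions. All function names and the embedding setup are hypothetical.

```python
import numpy as np

def rbf_mmd2(x, y, gamma=1.0):
    """Biased squared-MMD estimate with an RBF kernel between two sample sets.

    x: (n, d) support embeddings; y: (m, d) query embeddings.
    The biased estimator (diagonal terms included) is always >= 0.
    """
    def k(a, b):
        # Pairwise squared Euclidean distances via broadcasting.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def temperature_from_mismatch(mmd2, base_temp=1.0, scale=5.0):
    """Map a mismatch score to a softmax temperature: larger mismatch
    -> higher temperature -> softer (less confident) predictions."""
    return base_temp + scale * mmd2

def calibrated_softmax(logits, temp):
    """Temperature-scaled softmax with a numerical-stability shift."""
    z = logits / temp
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

# Toy 5-way task: query embeddings are drawn from a shifted distribution,
# simulating the support/query mismatch the abstract describes.
rng = np.random.default_rng(0)
support = rng.normal(0.0, 1.0, size=(25, 8))  # 5-way 5-shot embeddings
query = rng.normal(1.5, 1.0, size=(15, 8))    # shifted query distribution
t = temperature_from_mismatch(rbf_mmd2(support, query))
probs = calibrated_softmax(rng.normal(size=(15, 5)), t)
```

Because the mismatch score enters only through the temperature, the predicted class (argmax) is unchanged; only the confidence is tempered, which matches the abstract's claim of calibration without loss of accuracy.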
Advisors
Yun, Se-Young (윤세영)
Description
Korea Advanced Institute of Science and Technology (KAIST): Graduate School of AI
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Thesis (Master's) - KAIST: Graduate School of AI, 2021.8, [iv, 31 p.]

Keywords

Few-Shot Learning; Meta-Learning; Confidence Calibration; Distribution Mismatch; Task Uncertainty

URI
http://hdl.handle.net/10203/292504
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=963739&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
