Uncertainty calibration in deep learning

Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators. However, they are often overconfident, which yields inaccurate and miscalibrated probabilistic predictions. This overconfidence becomes especially apparent when the test-time data distribution differs from the one seen during training. We propose a solution to this problem by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels. Our method results in a better-calibrated network and is agnostic to the underlying model structure, so it can be applied to any neural network that produces a probability density as an output. We demonstrate the effectiveness of our method and validate its performance on both classification and regression problems, applying it to recent probabilistic neural network models.
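
The abstract describes conditionally pushing overconfident predictions toward the label prior. Below is a minimal PyTorch sketch of that idea in a classification setting; the function name `calibrate_predictions`, the `is_overconfident` mask, and the fixed mixing weight `alpha` are illustrative assumptions, not the thesis's actual mechanism.

```python
import torch
import torch.nn.functional as F

def calibrate_predictions(logits: torch.Tensor,
                          prior: torch.Tensor,
                          is_overconfident: torch.Tensor,
                          alpha: float = 0.5) -> torch.Tensor:
    """Raise the entropy of flagged predictions toward the label prior.

    logits:           (N, C) raw classifier outputs.
    prior:            (C,)   prior label distribution, e.g. empirical
                             class frequencies from the training set.
    is_overconfident: (N,)   boolean mask for inputs judged to lie in
                             regions where confidence is unjustified.
    alpha:            mixing weight toward the prior for flagged inputs.
    """
    probs = F.softmax(logits, dim=-1)
    # By concavity of entropy, the mixture's entropy is at least the
    # weighted average of the two, so mixing a peaked prediction with a
    # diffuse prior raises its entropy. Unflagged rows pass through.
    mixed = (1.0 - alpha) * probs + alpha * prior.unsqueeze(0)
    return torch.where(is_overconfident.unsqueeze(-1), mixed, probs)
```

In practice the flagged set might come from a density estimate or a distance-to-training-data heuristic over feature space; this record does not specify the thesis's actual criterion, nor how the approach extends to the regression case it mentions.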
Advisors
Hwang, Sung Ju
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology : School of Computing, 2021.8, [iii, 21 p.]

Keywords

Deep learning; Bayesian inference; Uncertainty calibration; Neural networks; Safe AI

URI
http://hdl.handle.net/10203/296161
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=963378&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
