Residual Neural Processes

A Neural Process (NP) is a map from a set of observed input-output pairs to a predictive distribution over functions, designed to mimic the inference mechanism of other stochastic processes. NPs have been shown to work effectively on tasks that require complex distributions and on which traditional stochastic processes struggle, e.g., image completion. This paper concerns the practical capacity of set function approximators, despite their universality. By delving deeper into the relationship between an NP and a Bayesian last layer (BLL), we show that NPs may struggle on simple examples that other stochastic processes solve easily. We propose a simple yet effective remedy: the Residual Neural Process (RNP), which leverages a traditional BLL for faster training and better prediction. We demonstrate that the RNP converges faster and performs better, both qualitatively and quantitatively.
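The Bayesian last layer the abstract refers to admits a closed-form posterior, which is what makes it a cheap complement to an NP. The minimal sketch below (Python/NumPy) illustrates that ingredient only: exact Bayesian linear regression on a fixed feature map. The feature map phi, the prior scale, and the noise level are illustrative assumptions, not the paper's choices, and the closing comment reflects a reading of the abstract rather than the authors' exact formulation.

import numpy as np

def phi(x, n_features=20):
    # Hypothetical random-cosine feature map phi: R -> R^n_features.
    # Seeded so context and target points share the same features.
    rng = np.random.default_rng(0)
    w = rng.normal(size=n_features)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.cos(np.outer(x, w) + b)

def bll_predict(x_ctx, y_ctx, x_tgt, noise=0.1, prior=1.0):
    # Closed-form posterior predictive of Bayesian linear regression
    # on phi(x), with prior w ~ N(0, prior^2 I) and Gaussian noise.
    F = phi(x_ctx)                                   # (n_ctx, d) design matrix
    A = F.T @ F / noise**2 + np.eye(F.shape[1]) / prior**2   # posterior precision
    w_mean = np.linalg.solve(A, F.T @ y_ctx) / noise**2      # posterior mean of w
    Ft = phi(x_tgt)
    mean = Ft @ w_mean                               # predictive mean
    var = noise**2 + np.einsum('nd,nd->n', Ft, np.linalg.solve(A, Ft.T).T)
    return mean, var                                 # predictive variance per target

# Usage: condition on a few context points, predict elsewhere.
x_ctx = np.array([-1.0, 0.0, 1.0])
y_ctx = np.sin(x_ctx)
mean, var = bll_predict(x_ctx, y_ctx, np.linspace(-2.0, 2.0, 5))

In an RNP-style model, the NP component would then be trained on the residual y - mean rather than on y directly, so the network only has to learn what the BLL cannot already explain.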
Publisher
Association for the Advancement of Artificial Intelligence
Issue Date
2020-02-11
Language
English
Citation
34th AAAI Conference on Artificial Intelligence (AAAI 2020), pp. 4545-4552
ISSN
2159-5399
URI
http://hdl.handle.net/10203/276832
Appears in Collection
CS-Conference Papers (Conference Papers); RIMS Conference Papers