Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update

DC Field: Value (Language)
dc.contributor.author: Lee, Su Young (ko)
dc.contributor.author: Choi, Sungik (ko)
dc.contributor.author: Chung, Sae-Young (ko)
dc.date.accessioned: 2019-12-13T07:35:01Z
dc.date.available: 2019-12-13T07:35:01Z
dc.date.created: 2019-11-24
dc.date.issued: 2019-12-10
dc.identifier.citation: NeurIPS 2019
dc.identifier.uri: http://hdl.handle.net/10203/268944
dc.description.abstract: We propose Episodic Backward Update (EBU), a novel deep reinforcement learning algorithm with direct value propagation. In contrast to the conventional use of experience replay with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state to its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate directly through all transitions of the sampled episode. We theoretically prove the convergence of the EBU method and experimentally demonstrate its performance in both deterministic and stochastic environments. In particular, on 49 games of the Atari 2600 domain, EBU achieves the same mean and median human-normalized performance as DQN using only 5% and 10% of the samples, respectively.
dc.language: English
dc.publisher: The Neural Information Processing Systems Foundation
dc.title: Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update
dc.type: Conference
dc.type.rims: CONF
dc.citation.publicationname: NeurIPS 2019
dc.identifier.conferencecountry: CN
dc.identifier.conferencelocation: Vancouver Convention Centre
dc.contributor.localauthor: Chung, Sae-Young
dc.contributor.nonIdAuthor: Lee, Su Young
dc.contributor.nonIdAuthor: Choi, Sungik
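The abstract describes sampling a whole episode and propagating each state's value backward to its predecessors. The following is a minimal tabular sketch of that backward pass under stated assumptions: the function name, the dict-based Q-table, and the diffusion coefficient `beta` (which blends the freshly propagated target into the next state's action value) are illustrative choices, not the authors' implementation; the paper's EBU operates on deep Q-networks.

```python
def episodic_backward_update(q, episode, n_actions, gamma=0.99, beta=0.5, lr=0.5):
    """One EBU-style backward sweep over a full episode (tabular sketch).

    q       : dict {(state, action): value}; missing entries read as 0.0
    episode : [(state, action, reward, next_state)] in time order;
              the last transition is assumed terminal
    """
    targets = []
    y = None        # backward target propagated from the temporally later step
    next_sa = None  # (state, action) of the transition one step later
    for s, a, r, s_next in reversed(episode):
        if y is None:
            target = r  # terminal transition: no bootstrap term
        else:
            vals = [q.get((s_next, b), 0.0) for b in range(n_actions)]
            # Diffusion: for the action actually taken at s_next, blend in the
            # propagated target so reward flows straight down the episode.
            _, next_a = next_sa
            vals[next_a] = beta * y + (1.0 - beta) * vals[next_a]
            target = r + gamma * max(vals)
        targets.append(((s, a), target))
        y, next_sa = target, (s, a)
    # Apply all updates after the sweep (the targets act as regression labels).
    for (s, a), target in targets:
        q[(s, a)] = q.get((s, a), 0.0) + lr * (target - q.get((s, a), 0.0))
    return q
```

On a sparse-reward chain (reward only at the terminal step), a single backward sweep with `gamma=1.0`, `beta=1.0`, `lr=1.0` propagates the terminal reward to every state in the episode, whereas one-step updates on uniformly sampled transitions would need many passes.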
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
