Multi-armed bandit with additional observations

DC Field | Value | Language
dc.contributor.author | Yun, Donggyu | ko
dc.contributor.author | Ahn, Sumyeong | ko
dc.contributor.author | Proutiere, Alexandre | ko
dc.contributor.author | Shin, Jinwoo | ko
dc.contributor.author | Yi, Yung | ko
dc.date.accessioned | 2018-12-20T02:13:29Z | -
dc.date.available | 2018-12-20T02:13:29Z | -
dc.date.created | 2018-11-28 | -
dc.date.issued | 2018-06-18 | -
dc.identifier.citation | 2018 ACM International Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 2018, pp.53 - 55 | -
dc.identifier.uri | http://hdl.handle.net/10203/247468 | -
dc.description.abstract | We study multi-armed bandit (MAB) problems with additional observations, where in each round the decision maker selects an arm to play and can also observe the rewards of additional arms (within a given budget) by paying certain costs. We propose algorithms whose regrets are asymptotically optimal and order-optimal under stochastic and adversarial rewards, respectively. | -
dc.language | English | -
dc.publisher | Association for Computing Machinery, Inc | -
dc.title | Multi-armed bandit with additional observations | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85052012891 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 53 | -
dc.citation.endingpage | 55 | -
dc.citation.publicationname | 2018 ACM International Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 2018 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Irvine, California | -
dc.identifier.doi | 10.1145/3219617.3219639 | -
dc.contributor.localauthor | Yun, Donggyu | -
dc.contributor.nonIdAuthor | Ahn, Sumyeong | -
dc.contributor.nonIdAuthor | Proutiere, Alexandre | -
dc.contributor.nonIdAuthor | Shin, Jinwoo | -
dc.contributor.nonIdAuthor | Yi, Yung | -
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
