REMAX: Relational Representation for Multi-Agent Exploration

DC Field / Value / Language
dc.contributor.author: Ryu, Heechang (ko)
dc.contributor.author: Shin, Hayong (ko)
dc.contributor.author: Park, Jinkyoo (ko)
dc.date.accessioned: 2022-08-26T08:00:14Z
dc.date.available: 2022-08-26T08:00:14Z
dc.date.created: 2022-07-14
dc.date.issued: 2022-05-09
dc.identifier.citation: Autonomous Agents and Multiagent Systems (AAMAS-2022), pp.1137 - 1145
dc.identifier.issn: 1548-8403
dc.identifier.uri: http://hdl.handle.net/10203/298144
dc.description.abstract: Training a multi-agent reinforcement learning (MARL) model with a sparse reward is generally difficult because numerous combinations of interactions among agents induce a certain outcome (i.e., success or failure). Earlier studies have tried to resolve this issue by employing an intrinsic reward to induce interactions that are helpful for learning an effective policy. However, this approach requires extensive prior knowledge for designing an intrinsic reward. To train the MARL model effectively without designing the intrinsic reward, we propose a learning-based exploration strategy to generate the initial states of a game. The proposed method adopts a variational graph autoencoder to represent a game state such that (1) the state can be compactly encoded to a latent representation by considering relationships among agents, and (2) the latent representation can be used as an effective input for a coupled surrogate model to predict an exploration score. The proposed method then finds new latent representations that maximize the exploration scores and decodes these representations to generate initial states from which the MARL model starts training in the game and thus experiences novel and rewardable states. We demonstrate that our method improves the training and performance of the MARL model more than the existing exploration methods.
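The abstract describes a loop: encode a game state to a latent vector, score that latent with a surrogate model, search the latent space for high-scoring points, and decode the best point into a new initial state for MARL training. A minimal sketch of that loop is below. All component names (`encode`, `decode`, `exploration_score`, `propose_initial_state`) are illustrative stand-ins, not the paper's API; the encoder/decoder are toy linear maps in place of the variational graph autoencoder, and random hill climbing stands in for whatever latent-space optimization the authors use.

```python
import random

# Toy stand-in for the VGAE encoder: game state -> latent representation.
def encode(state):
    return [s * 0.5 for s in state]

# Toy stand-in for the VGAE decoder: latent representation -> game state.
def decode(latent):
    return [z * 2.0 for z in latent]

# Toy stand-in for the coupled surrogate model that predicts an
# exploration score from a latent; here, distance from the origin
# serves as a crude novelty proxy.
def exploration_score(latent):
    return sum(z * z for z in latent)

def propose_initial_state(seed_state, steps=200, step_size=0.1, rng=None):
    """Search the latent space for a high-scoring point, then decode it
    into an initial state for MARL training. Random hill climbing is
    used here purely as an illustrative optimizer."""
    rng = rng or random.Random(0)
    best = encode(seed_state)
    best_score = exploration_score(best)
    for _ in range(steps):
        candidate = [b + rng.gauss(0.0, step_size) for b in best]
        score = exploration_score(candidate)
        if score > best_score:          # keep only improving moves
            best, best_score = candidate, score
    return decode(best)

# Propose a fresh initial state from a seed state of three agents' features.
state = propose_initial_state([0.2, -0.1, 0.4])
```

In the paper's setting the MARL model would then begin episodes from such proposed states, so training visits novel and potentially rewarding configurations without a hand-designed intrinsic reward.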
dc.language: English
dc.publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
dc.title: REMAX: Relational Representation for Multi-Agent Exploration
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85134323524
dc.type.rims: CONF
dc.citation.beginningpage: 1137
dc.citation.endingpage: 1145
dc.citation.publicationname: Autonomous Agents and Multiagent Systems (AAMAS-2022)
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.5555/3535850
dc.contributor.localauthor: Shin, Hayong
dc.contributor.localauthor: Park, Jinkyoo
Appears in Collection
IE-Conference Papers (Conference Papers)
