Facing Off World Model Backbones: RNNs, Transformers, and S4

DC Field: Value (Language)
dc.contributor.author: Deng, Fei (ko)
dc.contributor.author: Park, Junyeong (ko)
dc.contributor.author: Ahn, Sungjin (ko)
dc.date.accessioned: 2023-11-30T01:02:49Z
dc.date.available: 2023-11-30T01:02:49Z
dc.date.created: 2023-11-09
dc.date.issued: 2023-12-12
dc.identifier.citation: The Thirty-seventh Conference on Neural Information Processing Systems, NeurIPS 2023
dc.identifier.uri: http://hdl.handle.net/10203/315452
dc.description.abstract: World models are a fundamental component in model-based reinforcement learning (MBRL). To perform temporally extended and consistent simulations of the future in partially observable environments, world models need to possess long-term memory. However, state-of-the-art MBRL agents, such as Dreamer, predominantly employ recurrent neural networks (RNNs) as their world model backbone, which have limited memory capacity. In this paper, we seek to explore alternative world model backbones for improving long-term memory. In particular, we investigate the effectiveness of Transformers and Structured State Space Sequence (S4) models, motivated by their remarkable ability to capture long-range dependencies in low-dimensional sequences and their complementary strengths. We propose S4WM, the first world model compatible with parallelizable SSMs including S4 and its variants. By incorporating latent variable modeling, S4WM can efficiently generate high-dimensional image sequences through latent imagination. Furthermore, we extensively compare RNN-, Transformer-, and S4-based world models across four sets of environments, which we have tailored to assess crucial memory capabilities of world models, including long-term imagination, context-dependent recall, reward prediction, and memory-based reasoning. Our findings demonstrate that S4WM outperforms Transformer-based world models in terms of long-term memory, while exhibiting greater efficiency during training and imagination. These results pave the way for the development of stronger MBRL agents.
dc.language: English
dc.publisher: The Conference on Neural Information Processing Systems
dc.title: Facing Off World Model Backbones: RNNs, Transformers, and S4
dc.type: Conference
dc.type.rims: CONF
dc.citation.publicationname: The Thirty-seventh Conference on Neural Information Processing Systems, NeurIPS 2023
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: New Orleans Ernest N. Morial Convention Center
dc.contributor.localauthor: Ahn, Sungjin
dc.contributor.nonIdAuthor: Deng, Fei
dc.contributor.nonIdAuthor: Park, Junyeong
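
Note: the abstract above describes S4WM as an S4-based world model that conditions on an observation sequence and then generates future frames through latent imagination. The toy PyTorch sketch below is purely illustrative and not the paper's architecture or code: it replaces the S4 block with a plain linear state-space recurrence, and all class names, parameter names, and dimensions (ToySSMWorldModel, obs_dim, state_dim, etc.) are made up for the example.

    # Illustrative sketch only: a toy latent world model with a linear
    # state-space (SSM) backbone, loosely in the spirit of the S4WM idea.
    import torch
    import torch.nn as nn

    class ToySSMWorldModel(nn.Module):
        def __init__(self, obs_dim=64, latent_dim=32, state_dim=128):
            super().__init__()
            self.encoder = nn.Linear(obs_dim, latent_dim)   # observation -> latent input
            self.decoder = nn.Linear(latent_dim, obs_dim)   # latent readout -> reconstruction
            # Linear state-space dynamics: x_t = A x_{t-1} + B u_t, z_t = C x_t.
            # Real S4 constrains A for stability and trains the sequence map in
            # parallel (as a convolution); here we simply unroll the recurrence.
            self.A = nn.Parameter(torch.eye(state_dim) * 0.9)
            self.B = nn.Parameter(torch.randn(state_dim, latent_dim) * 0.02)
            self.C = nn.Parameter(torch.randn(latent_dim, state_dim) * 0.02)

        def forward(self, obs_seq):
            # obs_seq: (T, obs_dim) sequence of flattened observations.
            x = torch.zeros(self.A.shape[0])
            recons = []
            for obs in obs_seq:
                u = torch.tanh(self.encoder(obs))
                x = self.A @ x + self.B @ u          # SSM state update
                z = self.C @ x                       # latent readout
                recons.append(self.decoder(z))
            return torch.stack(recons), x

        def imagine(self, x, steps):
            # Roll the dynamics forward without observations ("latent
            # imagination"), feeding the model's own latent back in.
            frames = []
            for _ in range(steps):
                z = self.C @ x
                x = self.A @ x + self.B @ torch.tanh(z)
                frames.append(self.decoder(self.C @ x))
            return torch.stack(frames)

    model = ToySSMWorldModel()
    recon, state = model(torch.randn(10, 64))   # condition on a 10-step context
    dream = model.imagine(state, steps=5)       # imagine 5 future frames
    print(recon.shape, dream.shape)             # (10, 64) and (5, 64)

In the actual S4WM setting described in the abstract, the backbone would be a parallelizable S4 (or variant) layer and the model would include latent variable modeling over high-dimensional images; this sketch only illustrates the conditioning-then-imagination control flow.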
Appears in Collection
CS-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
