Spatially-Aware Transformers for Embodied Agents

DC Field | Value | Language
dc.contributor.author | Cho, Junmo | ko
dc.contributor.author | Yoon, Jasik | ko
dc.contributor.author | Ahn, Sungjin | ko
dc.date.accessioned | 2024-06-18T05:00:25Z | -
dc.date.available | 2024-06-18T05:00:25Z | -
dc.date.created | 2024-06-18 | -
dc.date.issued | 2024-05-09 | -
dc.identifier.citation | The Twelfth International Conference on Learning Representations | -
dc.identifier.uri | http://hdl.handle.net/10203/319830 | -
dc.description.abstract | Episodic memory plays a crucial role in various cognitive processes, such as the ability to mentally recall past events. While cognitive science emphasizes the significance of spatial context in the formation and retrieval of episodic memory, the current primary approach to implementing episodic memory in AI systems is through transformers that store temporally ordered experiences, which overlooks the spatial dimension. As a result, it is unclear how the underlying structure could be extended to incorporate the spatial axis beyond temporal order alone, and what benefits could be obtained by doing so. To address this, this paper explores the use of Spatially-Aware Transformer models that incorporate spatial information. These models enable the creation of place-centric episodic memory that considers both temporal and spatial dimensions. Adopting this approach, we demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks. Additionally, we propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning that aims to optimize memory utilization efficiency. Our experiments demonstrate the advantages of our proposed model in various environments and across multiple downstream tasks, including prediction, generation, reasoning, and reinforcement learning. The source code for our models and experiments will be available at https://github.com/spatiallyawaretransformer. | -
dc.language | English | -
dc.publisher | The International Conference on Learning Representations (ICLR) | -
dc.title | Spatially-Aware Transformers for Embodied Agents | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | The Twelfth International Conference on Learning Representations | -
dc.identifier.conferencecountry | AU | -
dc.identifier.conferencelocation | Vienna | -
dc.contributor.localauthor | Ahn, Sungjin | -
dc.contributor.nonIdAuthor | Cho, Junmo | -
dc.contributor.nonIdAuthor | Yoon, Jasik | -
Appears in Collection:
CS-Conference Papers (Conference Papers)
Files in This Item:
There are no files associated with this item.
