DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cho, Junmo | ko |
dc.contributor.author | Yoon, Jasik | ko |
dc.contributor.author | Ahn, Sungjin | ko |
dc.date.accessioned | 2024-06-18T05:00:25Z | - |
dc.date.available | 2024-06-18T05:00:25Z | - |
dc.date.created | 2024-06-18 | - |
dc.date.issued | 2024-05-09 | - |
dc.identifier.citation | The Twelfth International Conference on Learning Representations | - |
dc.identifier.uri | http://hdl.handle.net/10203/319830 | - |
dc.description.abstract | Episodic memory plays a crucial role in various cognitive processes, such as the ability to mentally recall past events. While cognitive science emphasizes the significance of spatial context in the formation and retrieval of episodic memory, the current primary approach to implementing episodic memory in AI systems relies on transformers that store temporally ordered experiences, overlooking the spatial dimension. As a result, it remains unclear how the underlying structure could be extended to incorporate the spatial axis beyond temporal order alone, and what benefits such an extension would bring. To address this, we explore the use of Spatially-Aware Transformer models that incorporate spatial information. These models enable the creation of place-centric episodic memory that considers both temporal and spatial dimensions. Adopting this approach, we demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks. Additionally, we propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning that aims to optimize memory utilization efficiency. Our experiments show the advantages of the proposed model in various environments and across multiple downstream tasks, including prediction, generation, reasoning, and reinforcement learning. The source code for our models and experiments will be available at https://github.com/spatiallyawaretransformer. | - |
dc.language | English | - |
dc.publisher | The International Conference on Learning Representations (ICLR) | - |
dc.title | Spatially-Aware Transformers for Embodied Agents | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | The Twelfth International Conference on Learning Representations | - |
dc.identifier.conferencecountry | AU | - |
dc.identifier.conferencelocation | Vienna | - |
dc.contributor.localauthor | Ahn, Sungjin | - |
dc.contributor.nonIdAuthor | Cho, Junmo | - |
dc.contributor.nonIdAuthor | Yoon, Jasik | - |
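The abstract above describes place-centric episodic memory only at a high level, and the paper's implementation is not reproduced in this record. As a minimal, illustrative sketch, the snippet below shows one way experiences could be grouped by place identifier with a bounded buffer per place, instead of being kept in a single temporally ordered stream; the class name `PlaceCentricMemory`, the parameter `capacity_per_place`, and the per-place FIFO policy are hypothetical choices for illustration, not details taken from the paper.

```python
# Illustrative sketch only (not the authors' implementation): a place-keyed
# episodic memory with a bounded FIFO buffer per place, assuming discrete
# place identifiers and arbitrary observation payloads.
from collections import defaultdict, deque


class PlaceCentricMemory:
    """Stores (timestep, observation) pairs grouped by place id.

    A purely temporal FIFO memory evicts the oldest experience regardless of
    where it was collected; here each place keeps its own bounded buffer, so
    rarely visited places are not crowded out by frequently visited ones.
    """

    def __init__(self, capacity_per_place: int):
        self.capacity_per_place = capacity_per_place
        self._buffers = defaultdict(lambda: deque(maxlen=capacity_per_place))

    def write(self, place_id, timestep, observation):
        # Appending to a full deque silently drops its oldest entry,
        # i.e. FIFO eviction within a place rather than globally.
        self._buffers[place_id].append((timestep, observation))

    def read(self, place_id):
        # Return all stored experiences for one place, oldest first, ready to
        # be consumed as a context window by a downstream sequence model.
        return list(self._buffers[place_id])


if __name__ == "__main__":
    memory = PlaceCentricMemory(capacity_per_place=3)
    trajectory = [("kitchen", 0), ("hall", 1), ("kitchen", 2),
                  ("kitchen", 3), ("kitchen", 4), ("hall", 5)]
    for place, t in trajectory:
        memory.write(place, t, observation=f"obs_{t}")
    # The kitchen buffer keeps only its 3 most recent experiences,
    # while the hall buffer is unaffected by kitchen traffic.
    print(memory.read("kitchen"))  # [(2, 'obs_2'), (3, 'obs_3'), (4, 'obs_4')]
    print(memory.read("hall"))     # [(1, 'obs_1'), (5, 'obs_5')]
```

This hand-coded per-place eviction rule is only an analogue of the kind of allocation decision that, per the abstract, the paper's Adaptive Memory Allocator learns with reinforcement learning rather than fixing in advance.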