DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Bumjin | ko |
dc.contributor.author | Kang, Cheongwoong | ko |
dc.contributor.author | Choi, Jaesik | ko |
dc.date.accessioned | 2023-09-08T07:03:12Z | - |
dc.date.available | 2023-09-08T07:03:12Z | - |
dc.date.created | 2023-09-08 | - |
dc.date.issued | 2021-10-12 | - |
dc.identifier.citation | 2021 21st International Conference on Control, Automation and Systems (ICCAS), pp.104 - 107 | - |
dc.identifier.issn | 2093-7121 | - |
dc.identifier.uri | http://hdl.handle.net/10203/312362 | - |
dc.description.abstract | In this paper, we design a reinforcement learning environment for distributed patrolling agents. In this partially observable environment, each agent acts in its own interest, and the non-stationarity of the multi-agent setting discourages agents from invading other agents' regions. In our environment, the patrolling routes for the agents are generated implicitly. We propose several variants of the environment and evaluate them with different initial positions of the agents. We also show how the reinforcement learning algorithm changes the distribution of agents over the course of training. | - |
dc.language | English | - |
dc.publisher | IEEE | - |
dc.title | Generating Multi-agent Patrol Areas by Reinforcement Learning | - |
dc.type | Conference | - |
dc.identifier.wosid | 000750950700014 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 104 | - |
dc.citation.endingpage | 107 | - |
dc.citation.publicationname | 2021 21st International Conference on Control, Automation and Systems (ICCAS) | - |
dc.identifier.conferencecountry | KO | - |
dc.identifier.conferencelocation | Jeju Island | - |
dc.identifier.doi | 10.23919/iccas52745.2021.9650047 | - |
dc.contributor.localauthor | Choi, Jaesik | - |