DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim Y. | - |
dc.contributor.author | Kim, Kee-Eung | - |
dc.date.accessioned | 2013-03-29T07:36:19Z | - |
dc.date.available | 2013-03-29T07:36:19Z | - |
dc.date.created | 2012-02-06 | - |
dc.date.issued | 2010-08-30 | - |
dc.identifier.citation | 11th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2010, v., no., pp. 614-619 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10203/169081 | - |
dc.description.abstract | We present a memory-bounded approximate algorithm for solving infinite-horizon decentralized partially observable Markov decision processes (DEC-POMDPs). In particular, we improve upon the bounded policy iteration (BPI) approach, which searches for a locally optimal stochastic finite-state controller, by incorporating reachability analysis over controller nodes. As a result, the algorithm applies different optimization criteria to reachable and unreachable nodes, making the search for an optimal policy more effective. Through experiments on benchmark problems, we show that our algorithm is competitive with the recent nonlinear optimization approach in both solution time and policy quality. | - |
dc.language | ENG | - |
dc.title | Point-based bounded policy iteration for decentralized POMDPs | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-78049238644 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 614 | - |
dc.citation.endingpage | 619 | - |
dc.citation.publicationname | 11th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2010 | - |
dc.identifier.conferencecountry | South Korea | - |
dc.contributor.localauthor | Kim, Kee-Eung | - |
dc.contributor.nonIdAuthor | Kim Y. | - |
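The reachability analysis mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical helper, not code from the paper: it simply computes which nodes of a finite-state controller are reachable from the initial node, the distinction the algorithm uses to apply different optimization criteria.

```python
from collections import deque

def reachable_nodes(start, transitions):
    """Return the set of controller nodes reachable from `start`.

    `transitions` maps a node to the successor nodes that have nonzero
    probability under the stochastic controller (illustrative only).
    """
    seen = {start}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        for succ in transitions.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return seen

# Example: a 4-node controller where node 3 is never entered from node 0.
fsc = {0: [1], 1: [0, 2], 2: [2], 3: [0]}
print(sorted(reachable_nodes(0, fsc)))  # [0, 1, 2]
```

In the paper's setting, nodes outside this reachable set can be optimized under a looser criterion, since they do not affect the value of the executed policy.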