Distributed multi-agent preference learning for an IoT-enriched smart space

Cited 2 times in Web of Science; cited 1 time in Scopus
  • Hits: 203
  • Downloads: 0
DC Field: Value (Language)
dc.contributor.author: Son, Heesuk (ko)
dc.contributor.author: Park, Jeongwook (ko)
dc.contributor.author: Kim, Hyunju (ko)
dc.contributor.author: Lee, Dongman (ko)
dc.date.accessioned: 2020-05-06T01:20:39Z
dc.date.available: 2020-05-06T01:20:39Z
dc.date.created: 2020-04-29
dc.date.issued: 2019-07-07
dc.identifier.citation: 39th IEEE International Conference on Distributed Computing Systems, ICDCS 2019, pp. 2090-2100
dc.identifier.uri: http://hdl.handle.net/10203/274055
dc.description.abstract: There have been several efforts on preference learning in a smart space by means of multi-agent collaboration. Each agent captures a user action or handles part of the learning, but decision making is done in a centralized manner. This makes it difficult for a smart space to cope with the learning complexity caused by the increase and reconfiguration of smart devices. While this complexity can be relieved by articulating the learning space, that approach is not flexible because the articulation procedure must be repeated whenever the smart space is reconfigured. In this paper, we propose a distributed multi-agent preference learning architecture which allows a group of physically separate agents to collaborate with each other to learn a user's task preference efficiently in an IoT-enriched smart space. For this, the proposed scheme provides four key features: an ontology-based knowledge structure for task-driven agent collaboration, a knowledge exchange protocol for task-aware causality among agents, Q-learners for observing and learning from user behaviors, and a negotiation and acknowledgement protocol for preventing agents from performing disorganized actions. Evaluation results show that the proposed scheme allows smart device agents to learn user preferences in a fully distributed way and outperforms existing approaches in terms of learning speed and system overhead. (A minimal illustrative Q-learner sketch appears after the metadata fields below.)
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.title: Distributed multi-agent preference learning for an IoT-enriched smart space
dc.type: Conference
dc.identifier.wosid: 000565234200195
dc.identifier.scopusid: 2-s2.0-85074815971
dc.type.rims: CONF
dc.citation.beginningpage: 2090
dc.citation.endingpage: 2100
dc.citation.publicationname: 39th IEEE International Conference on Distributed Computing Systems, ICDCS 2019
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Dallas, TX
dc.identifier.doi: 10.1109/ICDCS.2019.00206
dc.contributor.localauthor: Son, Heesuk
dc.contributor.nonIdAuthor: Park, Jeongwook
dc.contributor.nonIdAuthor: Kim, Hyunju
dc.contributor.nonIdAuthor: Lee, Dongman
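The abstract mentions Q-learners that each smart-device agent uses to observe and learn from user behaviors. The following is a minimal sketch, not taken from the paper, of how one per-device tabular Q-learning agent might update its preference estimates from user feedback; the device, states, actions, and reward values are illustrative assumptions, and the paper's ontology-based knowledge structure, knowledge exchange protocol, and negotiation/acknowledgement protocol are not modeled here.

```python
# Minimal illustrative sketch (assumptions, not the authors' code): one
# physically separate smart-device agent keeps its own Q-table and learns
# a user's preferred action per context from observed feedback.

import random
from collections import defaultdict


class DeviceAgent:
    """A single smart-device agent with a local tabular Q-learner."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.actions = actions                  # device-level actions, e.g. ["on", "off", "dim"]
        self.alpha = alpha                      # learning rate
        self.gamma = gamma                      # discount factor
        self.epsilon = epsilon                  # exploration rate
        self.q = defaultdict(float)             # (state, action) -> estimated preference value

    def choose(self, state):
        # epsilon-greedy selection over this agent's local action set
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update driven by user feedback (reward)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


if __name__ == "__main__":
    # Toy run: a simulated user prefers the lamp dimmed in the "evening" context.
    lamp = DeviceAgent(actions=["on", "off", "dim"])
    for _ in range(500):
        state = "evening"
        action = lamp.choose(state)
        reward = 1.0 if action == "dim" else -0.1   # simulated user (dis)approval
        lamp.update(state, action, reward, state)
    print({a: round(lamp.q[("evening", a)], 2) for a in lamp.actions})
```

After a few hundred simulated interactions the "dim" action accumulates the highest Q-value for the "evening" state, which is the kind of per-device preference estimate the distributed agents in the paper would then coordinate on through their collaboration protocols.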
Appears in Collection
CS-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.