Bayes-Adaptive Monte-Carlo Planning and Learning for Goal-Oriented Dialogues

We consider a strategic dialogue task, where the ability to infer the other agent’s goal is critical to the success of the conversational agent. While this problem can be naturally formulated as Bayesian planning, it is known to be a very difficult problem due to its enormous search space consisting of all possible utterances. In this paper, we introduce an efficient Bayes-adaptive planning algorithm for goal-oriented dialogues, which combines RNN-based dialogue generation and MCTS-based Bayesian planning in a novel way, leading to robust decision-making under the uncertainty of the other agent’s goal. We then introduce reinforcement learning for the dialogue agent that uses MCTS as a strong policy improvement operator, casting reinforcement learning as iterative alternation of planning and supervised-learning of self-generated dialogues. In the experiments, we demonstrate that our Bayes-adaptive dialogue planning agent significantly outperforms the state-of-the-art in a negotiation dialogue domain. We also show that reinforcement learning via MCTS further improves end-task performance without diverging from human language.
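To make the planning idea in the abstract concrete, the sketch below illustrates (in Python) how Bayes-adaptive MCTS can average utterance values over a posterior on the other agent's goal: each simulation samples a goal from the posterior, rolls out candidate utterances with UCT, and backs up the end-task reward. This is a minimal illustrative sketch only; all names (candidate_utterances, simulate_reply, reward, GOALS) are hypothetical placeholders standing in for the paper's RNN-based dialogue generator and negotiation environment, not its actual implementation.

```python
# Hypothetical sketch of Bayes-adaptive MCTS for a goal-oriented dialogue agent.
# Placeholders stand in for the RNN utterance generator and the dialogue simulator.
import math
import random
from collections import defaultdict

GOALS = ["wants_books", "wants_hats", "wants_balls"]  # assumed opponent goals
C_UCT = 1.4                                           # UCT exploration constant


def candidate_utterances(state):
    # Stand-in for an RNN-based generator proposing a small set of utterances.
    return ["propose_split_A", "propose_split_B", "agree", "disagree"]


def simulate_reply(state, utterance, goal):
    # Stand-in for the learned dialogue model conditioned on the sampled goal:
    # returns the next dialogue state and whether the dialogue has ended.
    next_state = state + (utterance,)
    done = utterance in ("agree", "disagree") or len(next_state) >= 6
    return next_state, done


def reward(state, goal):
    # Placeholder end-task reward (e.g., a negotiation score under this goal).
    return random.random()


def bayes_adaptive_mcts(root_state, posterior, n_simulations=200):
    """Each simulation samples the opponent's goal from the current posterior,
    so utterance values are averaged over the agent's goal uncertainty."""
    N = defaultdict(int)    # visit counts per (state, utterance)
    Q = defaultdict(float)  # running mean return per (state, utterance)

    for _ in range(n_simulations):
        goal = random.choices(GOALS, weights=[posterior[g] for g in GOALS])[0]
        state, done, path = root_state, False, []
        while not done:
            actions = candidate_utterances(state)
            total = sum(N[(state, a)] for a in actions) + 1
            # UCT selection over candidate utterances.
            utterance = max(
                actions,
                key=lambda a: Q[(state, a)]
                + C_UCT * math.sqrt(math.log(total) / (N[(state, a)] + 1)),
            )
            path.append((state, utterance))
            state, done = simulate_reply(state, utterance, goal)
        ret = reward(state, goal)
        for s, a in path:  # back up the return along the simulated dialogue
            N[(s, a)] += 1
            Q[(s, a)] += (ret - Q[(s, a)]) / N[(s, a)]

    return max(candidate_utterances(root_state),
               key=lambda a: N[(root_state, a)])


if __name__ == "__main__":
    uniform_posterior = {g: 1.0 / len(GOALS) for g in GOALS}
    print(bayes_adaptive_mcts(tuple(), uniform_posterior))
```

In the paper's learning loop, the action visit counts produced by such a search would also serve as improved targets for supervised training of the policy, iterating planning and learning on self-generated dialogues.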
Publisher
Association for the Advancement of Artificial Intelligence
Issue Date
2020-02-11
Language
English
Citation
34th AAAI Conference on Artificial Intelligence (AAAI 2020), pp. 7994-8001
ISSN
2159-5399
URI
http://hdl.handle.net/10203/278165
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.