Exploring Feature Dimensions to Learn a New Policy in an Uninformed Reinforcement Learning Task

Cited 8 times in Web of Science; cited 0 times in Scopus
When making a choice with limited information, we explore new features through trial-and-error to learn how they are related. However, few studies have investigated exploratory behaviour when information is limited. In this study, we address, at both the behavioural and neural levels, how, when, and why humans explore new feature dimensions to learn a new policy for choosing a state-space. We designed a novel multi-dimensional reinforcement learning task to encourage participants to explore and learn new features, and then used a reinforcement learning algorithm to model policy exploration and learning behaviour. Our results provide the first evidence that, when humans explore new feature dimensions, their values are transferred from the previous policy to the new online (active) policy, as opposed to being learned from scratch. We further demonstrated that exploration may be regulated by the level of cognitive ambiguity, and that this process might be controlled by the frontopolar cortex. This opens up new possibilities for further understanding how humans explore new features in an open space with limited information.
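The record gives no modelling details beyond the abstract, but the value-transfer idea can be illustrated with a minimal sketch. The Python snippet below is a hypothetical toy, not the authors' model: it compares a tabular Q-learner whose table for a newly explored feature dimension is initialised from the previous policy's values against one initialised from scratch. The environment, reward structure, and parameter values are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 4, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1

# Q-values learned under the previous policy (toy values, not from the paper).
q_old = rng.random((n_states, n_actions))

# Value transfer: initialise the new policy's Q-table from the old one,
# rather than from zeros ("learning from scratch").
q_transfer = q_old.copy()
q_scratch = np.zeros((n_states, n_actions))


def step(state, action):
    """Toy environment: hypothetical reward structure, not the task from the paper."""
    reward = 1.0 if action == state % n_actions else 0.0
    next_state = rng.integers(n_states)
    return reward, next_state


def q_update(q, state, action, reward, next_state):
    # Standard one-step Q-learning update.
    td_target = reward + gamma * q[next_state].max()
    q[state, action] += alpha * (td_target - q[state, action])


def run(q, n_trials=500):
    rewards, state = [], rng.integers(n_states)
    for _ in range(n_trials):
        # Epsilon-greedy exploration over the new feature dimension.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(q[state].argmax())
        reward, next_state = step(state, action)
        q_update(q, state, action, reward, next_state)
        rewards.append(reward)
        state = next_state
    return float(np.mean(rewards))


print("mean reward, transferred values:", run(q_transfer))
print("mean reward, learned from scratch:", run(q_scratch))
```

Under these assumptions, the transferred Q-table starts from informative values and typically earns reward earlier than the scratch learner, which mirrors (in a very simplified way) the transfer-versus-scratch contrast the abstract describes.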
Publisher
NATURE PUBLISHING GROUP
Issue Date
2017-12
Language
English
Article Type
Article
Citation

SCIENTIFIC REPORTS, v.7, article no. 17676

ISSN
2045-2322
DOI
10.1038/s41598-017-17687-2
URI
http://hdl.handle.net/10203/238813
Appears in Collection
BiS-Journal Papers (Journal Papers)