Reinforcement learning strategies support generalization of learned hierarchical knowledge

dc.contributor.advisor: Preston, Alison
dc.creator: McKee, Connor
dc.date.accessioned: 2021-12-09T14:09:03Z
dc.date.available: 2021-12-09T14:09:03Z
dc.date.issued: 2021
dc.description.abstract: In our everyday lives, we must learn and use context-specific information to inform our decision making. How do we learn which choices to make based on our memories? Prior rodent work has demonstrated that, after learning, knowledge becomes organized hierarchically in a context-dependent manner. Here, we quantify the emergence of context-dependent hierarchical knowledge during learning and examine the flexible use of that knowledge to generalize across different scenarios. Participants learned about objects with context-dependent reward values in an X-shaped virtual environment consisting of an elongated, contextually varying hallway with decision points at either end. First, participants learned the context-dependent object-reward pairings for one set of three objects. Next, they learned the pairings for a new set of three objects. We hypothesized that prior knowledge of the hierarchical structure would generalize to the second object set, as evidenced by faster learning. Participants gradually acquired the context-dependent object-reward pairings over training. When introduced to the new object set, learning rates did not significantly differ from those for the first set, indicating generalization of the hierarchical reward structure to the new objects. To further quantify how decision making unfolded, we fit three types of reinforcement learning (RL) models to the behavioral data: model-free (MF), model-based (MB), and a hybrid model-based/model-free (MBMF) combination. The MB model performed best at predicting future decisions and reward-value expectations from participants' past selections, indicating that current decisions were guided by prior choices. The MBMF model best captured changes in participant learning across runs, possibly due to its ability to weigh the two learning strategies. Overall, our results demonstrate that participants learned to flexibly decide which actions were most adaptive, promoting correct decision making in a given context. Furthermore, the structure of prior knowledge may support the generalization of learned experience.
dc.description.department: Neuroscience
dc.identifier.uri: https://hdl.handle.net/2152/90786
dc.identifier.uri: http://dx.doi.org/10.26153/tsw/17705
dc.language.iso: eng
dc.relation.ispartof: Honors Theses
dc.rights.restriction: Open
dc.subject: Neuroscience
dc.subject: psychology
dc.subject: learning
dc.subject: memory
dc.subject: generalization
dc.subject: navigation
dc.subject: reinforcement learning
dc.subject: computational modeling
dc.title: Reinforcement learning strategies support generalization of learned hierarchical knowledge
dc.type: Thesis
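
The abstract above compares model-free, model-based, and hybrid MBMF reinforcement-learning accounts of participants' context-dependent choices. As a minimal illustrative sketch only: the snippet below shows how a Daw-style hybrid learner is commonly formalized for a task with context-dependent object-reward pairings. The toy two-context, three-object environment and all parameter names (alpha, beta, w) are assumptions for illustration, not the thesis's actual model or fitted values.

```python
import numpy as np

# Illustrative hybrid MBMF learner (assumed formulation, not the thesis's model):
# choices are driven by a weighted mix of cached model-free values and a
# learned model of each object's reward in each context.
rng = np.random.default_rng(0)
n_contexts, n_objects = 2, 3
alpha = 0.2   # model-free learning rate (assumed)
beta = 5.0    # softmax inverse temperature (assumed)
w = 0.5       # MB weight: w=0 is purely model-free, w=1 purely model-based

q_mf = np.zeros((n_contexts, n_objects))          # cached (context, object) values
reward_model = np.zeros((n_contexts, n_objects))  # model-based reward estimates
counts = np.zeros((n_contexts, n_objects))        # samples per (context, object)

def choose(context):
    """Softmax choice over the weighted combination of MB and MF values."""
    q = w * reward_model[context] + (1 - w) * q_mf[context]
    p = np.exp(beta * (q - q.max()))  # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(n_objects, p=p)

def update(context, obj, reward):
    # Model-free: incremental prediction-error update of the cached value
    q_mf[context, obj] += alpha * (reward - q_mf[context, obj])
    # Model-based: running-mean estimate of the object's reward in this context
    counts[context, obj] += 1
    reward_model[context, obj] += (reward - reward_model[context, obj]) / counts[context, obj]

# Toy environment: the best object flips across contexts (hierarchical structure)
true_reward = np.array([[1.0, 0.0, 0.5],
                        [0.0, 1.0, 0.5]])
for trial in range(300):
    ctx = trial % n_contexts
    obj = choose(ctx)
    update(ctx, obj, true_reward[ctx, obj] + rng.normal(0.0, 0.1))

print(np.round(w * reward_model + (1 - w) * q_mf, 2))
```

The weighting parameter w is what lets a hybrid of this kind "weigh the two learning strategies": fitting w (along with alpha and beta) to each participant's choices quantifies how model-based versus model-free their behavior was across runs.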

Original bundle

Name: Final Honors Thesis (2).pdf
Size: 1.68 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.64 KB
Format: Item-specific license agreed to upon submission
