How should tasks be represented in the brain to ensure robust performance while preserving adaptability in a dynamic environment? We used neural network models to simulate two scenarios: rich learning (structured representations tailored to task demands) and lazy learning (unstructured, task-agnostic representations). The rich learning network's representations degrade after repeated training on one contingency, and replay of the other contingency acts as a remedy. In contrast, the lazy learning network is unaffected by either repeated training or replay. This finding provides insight into recent empirical studies reporting paradoxical hippocampal replay that encodes trajectories opposite to ongoing task behavior. Stay tuned for further details in the upcoming preprint!
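The replay effect can be illustrated with a minimal sketch (my simplification, not the actual models: a single linear network and two conflicting linear contingencies stand in for the full simulation). Repeated training on contingency B alone degrades performance on contingency A, while interleaving replayed A batches during B training mitigates the forgetting:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat = 10
w_A = rng.standard_normal(n_feat)           # contingency A input-output mapping
w_B = rng.standard_normal(n_feat)           # contingency B input-output mapping

def batch(w_true, n=32):
    """Sample a training batch from one contingency."""
    X = rng.standard_normal((n, n_feat))
    return X, X @ w_true

def sgd_step(w, X, y, lr=0.05):
    """One gradient step on mean squared error."""
    return w - lr * X.T @ (X @ w - y) / len(y)

def loss(w, w_true, n=1000):
    X, y = batch(w_true, n)
    return float(np.mean((X @ w - y) ** 2))

def run(replay):
    w = np.zeros(n_feat)
    for _ in range(500):                    # phase 1: learn contingency A
        w = sgd_step(w, *batch(w_A))
    for step in range(500):                 # phase 2: repeated training on B
        w = sgd_step(w, *batch(w_B))
        if replay and step % 2 == 0:        # interleave replayed A batches
            w = sgd_step(w, *batch(w_A))
    return loss(w, w_A)                     # how well is A still performed?

forgetting = run(replay=False)
with_replay = run(replay=True)
```

Without replay, the network converges fully to contingency B and its error on A grows; with replay it settles on a compromise solution with much lower error on A.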
Fear generalization, a core symptom of PTSD, involves transferring fear responses from one context to another. In a recent study, mice exhibited retrospective fear generalization to a previously neutral context following aversive experiences. By analyzing neural representational geometry, we found that the representations of the neutral and aversive contexts become more similar in fear-transferred mice. Furthermore, during freezing states the neutral-context representation shifts toward the aversive one, whereas during active exploration neutral-context activity follows a common fear-related transformation. This suggests a double dissociation in the representational changes underlying retrospective fear generalization. Stay tuned for further details in the upcoming preprint!
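One simple instantiation of this kind of representational-geometry analysis (a hedged sketch on synthetic data, not the study's pipeline: cosine similarity between mean population vectors stands in for the full geometry analysis) is to quantify how a shift of the neutral-context representation toward the aversive one increases their similarity:

```python
import numpy as np

rng = np.random.default_rng(1)

def context_similarity(act_a, act_b):
    """Cosine similarity between the mean population vectors of two contexts."""
    mu_a, mu_b = act_a.mean(axis=0), act_b.mean(axis=0)
    return float(mu_a @ mu_b / (np.linalg.norm(mu_a) * np.linalg.norm(mu_b)))

# Synthetic stand-in for trial-by-neuron activity (50 trials, 200 neurons)
n_trials, n_neurons = 50, 200
neutral  = rng.normal(1.0, 0.1, size=(n_trials, n_neurons))
aversive = 1.0 + 0.5 * rng.standard_normal(n_neurons) \
           + rng.normal(0.0, 0.1, size=(n_trials, n_neurons))

sim_base = context_similarity(neutral, aversive)

# Model a freezing-state-like effect: shift the neutral representation
# halfway toward the aversive one and re-measure similarity
shift = 0.5 * (aversive.mean(axis=0) - neutral.mean(axis=0))
sim_shifted = context_similarity(neutral + shift, aversive)
```

By construction the shifted neutral representation is geometrically closer to the aversive one, so `sim_shifted` exceeds `sim_base`.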
Place cells in the hippocampus form cognitive maps that support spatial navigation and episodic memory. However, where place cells locate their fields is notoriously hard to predict: knowing how a given subject encodes locations in environment A tells you little about how it encodes environment B. In this project, we developed a cross-subject alignment method based on PCA and Procrustes analysis and showed that one subject's place cell activity can be predicted better than chance from another subject's, suggesting that hippocampal maps of different environments may be formed by a general rule shared across subjects.
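The core of a PCA-plus-Procrustes alignment can be sketched as follows (a simplified illustration on synthetic data; the data shapes, the whitening of PCA scores, and the shared latent trajectory are my assumptions, and the actual method may differ in detail). Each subject's position-binned population activity is reduced to a low-dimensional embedding, and an orthogonal Procrustes rotation maps one subject's embedding onto the other's:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_scores(X, k):
    """Whitened projection of the rows of X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k]

def procrustes_align(A, B):
    """Orthogonal matrix R minimizing ||A @ R - B||_F, solved via SVD of A^T B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Synthetic position-binned firing rates: both subjects' populations are
# random mixtures of the same hypothetical 3-D latent trajectory, plus noise
n_bins = 100
theta = np.linspace(0, 2 * np.pi, n_bins)
latent = np.column_stack([np.sin(theta), np.cos(theta),
                          np.linspace(-1, 1, n_bins)])
subj1 = latent @ rng.standard_normal((3, 40)) \
        + 0.05 * rng.standard_normal((n_bins, 40))
subj2 = latent @ rng.standard_normal((3, 60)) \
        + 0.05 * rng.standard_normal((n_bins, 60))

A, B = pca_scores(subj1, 3), pca_scores(subj2, 3)   # low-dim embeddings
R = procrustes_align(A, B)
pred = A @ R                                        # subject-2 embedding predicted
r = np.corrcoef(pred.ravel(), B.ravel())[0, 1]      # from subject-1 activity
```

Because both embeddings recover (nearly) the same latent subspace, the aligned prediction correlates strongly with the target subject's embedding, far above chance.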
A key component of adaptive behavior is the planning and coordination of action sequences to achieve a complex goal. The framework of hierarchical reinforcement learning hypothesizes that a complex goal can be divided into sub-goals; correspondingly, a long sequence of actions is chunked into sub-routines that complete those sub-goals and can then be reused in different tasks. For my post-bac research, I designed a rodent version of the traveling salesman task to study how sequences of goal-directed actions are coordinated in the brain.
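The hierarchy can be made concrete with a toy sketch (my illustration, not the task design: a 1-D track stands in for the arena). A low-level sub-routine chunks primitive steps into a reusable "go to sub-goal" option, and a high-level policy orders the sub-goals, traveling-salesman style, to minimize total primitive actions:

```python
from itertools import permutations

def go_to(pos, goal):
    """Sub-routine: chunk of primitive steps moving from pos to goal on a line."""
    step = 1 if goal > pos else -1
    actions = []
    while pos != goal:
        actions.append(step)
        pos += step
    return actions

def plan_tour(start, goals):
    """High-level policy: order the sub-goals to minimize total primitive
    steps (brute force, fine for a handful of goals)."""
    best = None
    for order in permutations(goals):
        pos, n = start, 0
        for g in order:
            n += len(go_to(pos, g))   # cost of reusing the sub-routine
            pos = g
        if best is None or n < best[0]:
            best = (n, order)
    return best

cost, order = plan_tour(0, [5, -3, 2])   # visit three reward sites from 0
print(cost, order)                       # → 11 (-3, 2, 5)
```

The same `go_to` chunk is reused for every sub-goal and in any new set of goals, which is the sense in which sub-routines transfer across tasks.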
Deep Reinforcement Learning (Deep RL) has gained widespread attention for its ability to outperform humans in games such as chess, Go, and Atari. Typically, Deep RL systems use deep neural networks to create non-linear mappings from sensory inputs to action values or action probabilities (policies) in order to maximize long-term reward. These networks are updated with error signals, often via back-propagation, to improve reward estimation and increase the frequency of highly rewarded actions. I implemented several deep reinforcement learning algorithms in PyTorch to compare with animal behavior in a sequential decision-making task.
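The value-update loop described above can be sketched in miniature (a stand-in, not the PyTorch implementations: a single linear layer replaces the deep network and a two-state contextual task replaces the full environment, but the update is the same gradient step on the squared TD error):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 2 one-hot states, 2 actions; reward 1 when the action matches the state
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])
W = 0.1 * rng.standard_normal((2, 2))       # Q(s, a) = (x @ W)[a], x one-hot

lr, eps = 0.1, 0.1
for _ in range(2000):
    s = rng.integers(2)
    x = np.eye(2)[s]
    q = x @ W                               # action values for this input
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
    r = R[s, a]
    td_err = r - q[a]                       # one-step task, so no bootstrapping
    W[:, a] += lr * td_err * x              # gradient step on 0.5 * td_err**2

policy = W.argmax(axis=1)
print(policy)                               # → [0 1]
```

The learned greedy policy picks the rewarded action in each state; in the deep versions the one-line gradient step is replaced by back-propagation through the network.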