Research projects

Paradoxical replay maintains balanced and robust task representations

Schematic of replay reshaping task representations

How should tasks be represented in the brain to ensure robust performance while preserving adaptability in a dynamic environment? We used neural network models to simulate two scenarios: rich learning (structured representations tailored to task demands) and lazy learning (unstructured, task-agnostic representations). We found that the rich learning network suffers from degraded representations after repeated training on one contingency, and that replay of the other contingency acts as a remedy. In contrast, the lazy learning network is unaffected by either repeated training or replay. This finding provides insight into recent empirical studies reporting paradoxical hippocampal replay that encodes trajectories opposite to ongoing task behavior. Stay tuned for further details in the upcoming preprint!
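As a rough illustration of the rich/lazy distinction (a minimal sketch under standard assumptions, not the actual model from this project), the initialization scale of a one-hidden-layer network is one common knob for switching regimes: small initial weights force the hidden features to reorganize for the task (rich), while large initial weights leave the random features nearly frozen (lazy).

```python
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    """One hidden layer; init_scale switches between rich and lazy regimes."""
    def __init__(self, n_in=10, n_hidden=200, n_out=2, init_scale=1.0):
        super().__init__()
        self.w1 = nn.Parameter(init_scale * torch.randn(n_hidden, n_in) / n_in ** 0.5)
        self.w2 = nn.Parameter(init_scale * torch.randn(n_out, n_hidden) / n_hidden ** 0.5)

    def hidden(self, x):
        return torch.relu(x @ self.w1.T)

    def forward(self, x):
        return self.hidden(x) @ self.w2.T

def representation_drift(net, x, y, n_steps=2000, lr=1e-2):
    """Train on one contingency, then report how far the hidden layer moved.

    Hyperparameters are illustrative only."""
    h0 = net.hidden(x).detach()
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
    return ((net.hidden(x) - h0).norm() / h0.norm()).item()

x, y = torch.randn(64, 10), torch.randn(64, 2)
print(representation_drift(TwoLayerNet(init_scale=0.1), x, y))  # rich: features reorganize
print(representation_drift(TwoLayerNet(init_scale=3.0), x, y))  # lazy: features barely move
```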

Hippocampal representational geometry underlying fear generalization

[SfN Poster, 2023]

Fear generalization, a core symptom of PTSD, involves transferring fear responses from one context to another. In a recent study, mice exhibited retrospective fear generalization to a previously neutral context following aversive experiences. By examining the neural representational geometry, we found that the representations of the neutral and aversive contexts become more similar in fear-transferred mice. Furthermore, during freezing states, the neutral-context representation shifted toward the aversive context, whereas during active exploration, neutral-context activity followed a common fear transformation. This suggests a double dissociation in the representational changes underlying retrospective fear generalization. Stay tuned for further details in the upcoming preprint!
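For intuition, here is a toy sketch of one way to quantify this kind of representational change (a hypothetical function with a simple cosine-similarity metric, chosen for illustration; the study's actual analyses are more involved):

```python
import numpy as np

def context_similarity(neutral, aversive):
    """Cosine similarity between trial-averaged population vectors.

    neutral, aversive: (n_trials, n_neurons) firing-rate matrices recorded in
    the two contexts. Higher values mean more similar representations."""
    u, v = neutral.mean(axis=0), aversive.mean(axis=0)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Fake data standing in for recordings; in the fear-transferred group this
# similarity would increase relative to controls.
rng = np.random.default_rng(0)
print(context_similarity(rng.random((40, 120)), rng.random((40, 120))))
```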

Schematic of representational change in fear generalization

Shared representational geometry in the rodent hippocampus

[Curr Bio, 2021] [Featured Dispatch] [Code]

Place cells in the hippocampus form cognitive maps that support spatial navigation and episodic memory. However, where hippocampal place cells have their fields is famously hard to predict: knowing how a given subject encodes locations in environment A tells you little about how it encodes environment B. In this project, we developed a cross-subject alignment method based on PCA and Procrustes analysis. We showed that the place cell activity of one subject can be predicted better than chance from that of another subject, suggesting that hippocampal maps of different environments may be formed by a general rule shared across subjects.
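The core of the alignment can be sketched in a few lines (a simplified illustration assuming spatially binned tuning matrices; the function name is hypothetical and details differ from the published pipeline):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import orthogonal_procrustes

def align_subjects(rates_a, rates_b, n_components=10):
    """rates_a, rates_b: (n_positions, n_cells) tuning matrices sampled at the
    same spatial bins; the number of cells may differ between subjects."""
    low_a = PCA(n_components).fit_transform(rates_a)  # shared low-dim space
    low_b = PCA(n_components).fit_transform(rates_b)
    R, _ = orthogonal_procrustes(low_a, low_b)        # rotation mapping A onto B
    return low_a @ R, low_b

rng = np.random.default_rng(1)
pred_b, true_b = align_subjects(rng.random((50, 80)), rng.random((50, 95)))
error = np.linalg.norm(pred_b - true_b)  # compare against a shuffled baseline
```

Procrustes analysis finds the rotation that best superimposes one low-dimensional point cloud on another, so prediction above a shuffled baseline indicates shared representational geometry.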

Schematic of Hypertransform procedure

Hierarchical reinforcement learning in rodents

[Video]

Schematic of HRL task

A key component of adaptive behavior is the planning and coordination of action sequences to achieve a complex goal. The framework of hierarchical reinforcement learning hypothesizes that a complex goal can be divided into sub-goals; correspondingly, a long sequence of actions is chunked into sub-routines that complete each sub-goal and can then be reused in different tasks (see the sketch below). For my post-bac research, I designed a rodent version of the traveling salesman task to study how sequences of goal-directed actions are coordinated in the brain.
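To make the chunking idea concrete, here is a toy illustration (entirely hypothetical names, and no learning involved) of a meta-controller selecting sub-goals whose action sub-routines are reused:

```python
import random

# Reusable action chunks, one per sub-goal (e.g., reward sites on a maze).
SUBROUTINES = {
    "reach_site_1": ["forward", "forward", "left"],
    "reach_site_2": ["forward", "right", "forward"],
}

def meta_controller(remaining_subgoals):
    # Placeholder policy; a learned agent would pick sub-goals to maximize
    # long-term reward.
    return random.choice(sorted(remaining_subgoals))

def run_episode(subgoals):
    actions, remaining = [], set(subgoals)
    while remaining:
        goal = meta_controller(remaining)
        actions.extend(SUBROUTINES[goal])  # reuse the chunked sub-routine
        remaining.remove(goal)
    return actions

print(run_episode(["reach_site_1", "reach_site_2"]))
```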

Deep reinforcement learning models

[Deep Q-learning] [Hierarchical-DQN] [Deep Policy Gradient]

Schematic of deep Q-learning

Deep Reinforcement Learning (Deep RL) has gained widespread attention for its ability to outperform humans in games such as chess, Go, and Atari. Typically, Deep RL systems use deep neural networks to create non-linear mappings from sensory inputs to action values or action probabilities (policies) in order to maximize long-term reward. These networks are updated with error signals, often via back-propagation, to improve reward estimation and increase the frequency of highly rewarded actions. I implemented several deep reinforcement learning algorithms in PyTorch to compare with animal behavior in a sequential decision-making task.
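As an illustration of the core update these systems share, here is a minimal deep Q-learning step in PyTorch (a generic sketch with made-up dimensions, not the exact code from my implementations): the network maps observations to action values, and a temporal-difference error is back-propagated to pull Q(s, a) toward r + gamma * max_a' Q(s', a').

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # discount factor

def dqn_update(s, a, r, s_next, done):
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) for taken actions
    with torch.no_grad():                                  # bootstrapped target
        target = r + gamma * q_net(s_next).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One batch of fake transitions (batch of 32, 4-dim observations, 2 actions).
s, s_next = torch.randn(32, 4), torch.randn(32, 4)
a = torch.randint(0, 2, (32,))
r, done = torch.randn(32), torch.zeros(32)
dqn_update(s, a, r, s_next, done)
```

A full implementation would add an experience replay buffer, an exploration policy such as epsilon-greedy, and a separate target network for stability.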