Research projects

Hippocampal representational geometry underlying fear generalization

Fear generalization, a core feature of PTSD, involves the transfer of fear responses from one context to another. A recent study demonstrated that mice generalize fear retrospectively: after an aversive experience in one context, they come to fear a previously neutral context. This raises intriguing questions: how do these memories become similar? Does the neutral memory transform toward the aversive one, or is there a common fear-coding direction shared by both memories? I am presenting the answer at SfN 2023; come to my poster if you want to know more!

Schematic of representational change in fear generalization

Shared representational geometry in the rodent hippocampus

[Curr Bio, 2021] [Featured Dispatch] [Code]

Place cells in the hippocampus form cognitive maps that support spatial navigation and episodic memory. However, where place cells position their firing fields is famously hard to predict: knowing how a given subject encodes locations in environment A tells you little about how it encodes environment B. In this project, we developed a cross-subject alignment method based on Procrustes analysis and showed that one subject's place cell activity can be predicted better than chance from another subject's, suggesting that hippocampal maps of different environments may be formed by a general rule shared across subjects.
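The core alignment step can be sketched with orthogonal Procrustes analysis, which finds the rotation best mapping one subject's (dimensionality-reduced) population activity onto another's. This is a minimal sketch, not the published Hypertransform code; the function name and toy data are illustrative.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: find the orthogonal matrix Q minimizing
    ||X @ Q - Y||_F, where rows of X and Y are paired samples (e.g. the
    low-dimensional population activity of two subjects at matched
    positions). The solution is Q = U @ Vt from the SVD of X.T @ Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy demo: Y is an exactly rotated copy of X, so alignment should
# recover the rotation and reduce the mismatch to ~zero.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
Y = X @ R
Q = procrustes_align(X, Y)
residual = np.linalg.norm(X @ Q - Y)
```

With real neural data the two subjects' activity would first be projected into a common low-dimensional space (e.g. via PCA) before alignment, and the residual measures how well one subject's map predicts the other's.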

Schematic of Hypertransform procedure

Hierarchical reinforcement learning in rodents


Schematic of HRL task

A key component of adaptive behavior is planning and coordinating action sequences to achieve a complex goal. The framework of hierarchical reinforcement learning hypothesizes that a complex goal can be divided into sub-goals; correspondingly, a long sequence of actions is chunked into sub-routines for sub-goal completion, which can then be reused in different tasks. We designed a rodent version of the traveling salesman task to study how sequences of goal-directed actions are coordinated in the brain.
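The decomposition idea can be illustrated with a toy two-level controller: a high-level policy chooses which sub-goal to pursue next, and a reusable low-level sub-routine executes the action sequence to reach it. This is a hypothetical sketch on a 1-D track, not the actual task design; all names and the nearest-neighbour ordering heuristic are illustrative.

```python
def go_to(position, target):
    """Low-level sub-routine: a reusable action sequence that greedily
    steps along a 1-D track until the sub-goal is reached."""
    actions = []
    while position != target:
        step = 1 if target > position else -1
        position += step
        actions.append(step)
    return position, actions

def run_episode(start, subgoals):
    """High-level controller: pick sub-goals one at a time (here with a
    simple nearest-neighbour heuristic, in the spirit of a traveling
    salesman-style task) and delegate execution to the sub-routine."""
    position, trajectory = start, []
    remaining = list(subgoals)
    while remaining:
        target = min(remaining, key=lambda g: abs(g - position))
        remaining.remove(target)
        position, actions = go_to(position, target)
        trajectory.append((target, len(actions)))  # (sub-goal, steps taken)
    return trajectory

# From 0, visit sites 5, -3, and 2: nearest-first gives 2, then 5, then -3.
print(run_episode(0, [5, -3, 2]))
```

The point of the hierarchy is that `go_to` is goal-agnostic: the same sub-routine serves every sub-goal, so only the high-level ordering needs to be relearned in a new task.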

Deep reinforcement learning models

[Deep Q-learning] [Hierarchical-DQN] [Deep Policy Gradient]


Deep Reinforcement Learning (Deep RL) has gained widespread attention for its ability to outperform humans in games such as chess, Go, and Atari video games. Typically, Deep RL systems use deep neural networks to create non-linear mappings from sensory inputs to action values or action probabilities (policies) in order to maximize long-term reward. These networks are updated with error signals, often via back-propagation, to improve reward estimates and increase the frequency of highly rewarded actions. Here are my implementations of several deep reinforcement learning models in PyTorch.
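The value update at the heart of these models can be shown without a neural network: tabular Q-learning applies the same temporal-difference rule that a deep Q-network generalizes with function approximation. A minimal sketch on a tiny corridor MDP (the environment and hyperparameters are illustrative, not from the linked implementations):

```python
import random

# Corridor with states 0..4; reward 1.0 for reaching state 4 (terminal).
N_STATES, ACTIONS = 5, (-1, 1)        # actions: move left / move right
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Environment dynamics: move, clamped to the track ends."""
    s2 = max(0, min(N_STATES - 1, s + a))
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward

random.seed(0)
for _ in range(500):                  # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # temporal-difference update toward the bootstrapped target:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After learning, the greedy policy moves right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

A DQN replaces the table `Q` with a network `Q(s, a; theta)` and minimizes the squared difference between `Q(s, a)` and the same bootstrapped target via back-propagation, which is exactly the "error signal" described above.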