Biologically inspired deep learning models

Social AI Research Group

At the Vector Institute, I worked in the Social AI Lab on biologically inspired learning systems that explore how neural networks can mimic human perception and adaptation. My research focused on Hebbian and BCM-style plasticity, predictive coding, and sparse representation learning — studying how neurons could self-organize without explicit backpropagation.
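
To give a concrete sense of the kind of local learning rule this covers, here is a minimal, self-contained NumPy sketch of a BCM-style weight update for a single neuron. It is illustrative only; the variable names, constants, and random input are assumptions for the example, not the project's actual code.

```python
import numpy as np

# Illustrative BCM-style plasticity update (not the project's actual code).
# A single linear neuron updates its weights using only local pre/post
# activity; a sliding threshold theta tracks the running average of the
# squared postsynaptic rate, so no backpropagated error signal is needed.

rng = np.random.default_rng(0)
n_inputs, lr, theta_tau = 50, 1e-3, 0.99

w = rng.normal(scale=0.1, size=n_inputs)   # synaptic weights
theta = 1.0                                # sliding modification threshold

for step in range(10_000):
    x = rng.random(n_inputs)               # presynaptic activity (stand-in input)
    y = max(w @ x, 0.0)                    # postsynaptic rate (rectified)

    # BCM rule: potentiate when y exceeds theta, depress otherwise.
    w += lr * y * (y - theta) * x

    # Threshold follows a running average of y**2, which counteracts
    # runaway potentiation and lets the neuron self-stabilize.
    theta = theta_tau * theta + (1 - theta_tau) * y**2
```
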

I developed Python tooling for visualizing layer-wise activation dynamics, tracking co-activation statistics, and benchmarking learning stability under different inhibitory and competitive constraints. This involved building and testing custom PyTorch modules, such as Locally Competitive Algorithm (LCA) convolutional layers, that integrate inhibition, reconstruction loss, and activity regularization into standard deep learning workflows.
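
As a rough illustration of the LCA idea, the sketch below shows a simplified, fully connected PyTorch module (the project's layers were convolutional). The class name, threshold, and iteration count are placeholders chosen for the example rather than the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCALayer(nn.Module):
    """Illustrative LCA sketch: neurons compete through lateral inhibition
    derived from dictionary overlap, yielding a sparse code that
    reconstructs the input."""

    def __init__(self, in_features, n_neurons, thresh=0.1, n_steps=50, dt=0.1):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(in_features, n_neurons) * 0.1)
        self.thresh, self.n_steps, self.dt = thresh, n_steps, dt

    def forward(self, x):
        phi = F.normalize(self.dictionary, dim=0)       # unit-norm dictionary atoms
        drive = x @ phi                                  # feedforward drive
        # Lateral inhibition from atom overlap (self-connections removed).
        gram = phi.t() @ phi - torch.eye(phi.shape[1], device=x.device)
        u = torch.zeros_like(drive)                      # membrane potentials
        for _ in range(self.n_steps):
            a = F.softshrink(u, self.thresh)             # sparse activations (soft threshold)
            u = u + self.dt * (drive - u - a @ gram)     # LCA dynamics with competition
        a = F.softshrink(u, self.thresh)
        recon = a @ phi.t()                              # reconstruction from the sparse code
        return a, recon


# Example: sparse-code a random batch, then combine reconstruction loss
# with an activity-regularization term on the codes.
layer = LCALayer(in_features=784, n_neurons=256)
x = torch.randn(8, 784)
codes, recon = layer(x)
loss = F.mse_loss(recon, x) + 1e-3 * codes.abs().mean()
```
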

The project contributes to a larger goal of developing energy-efficient, self-supervised learning mechanisms that could power future neuromorphic systems. My work bridges neuroscience-inspired theory with practical machine learning implementations, emphasizing interpretability, robustness, and low-power computation.