More about Adam Gosztolai:
Dynamics of Neural Systems Laboratory
Title and abstract of his presentation:
MARBLE: interpretable representations of neural population dynamics using geometric deep learning
It is increasingly recognised that computations in the brain and artificial neural networks can be understood as outputs of a high-dimensional dynamical system formed by the activity of large neural populations. Yet revealing the structure of the underlying latent dynamical processes from data and interpreting their relevance to computational tasks remains a fundamental challenge. A prominent line of research has observed that task-relevant neural activity often takes place on low-dimensional smooth subspaces of the state space called neural manifolds. However, there is a lack of theoretical frameworks for the unsupervised representation of neural dynamics that are interpretable in terms of behavioural variables, comparable across systems, and decodable to behaviour with high accuracy.
In my talk, I will introduce Manifold Representation Basis Learning (MARBLE), a fully unsupervised representation-learning framework for non-linear dynamical systems. This approach combines empirical dynamical modelling and geometric deep learning to transform neural activations during a set of trials into statistical distributions of local flow fields (LFFs). Our central insight is that LFFs vary continuously over the neural manifold, allowing for unsupervised learning, and are preserved under different manifold embeddings, allowing the comparison of neural computations across neural networks and animals.
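As a rough intuition for the local flow fields mentioned above, one can picture velocity vectors estimated along sampled trajectories and grouped by neighbourhood on the manifold. The sketch below is purely illustrative and is not the MARBLE implementation; the function name, the finite-difference velocity estimate, and the fixed-radius neighbourhood are all assumptions made for the example.

```python
import numpy as np

def local_flow_fields(trajectories, radius=0.6):
    """Illustrative sketch (not MARBLE's actual code): approximate local
    flow fields (LFFs) from sampled trajectories. Velocities come from
    finite differences; the LFF at a point is the set of velocity vectors
    sampled within `radius` of that point."""
    points, vels = [], []
    for traj in trajectories:          # traj: (T, d) array of states
        v = np.diff(traj, axis=0)      # finite-difference velocity estimate
        points.append(traj[:-1])       # anchor each velocity at its start point
        vels.append(v)
    points = np.vstack(points)
    vels = np.vstack(vels)
    # For each sample point, gather the velocity vectors of its neighbours:
    # this local bundle of vectors is one "local flow field".
    lffs = []
    for p in points:
        mask = np.linalg.norm(points - p, axis=1) < radius
        lffs.append(vels[mask])
    return points, lffs
```

In this toy picture, the statistical distribution of such local vector bundles over the manifold is what a representation-learning model would then consume.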
I will then show that MARBLE offers a well-defined similarity metric between different dynamical systems that is expressive enough to compare computations and detect fine-grained changes in dynamics due to task variables, e.g., decision thresholds and gain modulation. Being unsupervised, MARBLE is uniquely suited to biological discovery. I will show that it discovers more interpretable latent representations in several motor, navigation and cognitive tasks than autoregressive models or (semi-)supervised representation-learning methods. Intriguingly, this interpretability is highly advantageous for performance in downstream tasks, such as decoding neural activity into behaviour. Our results suggest that exploiting the manifold structure yields a new class of algorithms with higher performance and the ability to assimilate data across experiments.
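To make the idea of a similarity metric between dynamical systems concrete: once each system is summarised as a distribution of latent features, any distance between distributions can serve as a comparison score. The energy distance below is a stand-in chosen for illustration, not the metric the talk describes; the function names and the use of raw feature arrays are assumptions.

```python
import numpy as np

def energy_distance(X, Y):
    """Sketch of a distributional similarity score between two systems'
    latent feature sets X, Y (each shaped (n_samples, d)). Energy distance
    is an assumed stand-in for whatever metric MARBLE actually defines:
    it is zero for identical distributions and grows as they diverge."""
    def mean_pdist(A, B):
        # Mean Euclidean distance over all pairs of rows (A_i, B_j).
        diffs = A[:, None, :] - B[None, :, :]
        return np.linalg.norm(diffs, axis=-1).mean()
    return 2 * mean_pdist(X, Y) - mean_pdist(X, X) - mean_pdist(Y, Y)
```

Two recordings of the same computation, even on differently embedded manifolds, would ideally map to nearby feature distributions and hence a small score, while a change in task variables would show up as a larger one.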
Thursday, July 3rd, 2025 at 4:30 pm // Kolingasse 14-16, 1090 Wien, SR 5, ground floor