Cognitive Capacity of Neuromorphic Hierarchical Modular Reservoirs

Poster No:

1176 

Submission Type:

Abstract Submission 

Authors:

Filip Milisav1, Andrea Luppi2, Laura Suárez3, Guillaume Lajoie4, Bratislav Misic1

Institutions:

1Montréal Neurological Institute, McGill University, Montréal, Canada, 2University of Oxford, Oxford, United Kingdom, 3Mila - Quebec Artificial Intelligence Institute, Montréal, Canada, 4University of Montreal, Montréal, Canada

First Author:

Filip Milisav  
Montréal Neurological Institute, McGill University
Montréal, Canada

Co-Author(s):

Andrea Luppi, PhD  
University of Oxford
Oxford, United Kingdom
Laura Suárez  
Mila - Quebec Artificial Intelligence Institute
Montréal, Canada
Guillaume Lajoie  
University of Montreal
Montréal, Canada
Bratislav Misic  
Montréal Neurological Institute, McGill University
Montréal, Canada

Introduction:

The brain is organized in a hierarchy of nested and increasingly polyfunctional circuits. This architecture is believed to strike a balance between information segregation in specialized neuronal communities and global integration via intermodular communication (Hilgetag, 2020). Yet, how hierarchical modularity shapes network function remains unclear. Here, we constrain artificial neural networks with wiring patterns informed by the hierarchical modular topology of biological neural networks. This allows us to causally relate hierarchical network architectures to cognitive capacity across a range of dynamics.

Methods:

We take advantage of reservoir computing (Lukoševičius, 2009), an ideal paradigm for neuromorphic network design. This framework pairs a nonlinear recurrent neural network (RNN; reservoir) with a linear readout module that approximates a target signal through a linear combination of the RNN signals. Only the readout module is trained, leaving RNN connectivity patterns unchanged.
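As a rough illustration of this setup (not the study's exact implementation), the sketch below drives a fixed, randomly connected tanh reservoir with an input signal and trains only a linear readout by ridge regression; the tanh nonlinearity, leak rate, and regularization strength are illustrative assumptions.

```python
import numpy as np

def run_reservoir(W, w_in, u, leak=1.0):
    """Drive a fixed tanh reservoir with input u and return its states.

    W     : (n, n) fixed recurrent weight matrix (never trained)
    w_in  : (n,)   fixed input weights
    u     : (T,)   one-dimensional input signal
    """
    n, T = W.shape[0], len(u)
    x = np.zeros(n)
    states = np.zeros((T, n))
    for t in range(T):
        pre = W @ x + w_in * u[t]
        x = (1 - leak) * x + leak * np.tanh(pre)  # leaky tanh update
        states[t] = x
    return states

def train_readout(states, target, ridge=1e-6):
    """Fit the only trained component: a linear readout via ridge regression."""
    X = np.hstack([states, np.ones((len(states), 1))])  # add bias column
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ target)
```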

Here, we use a stochastic block model (Holland, 1983) to specify 3 hierarchical modularity levels. Starting from 8 modules at the first level, intermodular connectivity is tuned at each level, successively nesting pairs of lower-order modules into higher-order modules (Fig. 1a). We generate 100 synthetic graphs per level while maintaining consistent network density and node degree. By uniformly scaling connection weights, we explore network dynamics across spectral radii (α) spanning stable (α < 1) to chaotic (α > 1) regimes.
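The following sketch shows one way such a nested stochastic block model could be generated and its weights uniformly rescaled to a target spectral radius α. It is a simplified two-level version of the three-level hierarchy described above, and the module sizes and connection probabilities are placeholders rather than the values used in the study.

```python
import numpy as np

def nested_sbm(n_per_module=64, n_modules=8, p_within=0.20,
               p_pair=0.05, p_across=0.01, seed=0):
    """Toy two-level hierarchical SBM: 8 modules nested into 4 module pairs.

    Connection probabilities are illustrative, not the study's values.
    """
    rng = np.random.default_rng(seed)
    n = n_per_module * n_modules
    module = np.repeat(np.arange(n_modules), n_per_module)
    pair = module // 2  # higher-order module = pair of lower-order modules
    # connection probability for every node pair depends on shared (super)module
    P = np.where(module[:, None] == module[None, :], p_within,
                 np.where(pair[:, None] == pair[None, :], p_pair, p_across))
    upper = np.triu(rng.random((n, n)) < P, 1)
    return (upper | upper.T).astype(float)  # symmetric, no self-loops

def scale_to_spectral_radius(W, alpha):
    """Uniformly rescale connection weights so the spectral radius equals alpha."""
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    return alpha * W / radius
```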

To evaluate cognitive capacities, we test memory and multitasking performance. For memory, the readout module is trained to reproduce a delayed version of a random input signal. Performance is assessed using the R² regression score, averaged across 16 time-lags and all pairwise combinations of the original 8 modules as input and output nodes. For multitasking, we add a sinusoidal-to-square wave transformation task, assigning memory tasks to half the modules and nonlinear transformation tasks to the other half (Fig. 1c). R² scores are averaged across all tasks.
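To make the memory benchmark concrete, the sketch below computes the lag-averaged R² score for a single input/output module pair, assuming reservoir states have already been collected (e.g., with the earlier sketch). The train/test split and ridge regularization are assumptions, and the loop over all module pairs described above is omitted.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def memory_score(states, u, max_lag=16, split=0.5):
    """Average R² for reproducing the input delayed by 1..max_lag steps.

    states : (T, n) reservoir activity read from the output module's nodes
    u      : (T,)   random input signal fed to the input module
    """
    T = len(u)
    cut = int(split * T)
    scores = []
    for lag in range(1, max_lag + 1):
        X, y = states[lag:], u[:-lag]  # predict u(t - lag) from x(t)
        model = Ridge(alpha=1e-6).fit(X[:cut], y[:cut])
        scores.append(r2_score(y[cut:], model.predict(X[cut:])))
    return float(np.mean(scores))
```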

To investigate hierarchical modularity in empirical brain networks, we use diffusion-weighted MRI data from 327 participants of the Human Connectome Project (Van Essen, 2013). Nested community partitions are identified using Louvain modularity maximization (Blondel, 2008; Fig. 1g) and (hierarchical) modular null networks are created by applying degree-preserving rewiring on between-module edges (Fig. 1h).
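A minimal sketch of the null-network construction is given below: a Maslov-Sneppen-style edge swap restricted to between-module edges of a binary adjacency matrix, which preserves every node's degree. The community labels would come from the Louvain partition (e.g., via bctpy), and the handling of weighted connectomes and multiple hierarchical levels is omitted here.

```python
import numpy as np

def rewire_between_module_edges(A, labels, n_iter=10000, seed=0):
    """Degree-preserving edge-swap rewiring of between-module edges only.

    A      : (n, n) binary, symmetric adjacency matrix
    labels : (n,)   module assignment of every node
    Repeatedly picks two between-module edges (a, b) and (c, d) and swaps them
    to (a, d) and (c, b) when this creates no self-loop or duplicate edge and
    both new edges still run between modules; node degrees are unchanged.
    """
    rng = np.random.default_rng(seed)
    A = A.copy()
    i, j = np.where(np.triu(A, 1) > 0)  # upper-triangle edge list
    between = [(a, b) for a, b in zip(i, j) if labels[a] != labels[b]]
    for _ in range(n_iter):
        k1, k2 = rng.choice(len(between), 2, replace=False)
        (a, b), (c, d) = between[k1], between[k2]
        if len({a, b, c, d}) < 4 or A[a, d] or A[c, b]:
            continue  # would create a self-loop or duplicate edge
        if labels[a] == labels[d] or labels[c] == labels[b]:
            continue  # swap must keep both edges between modules
        A[a, b] = A[b, a] = A[c, d] = A[d, c] = 0
        A[a, d] = A[d, a] = A[c, b] = A[b, c] = 1
        between[k1] = (a, d) if a < d else (d, a)
        between[k2] = (c, b) if c < b else (b, c)
    return A
```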

Results:

At criticality (α = 1), where the brain is believed to operate (Cocchi, 2017), higher-order hierarchical modular networks consistently outperform lower-order modular networks, as well as degree-preserving rewired null networks (Fig. 1b, c; insets), for both memory (Fig. 1b) and multitasking (Fig. 1c) capacity (two-tailed Wilcoxon-Mann-Whitney [WMW] tests, p < 0.05).

Further analysis of activity dynamics at criticality reveals that higher-order hierarchical modular networks exhibit longer and more diverse neural timescales (Murray, 2014; Fig. 1d), higher maximal Lyapunov exponents, indicating closer proximity to criticality (Vogt, 2022; Fig. 1e), and increased active information storage (Boedecker, 2012; Fig. 1f; two-tailed WMW tests, p < 0.05). These properties align with a more flexible memory system.
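As one illustration of how such dynamical measures can be obtained, the sketch below estimates a node's intrinsic timescale by fitting an exponential decay to the autocorrelation of its activity, in the spirit of Murray (2014); the fitting procedure shown is an assumption and may differ from the exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def intrinsic_timescale(x, max_lag=50):
    """Estimate a neural timescale tau by fitting A*exp(-lag/tau) + B
    to the autocorrelation function of a node's activity (cf. Murray, 2014)."""
    x = x - x.mean()
    lags = np.arange(1, max_lag + 1, dtype=float)
    acf = np.array([np.corrcoef(x[:-lag], x[lag:])[0, 1]
                    for lag in range(1, max_lag + 1)])

    def decay(lag, A, tau, B):
        return A * np.exp(-lag / tau) + B

    params, _ = curve_fit(decay, lags, acf, p0=(1.0, 5.0, 0.0), maxfev=10000)
    return params[1]  # tau, in units of time steps
```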

Consistent with these synthetic results, in empirical brain networks, hierarchical modular surrogates also show a higher memory capacity than strictly modular surrogates at criticality (Fig. 1h; two-tailed WMW tests, p < 10⁻⁷). These results might explain recent reports showing that reservoirs constrained with human brain connectivity patterns perform optimally near criticality (Suárez, 2021).
Supporting Image: OHBM_HMN_RC.png
 

Conclusions:

Altogether, across multiple benchmarks, these results show that hierarchical modularity can yield computationally advantageous functional properties, providing insight into the relation between brain structure and function with potential applications for neuromorphic engineering.

Learning and Memory:

Learning and Memory Other

Modeling and Analysis Methods:

Connectivity (eg. functional, effective, structural) 1
Diffusion MRI Modeling and Analysis 2

Keywords:

Cognition
Computational Neuroscience
Machine Learning
Memory
Modeling
Other - hierarchical modularity; reservoir computing; criticality

1|2 Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I do not want to participate in the reproducibility challenge.

Please indicate below if your study was a "resting state" or "task-activation" study.

Other

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Were any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Were any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Diffusion MRI
Computational modeling

Provide references using APA citation style.

Blondel, V. D. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10), P10008.
Boedecker, J. (2012). Information processing in echo state networks at the edge of chaos. Theory in Biosciences, 131, 205-213.
Cocchi, L. (2017). Criticality in the brain: A synthesis of neurobiology, models and cognition. Progress in Neurobiology, 158, 132-152.
Hilgetag, C. C. (2020). ‘Hierarchy’ in the organization of brain networks. Philosophical Transactions of the Royal Society B, 375(1796), 20190319.
Holland, P. W. (1983). Stochastic blockmodels: First steps. Social Networks, 5(2), 109-137.
Lukoševičius, M. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3), 127-149.
Murray, J. D. (2014). A hierarchy of intrinsic timescales across primate cortex. Nature Neuroscience, 17(12), 1661-1663.
Suárez, L. E. (2021). Learning function from structure in neuromorphic networks. Nature Machine Intelligence, 3(9), 771-786.
Van Essen, D. C. (2013). The WU-Minn Human Connectome Project: An overview. NeuroImage, 80, 62-79.
Vogt, R. (2022). On Lyapunov Exponents for RNNs: Understanding information propagation using dynamical systems tools. Frontiers in Applied Mathematics and Statistics, 8, 818799.

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No