Generative models of brain networks

Presenter: Richard Betzel
Indiana University
Bloomington, IN
United States
 
Sunday, Jun 23: 9:00 AM - 1:00 PM
Educational Course - Half Day (4 hours) 
COEX 
Room: ASEM Ballroom 201 
Brain networks are high-dimensional (many nodes, many edges), but their topological features can be summarized succinctly using network and graph-theoretic metrics. It is commonplace to test whether a topological feature X could be explained by chance using network-based null models, most often rewiring-based models that hold constant low-level features of the network (e.g., density and degree sequence) but otherwise randomize its edges.
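A rewiring-based null model of this kind can be sketched in a few lines. The example below is a minimal illustration, not part of the course material: it uses `networkx`'s `double_edge_swap` (Maslov-Sneppen swaps) on a synthetic small-world graph that stands in for an empirical brain network, and checks that the degree sequence (and hence density) survives the randomization while higher-order features are free to change.

```python
import networkx as nx

# A synthetic stand-in for an empirical brain network
# (any undirected graph would do here).
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)
orig_degrees = dict(G.degree())

# Degree-preserving rewiring: each swap takes two edges (a,b), (c,d)
# and rewires them to (a,d), (c,b), preserving every node's degree
# while otherwise randomizing the topology.
G_null = G.copy()
nx.double_edge_swap(G_null, nswap=10 * G.number_of_edges(),
                    max_tries=10**5, seed=1)

# Low-level features are held constant ...
assert dict(G_null.degree()) == orig_degrees

# ... while higher-order features (e.g., clustering) typically differ,
# which is exactly what the null comparison is designed to test.
emp_clustering = nx.average_clustering(G)
null_clustering = nx.average_clustering(G_null)
```

A feature of the empirical network is then deemed non-trivial if it falls outside the distribution obtained from an ensemble of such rewired nulls.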

Generative models, in contrast, construct synthetic networks from a set of (simple) mechanistic wiring rules. The synthetic networks can then be compared with the empirical (observed) network in terms of their topological features; in general, the aim of generative modeling is to minimize the reconstruction error between the synthetic and observed networks. Because the wiring rules are specified by the user, generative models make it possible to adjudicate between distinct hypotheses of network formation, growth, and evolution.
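One common way to operationalize "reconstruction error" is to compare distributions of topological statistics between the synthetic and observed networks. The sketch below is illustrative rather than prescriptive: it defines a hypothetical energy as the maximum Kolmogorov-Smirnov statistic across a few node-level distributions (the specific statistics chosen here are assumptions for the example, not a fixed recipe).

```python
import networkx as nx
from scipy.stats import ks_2samp

def model_fit_energy(G_obs, G_syn):
    """Hypothetical discrepancy measure: the worst (maximum) KS statistic
    across several topological distributions. Lower values indicate a
    closer match between synthetic and observed networks."""
    distributions = (
        lambda G: [d for _, d in G.degree()],             # degree
        lambda G: list(nx.clustering(G).values()),        # clustering
        lambda G: list(nx.betweenness_centrality(G).values()),  # centrality
    )
    return max(ks_2samp(f(G_obs), f(G_syn)).statistic for f in distributions)

# Example: score a random-graph "model" against a small-world "observation".
G_obs = nx.watts_strogatz_graph(200, 6, 0.1, seed=0)
G_syn = nx.gnm_random_graph(200, G_obs.number_of_edges(), seed=1)
energy = model_fit_energy(G_obs, G_syn)  # KS statistics lie in [0, 1]
```

Fitting a generative model then amounts to searching its parameter space for the settings that minimize such an energy.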

In this talk I will review several classical generative models from network science: random graphs, small-world models, preferential attachment, and geometric models. I will then focus on a class of generative models that has been applied widely to brain network data. Briefly, these models balance wiring-cost minimization against topological value to achieve a broad correspondence between synthetic and observed brain networks. I will close by discussing future directions for generative modeling of brain network data.
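The cost-topology trade-off can be illustrated with a minimal growth model. In the sketch below, everything beyond the general idea is an assumption made for the example: node coordinates are random, the parameters `eta` (cost penalty) and `gamma` (topology reward) are hypothetical, and the topology term is a simple degree product (published models often use other terms, such as a matching index).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
coords = rng.random((n, 3))  # stand-in for regional centroids
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

eta, gamma = -2.0, 1.0  # hypothetical: eta < 0 penalizes long (costly) edges
m_target = 200          # number of edges to place
A = np.zeros((n, n), dtype=int)
iu = np.triu_indices(n, k=1)  # candidate node pairs (upper triangle)

for _ in range(m_target):
    k = A.sum(axis=1)
    # Wiring score balances cost (distance^eta) against topology
    # (here, a degree-product reward (k_u * k_v + 1)^gamma).
    score = (dist[iu] ** eta) * ((k[iu[0]] * k[iu[1]] + 1.0) ** gamma)
    score[A[iu] == 1] = 0.0          # forbid duplicate edges
    p = score / score.sum()          # normalize to a probability
    idx = rng.choice(len(p), p=p)    # sample one new edge
    u, v = iu[0][idx], iu[1][idx]
    A[u, v] = A[v, u] = 1
```

Sweeping `eta` and `gamma` and scoring each synthetic network against the observed one is how such models are typically fit; distinct wiring rules can then be compared as competing hypotheses.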