In the past two decades, brain connectomics has evolved into a major concept in neuroscience, with a growing number of in vivo studies of structural, functional, and, more recently, molecular networks in the human brain. The potential of connectivity metrics as biomarkers for clinical applications has been frequently suggested. However, most connectivity metrics have remained confined to proof-of-concept, exploratory research, with very limited systematic replication and validation, and ultimately little translation to clinical practice.
In this symposium we discuss some of the most pressing methodological considerations that neuroscientists still need to address in order to obtain robust connectivity markers for inference and prediction. Both statistical and biological considerations will be covered, on the grounds that predicting phenotypes from activation/connectivity data and understanding the mechanisms underlying inter-individual connectivity differences are both necessary to obtain robust, generalizable topographic substrates.
First, Simon Eickhoff will discuss the confounders and caveats in connectivity research and brain-based predictions of individual (neuropsychological or clinical) phenotypes. Second, Christian Habeck will examine the effects of sample size and heterogeneity on network topographies derived from functional and structural data, using real and simulated data and challenging the assumption that increasing sample size necessarily improves prediction and measurement precision. Third, Alessandro Gozzi will address common misconceptions in the interpretation of brain activity and connectivity, presenting results of multimodal studies that question the biological basis of hypo- and hyperconnectivity. Finally, Débora Peretti will discuss the challenges in the clinical application of structural, functional, and molecular connectivity biomarkers, the steps needed to move from research to clinical application, and how to implement them optimally.
The audience will be able to:
1) Learn the current challenges and caveats of brain connectivity metrics used in neuroimaging research, particularly in clinically oriented research.
2) Understand brain connectivity findings and interpret them critically.
3) Design robust brain connectivity studies that account for major technical and biological confounders.
The target audience for this symposium is:
1) Neuroscientists at all career stages interested in mapping connectivity in the human brain, including in patients with neurodegenerative or psychiatric disorders.
2) Clinical researchers at all career stages interested in the potential clinical applications of brain connectivity.
In vivo analyses of structural, functional, and, more recently, molecular connectivity in the human brain, as well as their changes in patients with neurological or psychiatric disorders, have opened new avenues for understanding large-scale integration in the human brain and its dysfunction. Moreover, combined with machine-learning methods for predicting individual phenotypes from such data, they open the possibility of inference on unobserved traits in single subjects. While the prospects of these approaches for clinical and neuropsychological assessment are substantial, the major part of this talk will focus on several critical yet often underappreciated challenges for such endeavours. These include, on the one hand, technical and biological aspects that may undermine the validity of prediction results, in particular due to the inherent low-dimensional structure of biological variability. On the other hand, ethical, legal, and societal aspects will ultimately shape practical adoption but need stronger consideration in the development of new pipelines if these are to move beyond proof-of-concept work.
Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University Düsseldorf, Düsseldorf, North Rhine-Westphalia, Germany
Inter-subject covariance analysis is indispensable for the analysis of cross-sectional data sets and has enjoyed successful application in neuroimaging analytics, with out-of-sample prediction going back to the late 1980s. Inter-subject covariance is at the heart of many applications, giving rise to a multitude of clustering and multivariate-decomposition approaches. Current studies focus on deriving group-level covariance patterns without fitting deeper model architectures that might be prone to overfitting. Since prediction of endpoints such as cognitive performance or diagnostic scores is only one side of the coin in imaging neuroscience, studies have also focused on the precision and accuracy of the derived activation/connectivity patterns. This talk will present results from a variety of data modalities (task-based activation with on the order of hundreds of subjects, United Kingdom Biobank volumetric data with on the order of thousands of subjects, and simulated synthetic data) used to probe the stability of estimated covariance patterns and out-of-sample prediction as a function of sample size. While increasing sample size resulted in monotonic improvement of derived pattern stability and out-of-sample endpoint prediction, asymptotic limits were reached for all metrics before the available data were exhausted. This contradicts the assumption that any targeted group-level central tendency becomes ever sharper as noise sources are minimized with a higher number of subjects. Instead, irreducible heterogeneity imposes limits on analytic frameworks oriented towards group-level covariance patterns. A simple toy model of synthetic covariance data with different noise parameters can elucidate the behaviour of topographic stability and held-out outcome prediction observed in real-world data under variation of training sample size.
For medium sample sizes, on the order of hundreds of subjects, inter-subject covariance analysis performs well. For larger data sets, it represents a useful benchmark for more sophisticated deep-learning architectures that allow person-specific variation in topographic patterns.
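The asymptotic behaviour described above can be illustrated with a minimal toy simulation — a sketch under assumed parameters, not the speaker's actual model. Subjects share one group-level topographic pattern; a behavioural endpoint shares the subjects' pattern-expression scores but also carries variance that imaging cannot capture (the irreducible heterogeneity); the group pattern is estimated by SVD at increasing training sample sizes. All variable names and parameter values (`n_regions`, `noise_sd`, `hetero_sd`) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions = 50
noise_sd = 1.0     # measurement noise: averaged away as subjects accumulate
hetero_sd = 0.7    # endpoint variance unrelated to imaging: irreducible

# Hypothetical "true" group-level topographic pattern (unit norm)
true_pattern = rng.standard_normal(n_regions)
true_pattern /= np.linalg.norm(true_pattern)

def simulate(n):
    """Each subject expresses the pattern with strength `score`; the
    endpoint y shares that score but adds variance imaging cannot see."""
    score = rng.standard_normal(n)
    data = np.outer(score, true_pattern) + noise_sd * rng.standard_normal((n, n_regions))
    y = score + hetero_sd * rng.standard_normal(n)
    return data, y

def run(n_train, n_test=2000, reps=20):
    """Average pattern stability and held-out prediction over repetitions."""
    sims, corrs = [], []
    for _ in range(reps):
        Xtr, _ = simulate(n_train)
        Xte, yte = simulate(n_test)
        # group-level covariance pattern = first right singular vector
        _, _, vt = np.linalg.svd(Xtr - Xtr.mean(0), full_matrices=False)
        pat = vt[0]
        sims.append(abs(pat @ true_pattern))          # topographic stability
        proj = Xte @ pat                              # subject pattern expression
        corrs.append(abs(np.corrcoef(proj, yte)[0, 1]))
    return np.mean(sims), np.mean(corrs)

for n in (25, 100, 400, 1600):
    s, c = run(n)
    print(f"n={n:5d}  pattern stability={s:.2f}  held-out prediction r={c:.2f}")
```

Under this generative model, pattern stability keeps improving with the training sample size, whereas held-out prediction plateaus at a ceiling set by the heterogeneity term — more subjects sharpen the topography but cannot buy back variance the imaging data never contained.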
This talk will dispel some misconceptions related to the use of fMRI connectivity to infer underlying patterns of brain activity and connectivity. Specifically, empirical evidence of topographic convergence between structural and functional connectivity has prompted the widely held assumption that structural connectivity (i.e., axonal output) drives fMRI connectivity. Within this framework, reduced or increased activity in a brain region should thus result in reduced (hypo-) or increased (hyper-) connectivity with the region's targets. This talk will challenge this conceptual framework by presenting the results of chemogenetic and electrophysiological studies in mice showing how fMRI hyper- and hypoconnectivity may counterintuitively reflect reduced and increased cortical activity, respectively. This updated framework may offer novel opportunities to biologically decode fMRI dysconnectivity in human disorders.
Brain connectivity has been under study for over 30 years. Studies have repeatedly shown how connectivity changes from healthy ageing to neurodegenerative and psychiatric disorders and have further discussed its clinical applicability. From providing diagnostic and prognostic information and clustering patients across heterogeneous disease stages to outlining targeted interventions, assessing treatment effects, and pre-operative mapping, research studies have shown the added value of understanding the mechanisms underlying brain connections. Yet, despite the many applications that have been suggested, from subject selection to the assessment of drugs in clinical trials, connectivity biomarkers have found their way into clinical research only in rare exceptions. This talk will focus on the possible use of biomarkers based on brain connectivity in clinical research: the challenges for their clinical application, the steps needed to go from research to application, and how to implement them optimally. Scaled subprofile modelling using principal component analysis will serve as an example of a marker that has been systematically and successfully validated and is largely ready for use in clinical research, and its performance will be compared with other connectivity metrics.
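For orientation, the scaled-subprofile-model recipe mentioned above follows well-documented steps: log-transform positive-valued regional data, double-center into subject residual profiles, and extract covariance patterns by PCA; new subjects are then scored by forward-applying a derived pattern. The sketch below is a generic illustration of these steps under assumed inputs, not the validated clinical pipeline; all function and variable names are hypothetical:

```python
import numpy as np

def ssm_pca(data, n_components=1):
    """Scaled Subprofile Model / PCA, sketched.
    data: subjects x regions matrix of strictly positive values
    (e.g., regional metabolism or perfusion)."""
    logged = np.log(data)
    # double-center: remove each subject's global mean, then the group mean profile
    centered = logged - logged.mean(axis=1, keepdims=True)
    gmp = centered.mean(axis=0)          # group mean profile
    srp = centered - gmp                 # subject residual profiles
    _, _, vt = np.linalg.svd(srp, full_matrices=False)
    patterns = vt[:n_components]         # topographic covariance patterns
    scores = srp @ patterns.T            # subject expression scores
    return patterns, scores, gmp

def score_new_subject(profile, pattern, gmp):
    """Topographic profile rating: forward-apply a derived pattern
    to an unseen subject's positive-valued regional profile."""
    logged = np.log(profile)
    centered = logged - logged.mean()
    return (centered - gmp) @ pattern
```

In practice, the component (or combination of components) is selected for its ability to discriminate groups or track an endpoint, and it is the forward application to unseen subjects that makes a derived pattern usable as a prospective biomarker rather than a post hoc description.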
University of Geneva, Geneva, Switzerland