NeuroAI: artificial neural networks as models of the brain in cognitive neuroscience

Isil Poyraz Bilgin, Dr., Organizer
Research Center of the University Institute of Geriatrics of Montreal
Montreal, Quebec 
Canada
 
Pierre Bellec, Co-Organizer
CRIUGM, University of Montreal
Psychologie
Montreal, Québec 
Canada
 
Elizabeth DuPre, Co-Organizer
Stanford University
Stanford, CA 
United States
 
Saturday, Jul 22: 8:00 AM - 5:00 PM
2301 
Educational Course - Full Day (8 hours) 
Palais 
Room: 511BE 
While the brain’s rich information processing capabilities served as the key inspiration in developing artificial intelligence (AI), new AI models have yet to revolutionize our understanding of the brain. Despite significant ongoing work applying deep learning models to large, structured neuroimaging datasets to improve, for example, the diagnosis of disease or the prognosis of patient outcomes, the link between these models and normative brain function remains unclear. NeuroAI is a relatively new subfield that aims to more directly bridge cognitive neuroscience and AI by asking whether artificial models can serve as testbeds for new cognitive hypotheses, revealing fundamental insights into information processing across biological and artificial systems. Today, an increasing number of neuroscientists implement AI in their research to model complex and multidimensional stimuli, enabling direct comparisons between learned artificial and biological representations using various methods.

While these findings have generated excitement in the field, the supporting methods remain concentrated in a few research centers. This course aims to increase the accessibility of NeuroAI, both in (1) framing cognitive questions with artificial models and (2) applying these methods to neuroimaging data. Our speakers are field experts in artificial intelligence models and their incorporation into cognitively driven research programs. They will discuss the challenges and opportunities in analyzing these models alongside neuroimaging data, with desired learning outcomes including the major theoretical frameworks and models used in NeuroAI; choice of the appropriate model, dataset, and data processing steps; and practical and philosophical considerations in interpreting the relative similarity between neuroimaging data and neural network models.

Objective

(1) Participants will gain an overview of the theoretical motivations for how artificial intelligence models can serve as a toolset for answering new questions in cognitive neuroscience research.
(2) Participants will gain practical insights into framing their scientific questions in NeuroAI, including the choice of appropriate artificial intelligence models, datasets, and data processing steps.
(3) Participants will leave with an understanding of the opportunities in incorporating artificial intelligence into cognitive neuroscience, alongside the potential methodological and ethical pitfalls.

Target Audience

This course will be useful for researchers who are interested in leveraging rich artificial intelligence models to understand cognition and behavior but aren’t sure where to begin in this space. It is largely geared towards those who have some prior experience in statistical inference and machine learning and who are interested in building their expertise with artificial models. Researchers with a strong background in these areas can also benefit from our focus on the theoretical motivations for NeuroAI, potential pitfalls in practice, and open issues for the field. 

Presentations

Course introduction and overview of NeuroAI

NeuroAI research makes use of artificial neural networks (ANNs) developed by the AI community to model the activity of the brain or, conversely, uses observations of the brain to infer better, more effective AI models. This broad research topic encompasses a rich variety of application tasks, ranging from vision and language to, increasingly, domains such as working memory and control tasks. NeuroAI also incorporates a wide range of techniques, from established analyses predicting brain activity using ANNs (brain encoding) and inferring stimuli from brain activity using ANNs (brain decoding), to emerging topics such as end-to-end training of ANNs to mimic brain activity or crafting stimuli that maximize brain responses using ANNs. This introduction will provide prominent examples of NeuroAI research studies to illustrate major application domains and techniques, and will outline how the different lectures of the course will expose the audience to the most salient areas of the NeuroAI research space.
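
As a concrete illustration of the two core analyses named above, the following minimal sketch fits a brain-encoding model (ANN features to voxel responses) and a brain-decoding model (voxel responses to stimulus category). All names, shapes, and randomly generated arrays are placeholders for illustration, not course materials.

    import numpy as np
    from sklearn.linear_model import Ridge, LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical data: ANN layer activations and fMRI responses per stimulus.
    rng = np.random.default_rng(0)
    n_stimuli, n_features, n_voxels = 200, 512, 1000
    features = rng.standard_normal((n_stimuli, n_features))  # ANN activations
    bold = rng.standard_normal((n_stimuli, n_voxels))        # voxel responses
    labels = rng.integers(0, 2, n_stimuli)                   # stimulus categories

    X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(
        features, bold, labels, test_size=0.25, random_state=0)

    # Brain encoding: predict voxel responses from ANN features.
    encoder = Ridge(alpha=10.0).fit(X_tr, Y_tr)
    print("encoding R^2:", encoder.score(X_te, Y_te))

    # Brain decoding: predict the stimulus category from voxel responses.
    decoder = LogisticRegression(max_iter=1000).fit(Y_tr, y_tr)
    print("decoding accuracy:", decoder.score(Y_te, y_te))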

Presenter

Pierre Bellec, CRIUGM, University of Montreal
Psychologie
Montreal, Québec 
Canada

Technical and interpretive issues for constructing brain models from AI systems

The alignment of the intellectual goals of neuroscience and AI has led to a long and fruitful exchange of ideas. Over the years, neuroscience has provided AI with brain-inspired computational motifs that have been implemented and scaled to solve hard tasks. Recently, these implemented solutions to hard tasks have been converted into models that make highly accurate predictions of activity in sensing and acting brains. This lecture will focus on technical and interpretive issues around constructing brain models out of AI systems. We will begin with a discussion of the data and compute requirements for building brain models based on AI systems, followed by a discussion of statistical techniques for linking high-dimensional AI systems with high-dimensional datasets. We will then discuss some of the challenges of interpreting mappings between components of the brain and AI systems, exploring the tension between mechanistic interpretations (e.g., “this piece of a neural network is like that piece of a brain”) and agnostic interpretations (e.g., “this AI system includes representations that span the space of measured brain activities”).
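
One way to make the mechanistic/agnostic tension concrete is to fit a separate cross-validated encoding model from each ANN layer to each brain region and inspect the resulting score matrix. The sketch below does this on random placeholder arrays; the layer names, region names, and dimensions are illustrative assumptions, not data from the lecture.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_stimuli = 150
    layers = {f"layer{i}": rng.standard_normal((n_stimuli, 256)) for i in range(4)}
    rois = {name: rng.standard_normal(n_stimuli) for name in ["V1", "IT", "PFC"]}

    # Cross-validated encoding score for every (layer, region) pair.
    scores = np.zeros((len(layers), len(rois)))
    for i, (layer_name, X) in enumerate(layers.items()):
        for j, (roi_name, y) in enumerate(rois.items()):
            model = RidgeCV(alphas=np.logspace(-2, 4, 7))
            scores[i, j] = cross_val_score(model, X, y, cv=5).mean()

    # A mechanistic reading asks which layer best explains which region;
    # an agnostic reading only asks whether the feature space as a whole
    # spans the measured responses.
    print(scores)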

Presenter

Thomas Naselaris, Medical University of South Carolina
Charleston, SC
United States

Brain encoding and decoding using biologically constrained DNNs

A major goal in cognitive neuroscience research is to better understand the neural basis of cognitive functions. Artificial intelligence provides new avenues for modeling, simulating, or even modulating the comprehensive processes of human cognition. By leveraging human brain atlas and connectome priors, we proposed a biologically constrained graph neural network (BGNN) model to effectively combine local and distributed brain activities. The BGNN model learns multistage (in time) and multilevel (in space) latent representations transforming from sensory processing to representational abstraction (encoding phase) and predicts cognitive states from the embedded representations at fine timescales (decoding phase). Moreover, it uncovered inter-subject-aligned, behaviorally relevant neural representations underpinning cognitive processes and achieved better decoding of cognitive tasks. This approach has yielded promising findings in representational learning of cognitive function and biologically meaningful interpretations of AI modeling of human cognition. In this hands-on session, I will introduce how to build such a biologically constrained AI model by incorporating human brain atlas and connectome priors, how to optimize the DNN architecture for brain encoding and decoding, and how to interpret the representations learned by AI models.
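
The following minimal PyTorch sketch illustrates the general idea of constraining a graph neural network with a connectome prior: regional signals are mixed only along (normalized) connectome edges before being read out into a cognitive-state prediction. The architecture, dimensions, and random adjacency matrix are hypothetical stand-ins, not the presenter's BGNN implementation.

    import torch
    import torch.nn as nn

    n_regions, n_timepoints, n_classes = 400, 50, 8

    # Hypothetical connectome prior on a brain atlas (symmetric, nonnegative).
    adjacency = torch.rand(n_regions, n_regions)
    adjacency = (adjacency + adjacency.T) / 2

    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2.
    a_hat = adjacency + torch.eye(n_regions)
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).rsqrt())
    a_hat = d_inv_sqrt @ a_hat @ d_inv_sqrt

    class BrainGraphConv(nn.Module):
        """One graph convolution: mix each region's signal with its connectome neighbors."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x):                      # x: (batch, regions, features)
            return torch.relu(self.linear(a_hat @ x))

    model = nn.Sequential(
        BrainGraphConv(n_timepoints, 64),          # encoding: latent representations
        BrainGraphConv(64, 32),
        nn.Flatten(),                              # (batch, regions * 32)
        nn.Linear(n_regions * 32, n_classes),      # decoding: predicted cognitive state
    )
    logits = model(torch.randn(2, n_regions, n_timepoints))
    print(logits.shape)                            # torch.Size([2, 8])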

Presenter

Yu Zhang, Zhejiang Lab
Artificial Intelligence Research Institute
Hangzhou, Zhejiang 
China

Neural Network Language Models Capture Language Processing in Human Cortex

In recent years, a strong alignment has been observed between language model representations and brain activity recordings of individuals processing the same text. This has generated considerable interest in using these models to understand more about how the brain comprehends language. However, this goal remains challenging due to the complexity of brain activity and the uninterpretable nature of network representations. We explain how to use "computational controls" to make targeted hypothesis tests using encoding models built on representations extracted from neural networks. We further show how to better learn these encoding models and how to interpret them to explore the tuning of different brain regions.
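
A computational control can be as simple as comparing the cross-validated brain-prediction performance of the full language-model features against a reduced feature set from which the property of interest has been removed. The sketch below contrasts hypothetical contextual features with static word embeddings on random placeholder data; all names and dimensions are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n_words = 300
    contextual = rng.standard_normal((n_words, 128))  # e.g., transformer states
    static = rng.standard_normal((n_words, 128))      # e.g., word embeddings (control)
    bold = rng.standard_normal((n_words, 50))         # brain responses (words x voxels)

    def encoding_score(features, brain):
        """Mean cross-validated R^2 of a ridge encoding model over all voxels."""
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        return cross_val_score(model, features, brain, cv=5).mean()

    full = encoding_score(contextual, bold)
    control = encoding_score(static, bold)
    # The gap between the two scores indicates how much of the alignment
    # depends on information absent from the control features.
    print(f"full: {full:.3f}  control: {control:.3f}  gap: {full - control:.3f}")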

Presenter

Leila Wehbe, Carnegie Mellon University
Pittsburgh, PA
United States

Testing representational models via representational similarity analysis

Brains and artificial neural networks (ANNs) represent their inputs, such as images and sounds, in the multidimensional space of their neurons' activations. Examining these representational spaces can help us understand how these networks work and identify similarities between them. The technique of representational similarity analysis (RSA) is commonly used in NeuroAI to study both biological and artificial neural networks. In this talk, we will discuss the challenges and recent advances in using RSA to test representational models. Specifically, we will focus on the importance of the Riemannian manifold in evaluating representational distances in RSA. Comparisons between brain regions or between layers of a computational neural network are frequently made in NeuroAI research, and considering the underlying geometry can lead to different conclusions about representational similarities. We will demonstrate how taking into account the geometry of all positive semi-definite (PSD) matrices when quantifying the relationship between representational matrices can significantly improve RSA. We will begin by providing a brief introduction to Riemannian geometry and the geodesic distance between two PSD matrices. Using simulations and real data, we will illustrate situations where comparing representational matrices using the Riemannian distance improves RSA.
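
For reference, the geodesic (affine-invariant) distance between two symmetric positive-definite representational matrices A and B is ||logm(A^-1/2 B A^-1/2)||_F. The sketch below computes it for two random placeholder matrices and contrasts it with the ordinary Euclidean (Frobenius) distance; no data from the talk is used.

    import numpy as np
    from scipy.linalg import fractional_matrix_power, logm

    def riemannian_distance(a, b):
        """Affine-invariant geodesic distance: ||logm(A^-1/2 B A^-1/2)||_F."""
        a_inv_sqrt = fractional_matrix_power(a, -0.5)
        return np.linalg.norm(logm(a_inv_sqrt @ b @ a_inv_sqrt), "fro")

    rng = np.random.default_rng(4)

    def random_spd(n):
        """A well-conditioned random symmetric positive-definite matrix."""
        m = rng.standard_normal((n, n))
        return m @ m.T + n * np.eye(n)

    a, b = random_spd(10), random_spd(10)
    print("Riemannian distance:", riemannian_distance(a, b))
    print("Euclidean distance :", np.linalg.norm(a - b, "fro"))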
 

Presenter

Hamed Nili, Dr., University Medical Center Hamburg-Eppendorf
Department of Excellence for Neural Information Processing
Hamburg
Germany

Language modeling beyond language modeling

Language models that have been trained to predict the next word over billions of text documents have been shown to also significantly predict brain recordings of people comprehending language. Understanding the reasons behind the observed similarities between language in machines and language in the brain can lead to more insight into both systems. Additionally, the human language system integrates information from multiple sensory modalities, which puts text-only language models at a fundamental disadvantage as cognitive models.


In this talk, we will discuss a series of recent works that make progress on these questions along different dimensions. The unifying principle among these works, which allows us to make scientific claims about why one black box (a language model) aligns with another black box (the human brain), is our ability to make specific perturbations in the language model and observe their effect on the alignment with the brain. Building on this approach, these works reveal that the observed alignment is due to more than next-word prediction and word-level semantics and is partially related to the joint processing of select linguistic information in both systems. Furthermore, we find that brain alignment can be improved by training a language model to summarize narratives and to incorporate auditory and visual information from an ongoing event. Taken together, these works make progress towards determining the necessary and sufficient conditions under which language in machines aligns with language in the brain.
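
To make the perturbation logic concrete, the hypothetical sketch below removes the part of the language-model representation that is linearly predictable from word-level (static) embeddings and re-measures brain alignment; any alignment that survives cannot be attributed to word-level semantics alone. All arrays, names, and dimensions are illustrative assumptions, not the works discussed in the talk.

    import numpy as np
    from sklearn.linear_model import LinearRegression, RidgeCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_words = 300
    lm_features = rng.standard_normal((n_words, 128))    # contextual LM states
    word_features = rng.standard_normal((n_words, 64))   # word-level embeddings
    bold = rng.standard_normal((n_words, 50))            # brain responses

    # Perturbation: regress the LM features on word-level features and keep
    # the residuals, i.e., the component not explained by word identity.
    predicted = LinearRegression().fit(word_features, lm_features).predict(word_features)
    residual_lm = lm_features - predicted

    def alignment(features):
        """Cross-validated R^2 of a ridge encoding model of the brain responses."""
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        return cross_val_score(model, features, bold, cv=5).mean()

    print("original alignment :", alignment(lm_features))
    print("perturbed alignment:", alignment(residual_lm))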

Presenter

Mariya Toneva, Dr., Max Planck Institute for Software Systems
Saarbruecken, Saarland
Germany

Aligning representations across individual models

Computational neuroscience is focused on uncovering general organizational principles supporting neural activity and behavior; however, uncovering these principles relies on making appropriate comparisons across individuals. This presents a core technical and conceptual challenge, as individuals differ along nearly every relevant dimension: from the number of neurons supporting computation to the exact computation being performed. Similarly, in artificial neural networks, multiple initializations of the same architecture, trained on the same data, may recruit non-overlapping hidden units, complicating direct comparisons of trained networks.

In this talk, I will introduce techniques for aligning representations in both brains and in machines. I will argue for the importance of considering alignment methods in developing a comprehensive science at the intersection of artificial intelligence and neuroscience that reflects our shared goal of understanding principles of computation. Finally, I will consider current applications and limitations of these techniques, discussing relevant future directions for this area.
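
One of the simplest techniques in this family is orthogonal Procrustes alignment, which finds the rotation that best maps one representational space onto another. The sketch below recovers a simulated rotation between two hypothetical "individuals"; the data, names, and choice of method are illustrative assumptions rather than the specific techniques covered in the talk.

    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    rng = np.random.default_rng(5)
    n_stimuli, n_units = 100, 40

    # Hypothetical response matrices (stimuli x units) for two individuals.
    subject_a = rng.standard_normal((n_stimuli, n_units))
    rotation, _ = np.linalg.qr(rng.standard_normal((n_units, n_units)))
    subject_b = subject_a @ rotation + 0.1 * rng.standard_normal((n_stimuli, n_units))

    # Unaligned comparison hides the shared structure; alignment recovers it.
    r_before = np.corrcoef(subject_a.ravel(), subject_b.ravel())[0, 1]
    mapping, _ = orthogonal_procrustes(subject_b, subject_a)   # maps B onto A
    r_after = np.corrcoef(subject_a.ravel(), (subject_b @ mapping).ravel())[0, 1]
    print(f"correlation before alignment: {r_before:.2f}, after: {r_after:.2f}")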
 

Presenter

Elizabeth DuPre, Stanford University
Stanford, CA
United States

Ethics in NeuroAI: Making NeuroAI Responsible

The past decade has witnessed a rapid revolution in artificial intelligence and its integration into the biological sciences. AI-powered methodologies and tools serve as important facilitators in decoding brain data for clinical and commercial purposes. However, one of the core remaining challenges in building intelligent agents is the responsible handling of training data and resources.

NeuroAI cannot be considered a separate discipline but rather a multidisciplinary effort at the crossroads of AI, computer science, neuroscience, psychology, linguistics, philosophy, law, and ethics. As researchers, industry professionals, and members of the community, we are all required to follow responsible practices in data use and protection, fair and transparent resource allocation, and the elimination of the biases and discrimination that are inherent in AI applications. In this talk, we will discuss the current and future ethical concerns NeuroAI faces today, share current initiatives in making NeuroAI more responsible and fair, and help attendees understand the steps needed to adopt ethical practices in their future work in this area.

Presenter

Isil Poyraz Bilgin, Dr., Research Center of the University Institute of Geriatrics of Montreal
Montreal, Quebec
Canada

Navigating the Future of NeuroAI: Exploring the Role of Neuroimaging Research

The panel discussion aims to identify the current gaps and challenges within the field of NeuroAI and to explore potential future directions of research. The panel will specifically focus on the significance and impact of neuroimaging research in advancing the field. The ultimate objective of this discussion is to equip early-career researchers with valuable insights and guidance for a successful scientific career in NeuroAI.

The panel will include an interactive component: questions will be collected from the audience throughout the day via an online query platform and selected by a moderator. The moderator will also prepare additional questions in case not enough are raised by the audience during the session.

Presenter

Shahab Bakhtiari, Dr., Université de Montréal
Montreal, Quebec
Canada