Individual differences in prefrontal coding of visual features

Presented During:

Wednesday, June 26, 2024: 11:30 AM - 12:45 PM
COEX  
Room: Grand Ballroom 104-105  

Poster No:

2539 

Submission Type:

Abstract Submission 

Authors:

Qi Lin1, Hakwan Lau1

Institutions:

1RIKEN, Wako, Saitama

First Author:

Qi Lin  
RIKEN
Wako, Saitama

Co-Author:

Hakwan Lau  
RIKEN
Wako, Saitama

Introduction:

Sitting at the apex of the cortical hierarchy (Young, 1992), the lateral prefrontal cortex (LPFC) is most commonly associated with complex and high-level cognition. Does LPFC also play a role in visual perception?
Studies in nonhuman primates have characterized the representation of visual information in LPFC, even in the absence of an explicit cognitive task (Riley et al., 2017; Tsao et al., 2008; Haile et al., 2019). In addition, feedback signals from LPFC appear necessary for recognizing objects under challenging conditions (Kar & DiCarlo, 2021). Yet to date, with few exceptions (e.g., Huth et al., 2012), how LPFC may support perception in humans remains understudied.
To fill this gap, we built encoding models that use visual features extracted from a deep neural network to predict brain activity in LPFC. We then contrasted the tuning profiles of LPFC for visual stimuli with those of the visual cortex. Strikingly, we found that the degree of individual variability was higher in LPFC: the stimuli predicted to drive the maximal overall LPFC response varied substantially across individuals.

Methods:

Results presented here are based on data from the Natural Scenes Dataset (NSD; Allen et al., 2022). Each of the 8 subjects viewed 9,209–10,000 unique scene images while being scanned at 7T and performing a recognition memory task.
To build encoding models of LPFC within each subject (Figure 1A), we first extracted activations from the image encoder of a CLIP network (ViT-B/32 backbone; Radford et al., 2021) for the presented images. We divided the images into a training set and a test set. Separately for each vertex in a liberal LPFC mask (see the purple contours in Figure 1C), we fit a ridge regression model mapping the CLIP image features to the single-trial beta estimates evoked by the training images. We retained only the top 10% of vertices in terms of variance explained by the regression models within the training set. For each of these selected vertices, we then tested the performance of the model on the held-out images, assessed as Pearson's r between predicted and observed responses.
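As an illustration, below is a minimal Python sketch of this vertex-wise fitting and selection step, assuming OpenAI's clip package for feature extraction and scikit-learn's Ridge. The ridge penalty (alpha) and the array shapes are placeholders; the abstract does not specify them.

```python
import numpy as np
import torch
import clip  # OpenAI's CLIP package: https://github.com/openai/CLIP
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

# Load the CLIP image encoder (ViT-B/32 backbone, as in the abstract).
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_features(images):
    """Image-encoder activations for a list of PIL images."""
    batch = torch.stack([preprocess(im) for im in images]).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch)
    return feats.cpu().numpy().astype(np.float32)

def fit_vertex_models(X_train, Y_train, alpha=1000.0, top_frac=0.10):
    """Fit one ridge model per vertex and keep the top `top_frac`
    of vertices by training-set explained variance (R^2).

    X_train: (n_images, n_features) CLIP features
    Y_train: (n_images, n_vertices) single-trial beta estimates
    alpha:   ridge penalty (placeholder; not stated in the abstract)
    """
    ridge = Ridge(alpha=alpha).fit(X_train, Y_train)  # multi-output fit
    pred = ridge.predict(X_train)
    ss_res = ((Y_train - pred) ** 2).sum(axis=0)
    ss_tot = ((Y_train - Y_train.mean(axis=0)) ** 2).sum(axis=0)
    r2 = 1.0 - ss_res / ss_tot                        # per-vertex R^2
    n_keep = max(1, int(top_frac * Y_train.shape[1]))
    keep = np.argsort(r2)[-n_keep:]                   # retained vertices
    return ridge, keep

def heldout_performance(ridge, keep, X_test, Y_test):
    """Pearson's r between predicted and observed held-out responses,
    computed separately for each retained vertex."""
    pred = ridge.predict(X_test)
    return np.array([pearsonr(pred[:, v], Y_test[:, v])[0] for v in keep])
```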
Focusing on vertices within LPFC or the ventral visual stream (VVS) with cross-validated Pearson's r > 0.1, we built subject-specific encoding models to predict the average response of these predictable vertices. We then screened all 73,000 NSD images through these encoding models of LPFC/VVS activity. To quantify individual differences, we correlated the predicted responses to all images in LPFC/VVS across individuals.
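The screening and individual-difference analysis could look roughly as follows. The function names and the subject-to-model dictionary are hypothetical, and the feature matrix for all 73,000 images is assumed to come from the extraction step sketched above.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import Ridge

def average_response_model(X_train, Y_train, predictable, alpha=1000.0):
    """Encoding model for the mean response of the predictable vertices
    (those with cross-validated r > 0.1) in one region of one subject."""
    y_mean = Y_train[:, predictable].mean(axis=1)
    return Ridge(alpha=alpha).fit(X_train, y_mean)

def cross_subject_correlations(models, X_all):
    """Screen every image through each subject's regional model, then
    correlate the predicted response profiles across subject pairs.

    models: dict mapping subject ID -> fitted model for one region
    X_all:  (73000, n_features) CLIP features for all NSD images
    """
    preds = {s: m.predict(X_all) for s, m in models.items()}
    return {(a, b): np.corrcoef(preds[a], preds[b])[0, 1]
            for a, b in combinations(sorted(preds), 2)}
```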
Supporting Image: ModelPerformace_OHBM.png

Results:

Across all 8 subjects, we observed robust prediction of LPFC activity in the held-out data (Figure 1B; all ps < .001 based on 10,000 bootstrap iterations). Figure 1C shows the anatomical distribution of prediction performance for 4 subjects. Figure 1D shows the 10 parcels (defined in Glasser et al., 2016) with the largest average proportion of visually sensitive vertices (cross-validated r > 0.1).
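One plausible form of the bootstrap test is sketched below; the abstract does not spell out the resampled statistic, so this is an assumption. It resamples a subject's per-vertex held-out correlations and asks how often their mean falls at or below zero.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bootstrap_p(heldout_r, n_boot=10_000):
    """One-sided bootstrap p-value for mean held-out Pearson's r > 0.

    heldout_r: per-vertex cross-validated correlations for one subject
    (a result of 0 would be reported as p < 1/n_boot).
    """
    boot_means = np.array([
        rng.choice(heldout_r, size=heldout_r.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return (boot_means <= 0).mean()
```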
Shown in Figure 2A are the 5 images predicted to evoke the largest responses in the LPFC of 4 subjects. Inspection of these LPFC-activating images reveals striking variability across individuals: each subject's LPFC seems to prefer a different kind of stimulus. Quantitatively, the correlations of the predicted responses across individuals are lower in LPFC than in VVS (Figure 2B; Wilcoxon signed-rank test, p < .001). To demonstrate that the observed pattern is not specific to NSD, we replicated the result with a larger independent image set (Ecoset; Mehrer et al., 2021; Figure 2C; Wilcoxon signed-rank test, p < .001).
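For the region comparison, a paired one-sided Wilcoxon signed-rank test over the matched subject-pair correlations would look like the following sketch; the arrays here are illustrative placeholders, not the reported values.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative placeholders: one inter-subject correlation per subject
# pair (8 subjects -> 28 pairs), matched across the two regions.
rng = np.random.default_rng(seed=1)
lpfc_pair_r = rng.uniform(0.0, 0.4, size=28)
vvs_pair_r = rng.uniform(0.5, 0.9, size=28)

# One-sided test: are cross-subject correlations lower in LPFC than VVS?
stat, p = wilcoxon(lpfc_pair_r, vvs_pair_r, alternative="less")
print(f"Wilcoxon W = {stat:.1f}, p = {p:.2g}")
```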
Supporting Image: Ind_diff_OHBM.png

Conclusions:

Our study demonstrates an under-appreciated role of LPFC in visual processing and suggests that LPFC may underlie idiosyncrasies in how different individuals experience the visual world. Methodologically, these findings may also explain why previous group studies have often failed to observe robust visual functions in LPFC: because visual tuning there is idiosyncratic, responses may need to be modeled individually for each subject.

Novel Imaging Acquisition Methods:

BOLD fMRI 2

Perception, Attention and Motor Behavior:

Perception: Visual 1
Perception and Attention Other

Keywords:

Computational Neuroscience
Cortex
FUNCTIONAL MRI
HIGH FIELD MR
Machine Learning
Perception
Vision

1|2 Indicates the priority used for review

References:

Allen, E. J., St-Yves, G., Wu, Y., Breedlove, J. L., Prince, J. S., Dowdle, L. T., . . . Kay, K. (2022). A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nature Neuroscience, 25(1), 116–126.
Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., . . . Van Essen, D.C. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536(7615), 171–178.
Haile, T. M., Bohon, K. S., Romero, M. C., & Conway, B. R. (2019). Visual stimulus-driven functional organization of macaque prefrontal cortex. NeuroImage, 188, 427–444.
Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76(6), 1210–1224.
Kar, K., & DiCarlo, J. J. (2021). Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition. Neuron, 109(1), 164–176.e5.
Mehrer, J., Spoerer, C. J., Jones, E. C., Kriegeskorte, N., & Kietzmann, T. C. (2021). An ecologically motivated image dataset for deep learning yields better models of human vision. Proceedings of the National Academy of Sciences, 118(8), e2011417118.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., . . . Sutskever, I. (2021). Learning transferable visual models from natural language supervision. Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 8748–8763.
Riley, M. R., Qi, X.-L., & Constantinidis, C. (2017). Functional specialization of areas along the anterior–posterior axis of the primate prefrontal cortex. Cerebral Cortex, 27(7), 3683–3697.
Tsao, D. Y., Schweers, N., Moeller, S., & Freiwald, W. A. (2008). Patches of face-selective cortex in the macaque frontal lobe. Nature Neuroscience, 11(8), 877–879.
Young, M. P. (1992). Objective analysis of the topological organization of the primate cortical visual system. Nature, 358(6382), 152–155.