Wednesday, Jun 26: 11:30 AM - 12:45 PM
Oral Sessions
COEX
Room: Grand Ballroom 104-105
Presentations
Sitting at the apex of the cortical hierarchy (Young, 1992), the lateral prefrontal cortex (LPFC) is most commonly associated with complex and high-level cognition. Does LPFC also play a role in visual perception?
Studies in nonhuman primates have characterized the representation of visual information in LPFC, even when no cognitive task is involved (Riley et al., 2017; Tsao et al., 2008; Haile et al., 2019). In addition, feedback signals from LPFC appear necessary for recognizing objects under challenging conditions (Kar & DiCarlo, 2021). Yet to date, with few exceptions (e.g., Huth et al., 2012), how LPFC may support perception in humans remains understudied.
To fill this gap, we built encoding models that use visual features extracted from a deep neural network to predict brain activity in LPFC. We then contrasted the tuning profiles of LPFC for visual stimuli with those of the visual cortex. Strikingly, we found that the degree of individual variability was higher in LPFC: the stimuli that drove the maximal overall LPFC response varied widely across individuals.
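The encoding-model approach described above can be sketched in a minimal form: ridge regression mapping stimulus features (e.g., from a deep network) to voxel responses. All data below are toy stand-ins, not the authors' dataset, and the function names are hypothetical.

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge regression from stimulus features (n_stimuli x n_features)
    to voxel responses (n_stimuli x n_voxels).
    Closed-form solution: W = (X'X + alpha*I)^-1 X'Y."""
    n_features = features.shape[1]
    gram = features.T @ features + alpha * np.eye(n_features)
    return np.linalg.solve(gram, features.T @ responses)

def predict(features, weights):
    return features @ weights

# Toy data standing in for DNN features and voxel activity
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))      # 200 stimuli, 50 DNN features
true_w = rng.standard_normal((50, 10))  # 10 voxels
Y = X @ true_w + 0.1 * rng.standard_normal((200, 10))

W = fit_encoding_model(X, Y, alpha=1.0)
r = np.corrcoef(predict(X, W).ravel(), Y.ravel())[0, 1]
```

In practice the model would be fit with cross-validation and evaluated on held-out stimuli; the individual-variability contrast then compares, across subjects, which stimuli the fitted models predict to drive the strongest regional response.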
Abstracts
Presenter
Qi Lin, RIKEN
Center for Brain Science
Wako, Saitama
Japan
Introduction
Information processing operations in the visual cortex are tuned to the statistical regularities of sensory inputs and are crucially dependent on context. Neurobiologically inspired computational frameworks of visual processing emphasise functional interactions between higher and lower cortical areas, whereby higher areas send feedback signals that influence feedforward processing in lower areas. Using a partial visual occlusion approach in which a mask covers the lower right quadrant of natural scene images, we can isolate feedback signals in the retinotopic visual cortex that processes the occluded image portion (Muckli et al., 2015; Morgan, 2019; Muckli, 2023). Based on our earlier findings using apparent motion stimulation, where feedback signals suppress predictable sensory inputs (Alink et al., 2010), we hypothesised that a priming contextual scene would increase the response to subsequent unpredictable sensory information, while it would reduce or stabilise the response to consistent, expected sensory information.
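The occlusion logic above can be illustrated with a toy analysis: select voxels whose receptive fields fall inside the occluded lower-right quadrant (so they receive no feedforward input and reflect feedback), then compare responses across conditions. All values, thresholds, and effect sizes here are simulated assumptions for illustration, not results from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 500

# Hypothetical pRF centers (x, y in degrees of visual angle)
prf_x = rng.uniform(-8, 8, n_vox)
prf_y = rng.uniform(-8, 8, n_vox)

# Voxels with pRFs in the occluded lower-right quadrant receive no
# feedforward stimulation; their responses isolate cortical feedback.
occluded = (prf_x > 0) & (prf_y < 0)

# Simulated responses: build in the hypothesized effect, i.e. feedback
# voxels respond more when input is inconsistent with the priming scene.
resp_consistent = rng.normal(0.2, 0.1, n_vox)
resp_inconsistent = rng.normal(0.2, 0.1, n_vox)
resp_inconsistent[occluded] += 0.3

effect = (resp_inconsistent[occluded].mean()
          - resp_consistent[occluded].mean())
```

A real analysis would estimate pRFs from a separate retinotopic mapping run and test the condition difference statistically across subjects.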
Presenter
Zirui Zhang, University of Glasgow
School of Psychology and Neuroscience
Glasgow, Scotland
United Kingdom
Cortical columns are functionally distinct units of the cerebral cortex that are organized perpendicular to cortical layers and typically process similar types of information (e.g., orientation preference). The extensive identification of cortical columns across the cortex has led some to propose that columnar organization might be a fundamental principle of the entire neocortex. However, evidence for such columnar organization in the human primary somatosensory cortex (S1) remains limited (see [1, 2]).
Columnar organization, presumed to relate to slowly adapting (SA) and rapidly adapting (RA) receptors, has been identified in primate S1 [3,4]. Recent studies, however, suggest that S1 neurons are organized by feature selectivity (e.g., shape, movement, vibration) rather than strictly by receptor type [5]. Despite this, there may still be columnar organization in human S1 related to processing specific frequencies, such as 3 Hz (SA, intermittent pressure) and 30 Hz (RA, vibration), which merits further research [6].
Here, we seek to adapt UHF fMRI approaches proven effective in identifying visual columns (e.g., ocular dominance columns) to probe for columnar organization in S1.
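One simple way to quantify the frequency preference discussed above is a per-voxel selectivity index contrasting responses to 3 Hz and 30 Hz stimulation. The index and the toy response values below are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def selectivity_index(resp_3hz, resp_30hz):
    """Per-voxel frequency selectivity in [-1, 1]:
    +1 = fully prefers 3 Hz (SA-like, intermittent pressure),
    -1 = fully prefers 30 Hz (RA-like, vibration)."""
    return (resp_3hz - resp_30hz) / (resp_3hz + resp_30hz)

# Toy responses (e.g., beta weights) for three voxels along a
# simulated cortical column that prefers 3 Hz stimulation
resp_3 = np.array([1.2, 0.9, 1.1])
resp_30 = np.array([0.4, 0.3, 0.5])
si = selectivity_index(resp_3, resp_30)
```

A columnar organization would show up as consistent index signs along the cortical depth within a column, with neighboring columns potentially flipping sign.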
Our visual system actively gathers information from the environment to facilitate actions aligned with behavioral goals, and reward information plays a significant role in connecting sensory inputs with optimal actions. Previous animal studies have demonstrated that visually responsive brain regions, from the primary visual cortex (V1) [1,2] to the frontal eye field (FEF) [3], are sensitive to reward values within their receptive fields [4,5]. However, the mechanism through which potential rewards modulate visual representations during goal-directed actions in dynamic, naturalistic settings remains poorly understood, particularly in humans. To address this gap, we introduced an innovative Minecraft-based 3D interactive task in which participants strategically plan to achieve goals while navigating a virtual world. We hypothesized that rewards would elicit spatially specific responses, prioritizing the processing of important stimuli and supporting efficient visually guided actions toward them.
Presenter
Royoung Kim, Sungkyunkwan University
Suwon, Gyeonggi-do
Korea, Republic of
The occipitotemporal cortex (OTC) contains areas that preferentially respond to specific object categories like faces, body parts, and tools, with a mirrored organization across its lateral and ventral parts (Kanwisher, 2010; Taylor & Downing, 2011). However, the exact role of these areas in supporting action or object recognition and the dimensions they represent are still debated (Bracci & Op de Beeck, 2023; Peelen & Downing, 2017). Here, we investigate the role of the action dimension as one of the possible organizing principles of object space in visual cortex, by investigating the selectivity and the multivariate representations of category-selective clusters.
Mapping of retinotopic and category-selective regions in individual human brains using fMRI has become a routine task across multiple labs. As retinotopic and category-selective regions are selective for different properties of visual stimuli, two distinct experiments are typically conducted to map them. For example, traveling-wave stimuli with bars (Dumoulin and Wandell, 2008; Benson et al., 2018; Finzi et al., 2021; Kim et al., 2023) or with wedges and rings (Engel et al., 1997; Benson et al., 2018) are typically used to map population receptive fields (pRFs; Dumoulin and Wandell, 2008), identify visual field maps, and delineate borders of retinotopic visual regions (V1, V2, V3, hV4, VO, LO, TO, V3ab, and IPS). In contrast, functional localizer experiments (Kanwisher et al., 1997; Stigliani et al., 2015) using various categorical images of faces, bodies, scenes, and objects are used to define category-selective regions (mFus-faces, pFus-faces, mOTS-words, pOTS-words, OTS-bodies, CoS-places). Here, we developed a method that generates optimal stimuli for simultaneously mapping retinotopic and category-selective regions in a single fMRI experiment.
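The pRF mapping referenced above (Dumoulin and Wandell, 2008) rests on a simple forward model: a voxel's predicted response is the overlap between a 2D Gaussian receptive field and the stimulus aperture at each timepoint. The sketch below illustrates that model with a synthetic sweeping bar; grid size, units, and the sweep design are illustrative assumptions.

```python
import numpy as np

def prf_prediction(stim, x0, y0, sigma, xx, yy):
    """Predicted response time course of a voxel with a 2D Gaussian pRF
    (center (x0, y0), size sigma) to a binary aperture movie stim of
    shape (n_timepoints, height, width)."""
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    g /= g.sum()  # normalize so the prediction is an overlap fraction
    return (stim * g).sum(axis=(1, 2))

# Toy visual field grid (degrees) and a vertical bar sweeping rightward
n = 64
xs = np.linspace(-10, 10, n)
xx, yy = np.meshgrid(xs, xs)
stim = np.zeros((n, n, n))
for t in range(n):
    stim[t, :, t] = 1.0  # bar occupies column t at time t

pred = prf_prediction(stim, x0=0.0, y0=0.0, sigma=2.0, xx=xx, yy=yy)
peak_time = int(np.argmax(pred))  # peaks when the bar crosses the center
```

Fitting inverts this model: for each voxel, search over (x0, y0, sigma) for the prediction that best matches the measured time course (after convolution with a hemodynamic response function, omitted here for brevity).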
Presenter
Insub Kim, Stanford University
Stanford, CA
United States