Movie-viewing as a clinical tool: Integrating emotional valence, gaze, and contextual language tasks

Poster No:

805 

Submission Type:

Abstract Submission 

Authors:

Manuel Marte1,2, Bryce Gillis2, Rowan Faris2, Colin Galvin3, Laura Rigolo3, Yanmei Tie3, Swathi Kiran1, Einat Liebenthal4

Institutions:

1Center for Brain Recovery, Boston University, Boston, MA, 2Institute for Technology in Psychiatry, McLean Hospital, Harvard Medical School, Belmont, MA, 3Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, 4Institute for Technology in Psychiatry, McLean Hospital, Harvard Medical School, Boston, MA

First Author:

Manuel Marte  
Center for Brain Recovery, Boston University; Institute for Technology in Psychiatry, McLean Hospital, Harvard Medical School
Boston, MA; Belmont, MA

Co-Author(s):

Bryce Gillis  
Institute for Technology in Psychiatry, McLean Hospital, Harvard Medical School
Belmont, MA
Rowan Faris  
Institute for Technology in Psychiatry, McLean Hospital, Harvard Medical School
Belmont, MA
Colin Galvin  
Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School
Boston, MA
Laura Rigolo  
Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School
Boston, MA
Yanmei Tie  
Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School
Boston, MA
Swathi Kiran  
Center for Brain Recovery, Boston University
Boston, MA
Einat Liebenthal  
Institute for Technology in Psychiatry, McLean Hospital, Harvard Medical School
Boston, MA

Introduction:

Naturalistic paradigms offer unique insights into real-world processing that may not be captured by traditional assessments. We investigated whether a movie-viewing paradigm, combining continuous emotional valence ratings with context-specific language tasks, could differentiate between healthy controls (HC, n=50), persons with aphasia (AP, n=30), and individuals with mild cognitive impairment (MCI, n=17). A subset of participants completed eye-tracking during movie-viewing (HC: n=50; AP: n=7; MCI: n=17).

Methods:

Participants watched emotionally engaging movie clips while providing continuous valence ratings (-4 to +4), followed by comprehension questions and antonym generation tasks. We examined: (1) performance on post-movie language tasks, (2) rating deviation from group consensus using Root Mean Squared Z-scores, (3) rating complexity across timescales using multiscale sample entropy, and (4) the diagnostic utility of these measures using logistic LASSO classification models incorporating all features and their pairwise interactions. In an exploratory analysis, we examined inter-subject correlations (ISC) of eye-gaze patterns both horizontally and vertically in the subset of participants who completed eye-tracking.
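Implementation details for the deviation and complexity measures are not reported in the abstract; the sketch below shows one common Python/NumPy formulation of a root mean squared z-score against a consensus rating time course and of multiscale sample entropy, under assumed parameter choices (embedding dimension m = 2, tolerance r = 0.2 × SD, ten timescales) and a hypothetical array layout.

import numpy as np

def rms_z_deviation(subject_ratings, consensus_ratings):
    # Root mean squared z-score of one participant's valence time course
    # relative to the group consensus, computed time point by time point.
    # subject_ratings: (n_timepoints,); consensus_ratings: (n_subjects,
    # n_timepoints), e.g., the reference group's ratings (assumed layout).
    mu = consensus_ratings.mean(axis=0)
    sd = consensus_ratings.std(axis=0, ddof=1)
    z = (subject_ratings - mu) / sd
    return np.sqrt(np.mean(z ** 2))

def sample_entropy(x, m=2, r=0.2):
    # Sample entropy with embedding dimension m and tolerance r * SD(x).
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def matches(dim):
        # Count template pairs of length `dim` within tolerance (Chebyshev
        # distance), excluding self-matches; n - m templates are used at
        # both lengths so the two counts are comparable.
        tpl = np.array([x[i:i + dim] for i in range(n - m)])
        hits = 0
        for i in range(len(tpl) - 1):
            d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            hits += np.sum(d <= tol)
        return hits

    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 else np.nan

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.2):
    # Coarse-grain the series at each timescale (non-overlapping window
    # means) and compute sample entropy of the coarse-grained series.
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse, m=m, r=r))
    return np.array(out)

In this formulation, rms_z_deviation yields one deviation score per participant and multiscale_entropy yields one entropy value per timescale, which can then be compared across groups.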

Results:

HC outperformed both clinical groups on the language tasks (p < 0.05), with MCI showing an intermediate profile: better than AP on antonym generation (p < 0.001) but similar on comprehension (p = 0.067). AP showed significantly greater deviations from the group-consensus valence ratings than both HC and MCI (both p ≤ 0.004), whereas HC and MCI showed similar patterns. Complexity analysis demonstrated systematic group differences, with AP showing a steeper decline in entropy at longer timescales (p < 0.001) and MCI exhibiting an intermediate profile relative to HC. Preliminary eye-gaze metrics revealed no significant group differences, but comprehension scores in AP showed a trend-level association with horizontal gaze synchronization (p = 0.06).
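The exact ISC formulation for gaze is not specified; a common choice is the leave-one-out variant sketched below, in which each participant's gaze trace along one axis is correlated with the mean trace of all remaining participants (array layout assumed).

import numpy as np

def leave_one_out_isc(gaze):
    # Inter-subject correlation of gaze position along one axis.
    # gaze: (n_subjects, n_timepoints) array of horizontal (or vertical)
    # gaze coordinates resampled to a common time base.
    gaze = np.asarray(gaze, dtype=float)
    isc = np.empty(gaze.shape[0])
    for i in range(gaze.shape[0]):
        # Correlate participant i's trace with the mean trace of everyone else.
        others = np.delete(gaze, i, axis=0).mean(axis=0)
        isc[i] = np.corrcoef(gaze[i], others)[0, 1]
    return isc

Per-participant ISC values computed this way could then be correlated with comprehension scores (e.g., with scipy.stats.pearsonr) to probe the kind of gaze-behavior association described above.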

Classification models achieved strong performance distinguishing AP from HC (AUC = 0.928, 95% CI: 0.877-0.989) and good performance for the combined patient groups versus HC (AUC = 0.859, 95% CI: 0.784-0.934). Performance was moderate for HC versus MCI classification (AUC = 0.721, 95% CI: 0.572-0.879). Post-movie language task performance and continuous movie valence ratings emerged as the strongest predictors of group membership, with sample entropy providing additional predictive value.
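For context, a logistic LASSO with all pairwise interaction terms, as described in the Methods, can be assembled in scikit-learn along the lines sketched below; the cross-validation scheme, penalty grid, and simulated feature matrix are illustrative assumptions rather than the authors' exact pipeline.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold, cross_val_score

def lasso_interaction_classifier():
    # Expand the features with all pairwise interactions, standardize them,
    # and fit an L1-penalized (LASSO) logistic regression with the penalty
    # strength selected by internal cross-validation.
    return make_pipeline(
        PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
        StandardScaler(),
        LogisticRegressionCV(penalty="l1", solver="liblinear",
                             Cs=10, cv=5, scoring="roc_auc", max_iter=5000),
    )

# Hypothetical layout: X holds language-task scores, RMSZ deviation, and
# entropy features per participant; y codes group membership (0 = HC, 1 = AP).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))
y = rng.integers(0, 2, size=80)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(lasso_interaction_classifier(), X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {aucs.mean():.3f}")

One way to obtain confidence intervals on the AUC like those reported above would be to bootstrap the held-out predictions.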
Supporting Image: OHBM_3PanelFig.png
 

Conclusions:

Our findings demonstrate that continuous valence ratings during movie-viewing and post-movie language performance can effectively characterize cognitive-linguistic impairments. The paradigm's sensitivity to both marked (AP) and subtle (MCI) impairments supports its potential as an ecologically valid assessment tool. Future analyses of the eye-tracking data will examine fine-grained measures, including spatiotemporal fixation density distributions, gaze entropy metrics, and scene-specific gaze behavior, which may reveal subtle group differences in visual attention allocation and exploration strategies beyond the global synchronization measures reported here. These results suggest that naturalistic paradigms not only offer robust diagnostic value but could also inform the development of targeted interventions for different clinical populations.

Disorders of the Nervous System:

Neurodegenerative/ Late Life (eg. Parkinson’s, Alzheimer’s) 2

Emotion, Motivation and Social Neuroscience:

Emotional Perception

Language:

Language Comprehension and Semantics 1

Lifespan Development:

Aging

Modeling and Analysis Methods:

Classification and Predictive Modeling

Keywords:

Aphasia
Cognition
DISORDERS
Emotions
Language

1|2 Indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print, and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation" study.

Other

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Patients

Was this research conducted in the United States?

Yes

Are you Institutional Review Board (IRB) certified? Please note: Failure to have IRB approval, if applicable, will lead to automatic rejection of abstract.

Yes, I have IRB or AUCC approval

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Neurophysiology
Behavior
Neuropsychological testing
Other, please specify: Eye-gaze

Provide references using APA citation style.

Not applicable.

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No