Tracking BOLD traveling waves of different metrical structures during wind playing and speaking

Poster No:

775 

Submission Type:

Abstract Submission 

Authors:

Ut Meng Lei1,2,3, Victoria Lai Cheng Lei1,2,3, Katy Ieong Cheng HO WEATHERLY1,4, Defeng Li1,2,3, Ruey-Song Huang1,2,5

Institutions:

1University of Macau, Macau, China, 2Centre for Cognitive and Brain Sciences, University of Macau, Macau, China, 3Faculty of Arts and Humanities, University of Macau, Macau, China, 4Faculty of Education, University of Macau, Macau, China, 5Faculty of Science and Technology, University of Macau, Macau, China

First Author:

Ut Meng Lei
University of Macau; Centre for Cognitive and Brain Sciences; Faculty of Arts and Humanities
Macau, China

Co-Author(s):

Victoria Lai Cheng Lei
University of Macau; Centre for Cognitive and Brain Sciences; Faculty of Arts and Humanities
Macau, China
Katy Ieong Cheng HO WEATHERLY
University of Macau; Faculty of Education
Macau, China
Defeng Li
University of Macau; Centre for Cognitive and Brain Sciences; Faculty of Arts and Humanities
Macau, China
Ruey-Song Huang
University of Macau; Centre for Cognitive and Brain Sciences; Faculty of Science and Technology
Macau, China

Introduction:

Despite extensive research on music and language processing, the neural commonalities and distinctions between wind instrument playing and speech production, both of which engage the vocal tract, remain unclear. Prior research on musicians playing keyboard and string instruments has highlighted brain activations related to bimanual coordination, but studies of wind instrument playing and speech production are scarce because overt vocal production is difficult during neuroimaging experiments. This study used rapid phase-encoded fMRI to capture high-resolution brain dynamics and information flow during overt music and language tasks with different metrical structures. Subjects played musical phrases and spoke sentences with minimal head movement while hearing their own productions through noise-reduction techniques.

Methods:

Thirty Cantonese-speaking wind players were recruited (age: M = 22.1 years, SD = 5.92). All of them spoke English as their second language (self-reported proficiency score in the Language History Questionnaire: M = 0.6 [very good], SD = 0.03). They were all right-handed and had normal or corrected-to-normal vision. In each fMRI session, the subjects performed music (wind instrument playing) or language (speaking) tasks inside the MRI scanner. Subjects read musical notations or English sentences with different metrical structures on an LCD screen and simultaneously produced the targets in three conditions: (1) baseline: reproduce musical phrases (using a plastic recorder) with no accent symbols, or read aloud English sentences with no accent symbols; (2) congruent: reproduce musical phrases with accent symbols placed on the downbeat positions, or read aloud English sentences with accent symbols placed on positions congruent with natural utterances; (3) incongruent: reproduce musical phrases with some accent symbols placed on the upbeat positions, or read aloud English sentences with some accent symbols placed on positions incongruent with natural utterances. Each music or language condition was repeated twice in two 256-s scans, resulting in a total of 12 scans per session. Functional scans were acquired using an echo planar imaging (EPI) sequence (55 axial images, matrix size = 64x64, voxel size = 3x3x3 mm, TR = 1 s). Functional images were analyzed with the Fourier transform, and the signal amplitudes and phases at the task frequency (16 cycles/scan) were displayed on cortical surfaces reconstructed from structural images aligned with the functional images. A traveling-wave movie was created to track the streams of information flow during each task. Surge profiles were obtained by computing the distribution of activation phases across vertices in each hemisphere and in selected surface-based regions of interest (sROIs).
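As an illustration of the phase-encoded analysis described above, the sketch below extracts the amplitude and phase at the task frequency from a single voxel time series. This is a minimal, hypothetical example using numpy (not the authors' actual pipeline): a 256-s scan with TR = 1 s gives 256 time points, so a task frequency of 16 cycles/scan corresponds to FFT bin 16.

```python
import numpy as np

def amplitude_and_phase(ts, task_freq=16):
    """Amplitude and phase of one voxel's time series at the task frequency.

    ts: 1-D array of BOLD samples (here 256 points, TR = 1 s).
    task_freq: task frequency in cycles per scan, i.e., the FFT bin index.
    """
    spectrum = np.fft.rfft(ts - ts.mean())  # remove DC, then real FFT
    return np.abs(spectrum[task_freq]), np.angle(spectrum[task_freq])

# Simulated voxel: a 16-cycle sinusoid with a phase lag, plus noise.
t = np.arange(256)
rng = np.random.default_rng(0)
ts = np.sin(2 * np.pi * 16 * t / 256 - 0.5) + 0.1 * rng.standard_normal(256)

amp, phase = amplitude_and_phase(ts)
# The recovered phase encodes the voxel's activation delay; mapping this
# phase across cortical-surface vertices yields the traveling-wave movie.
```

In this convention, voxels activated later in the task cycle show a more negative (lagged) phase, which is what the surge profiles aggregate across vertices.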

Results:

Music and language tasks activated overlapping motor and auditory regions bilaterally. In both domains, baseline activation was the earliest and weakest, while congruent conditions elicited greater and more delayed activation. In incongruent conditions, with unpredictable metrical structures, brain activation was the strongest and most delayed. Music tasks showed smaller activation differences between congruent and incongruent conditions than language tasks did. Moreover, speaking activated higher-level auditory cortex, STS, and SMA more extensively than wind instrument playing. Both domains exhibited left-dominant activations and shared streams of traveling waves in visual, auditory, and motor cortices.

Conclusions:

Using rapid phase-encoded fMRI with high spatial and temporal resolution, we captured the information flow during overt wind instrument playing and speaking. Results show cortical overlaps and shared traveling waves between the music and language domains when processing sequences with different metrical structures. Activation delays were longest in the incongruent condition, intermediate in the congruent condition, and shortest in the baseline condition. The findings suggest a potential transfer of rhythm skills between music and language, supported by cross-domain neural overlaps.

Higher Cognitive Functions:

Music 1

Language:

Language Acquisition
Speech Production 2

Novel Imaging Acquisition Methods:

BOLD fMRI

Keywords:

Cognition
Cortex
FUNCTIONAL MRI
Language
Motor
Other - Music

1|2 indicates the priority used for review

Abstract Information

By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.

I accept

The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:

I am submitting this abstract as an original work to be reproduced. I am available to be the “source party” in an upcoming team and consent to have this work listed on the OSSIG website. I agree to be contacted by OSSIG regarding the challenge and may share data used in this abstract with another team.

Please indicate below if your study was a "resting state" or "task-activation" study.

Task-activation

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was this research conducted in the United States?

No

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Yes

Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI

For human MRI, what field strength scanner do you use?

3.0T

Which processing packages did you use for your study?

FreeSurfer

Provide references using APA citation style.

Alagoz, G., et al. (2024). The shared genetic architecture and evolution of human language and musical rhythm. Nature Human Behaviour.

Baumann, S., et al. (2007). A network for audio-motor coordination in skilled pianists and non-musicians. Brain Research, 1161, 65-78.

Chen, C. F., et al. (2019). Unraveling the spatiotemporal brain dynamics during a simulated reach-to-eat task. Neuroimage, 185, 58-71.

Engel, S. A. (2012). The development and use of phase-encoded functional MRI designs. Neuroimage, 62(2), 1195-1200.

Fiveash, A., et al. (2021). Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders. Neuropsychology, 35(8), 771-791.

Goswami, U. (2022). Language acquisition and speech rhythm patterns: an auditory neuroscience perspective. Royal Society Open Science, 9(7), 211855.

Lei, V. L. C., et al. (2024). Phase-encoded fMRI tracks down brainstorms of natural language processing with subsecond precision. Human Brain Mapping, 45(2), e26617.

Pa, J., & Hickok, G. (2008). A parietal-temporal sensory-motor integration area for the human vocal tract: Evidence from an fMRI study of skilled musicians. Neuropsychologia, 46(1), 362-368.

Vuust, P., et al. (2005). To musicians, the message is in the meter: Pre-attentive neuronal responses to incongruent rhythm are left-lateralized in musicians. Neuroimage, 24(2), 560-564.

Vuust, P., et al. (2006). It don't mean a thing…: Keeping the rhythm during polyrhythmic tension, activates language areas (BA47). Neuroimage, 31(2), 832-841.

UNESCO Institute of Statistics and World Bank Waiver Form

I attest that I currently live, work, or study in a country on the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries list provided.

No