Poster No:
1147
Submission Type:
Abstract Submission
Authors:
Mingli Zhang1, Paule Toussaint2, Alan Evans3
Institutions:
1McGill University, Brossard, Quebec, 2McGill University, Montreal, Quebec, 3McGill University, Montreal, Quebec
First Author:
Mingli Zhang
Co-Author(s):
Paule Toussaint, Alan Evans
Introduction:
Interpolation techniques are widely used for frame-rate enhancement and detailed reconstruction of 3D datasets. Among their many applications, interpolation methods have proven particularly valuable in neuroimaging for replacing missing data. This problem becomes especially critical when dealing with ultra-high (cellular) resolution brain images. One such application is the BigBrain, a 3D reconstruction of 7404 histological brain sections at 20-micrometer isotropic resolution [1]. The original sections of the BigBrain were re-scanned at 1 micron in-plane, and images were captured through selected slices at different depths with an optical microscopy technique. To deal with the large size of the data, slices were divided into patches that were aligned and stacked to form 6x6x6 mm^3 volumes at 1-micron isotropic resolution.
In this work, we present a novel, deep learning-based method for near-duplicate image synthesis [2,3,4], combining bi-directional Flows of Feature Pyramid (FFP) [4] with an Adaptive Feature Learning (AFL) [2] algorithm, designed to replace missing data and create seamless, smooth 3D blocks of the 1-micron isotropic BigBrain.
Methods:
Two 6x6x6 mm^3 blocks of the BigBrain were downloaded from EBRAINS (https://ebrains.eu/): one at 2-micron isotropic resolution (25 GB) and the other at 8-micron, as a proof of concept. All code was run on two NVIDIA GeForce GTX 1080 GPUs with CUDA version 12.4.
We adopt a multi-scale feature extractor based on a feature pyramid architecture to accurately model motions of varying magnitudes, from subtle movements to large-scale displacements. Built upon this, a scale-agnostic bi-directional motion estimator effectively handles both small and large movements (see the sketch below).
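As an illustration, the following is a minimal PyTorch sketch of such a pyramid extractor paired with a coarse-to-fine bi-directional flow estimator. The module names, channel sizes, and number of levels are our own assumptions for exposition, not the exact architecture used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # A stride-2 convolution halves the spatial resolution per pyramid level.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1), nn.LeakyReLU(0.2),
    )

class FeaturePyramid(nn.Module):
    """Shared multi-scale feature extractor (channel sizes are illustrative)."""
    def __init__(self, levels=(16, 32, 64, 96)):
        super().__init__()
        chans = (1,) + tuple(levels)  # single-channel histology input assumed
        self.stages = nn.ModuleList(
            conv_block(chans[i], chans[i + 1]) for i in range(len(levels))
        )

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # fine-to-coarse list of feature maps

class BiDirectionalFlowEstimator(nn.Module):
    """Coarse-to-fine estimator that refines a residual flow at every scale,
    so one set of weights handles both small and large displacements."""
    def __init__(self, levels=(16, 32, 64, 96)):
        super().__init__()
        # Input: two feature maps plus the upsampled flow (2 channels).
        self.refiners = nn.ModuleList(
            nn.Conv2d(2 * c + 2, 2, 3, padding=1) for c in levels
        )

    def forward(self, feats0, feats1):
        flow = None
        # Iterate from the coarsest pyramid level to the finest.
        for f0, f1, refine in zip(reversed(feats0), reversed(feats1),
                                  reversed(list(self.refiners))):
            if flow is None:
                flow = torch.zeros(f0.size(0), 2, f0.size(2), f0.size(3),
                                   device=f0.device)
            else:
                # Doubling resolution doubles the displacement magnitude.
                flow = 2.0 * F.interpolate(flow, size=f0.shape[-2:],
                                           mode='bilinear', align_corners=False)
            flow = flow + refine(torch.cat([f0, f1, flow], dim=1))
        return flow  # forward flow; swap the inputs for the backward direction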
To ensure visually coherent synthesis, we integrate Gram Loss, Gradient Loss, and Perceptual Loss into the optimization process. Gram Loss facilitates global texture preservation, Gradient Loss retains local edge details, and Perceptual Loss emphasizes textures and overall appearance.
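These three terms can be written compactly as below; this is a hedged PyTorch sketch in which the deep features for the Gram and perceptual terms are assumed to come from a pretrained backbone (e.g., VGG), and the relative weighting of the terms is left unspecified.

```python
import torch
import torch.nn.functional as F

def gram_loss(feat_pred, feat_ref):
    # Gram matrices capture global texture statistics; matching them
    # between prediction and reference preserves overall texture.
    b, c, h, w = feat_pred.shape
    fp = feat_pred.view(b, c, h * w)
    fr = feat_ref.view(b, c, h * w)
    gp = fp @ fp.transpose(1, 2) / (c * h * w)
    gr = fr @ fr.transpose(1, 2) / (c * h * w)
    return F.mse_loss(gp, gr)

def gradient_loss(pred, ref):
    # Finite-difference image gradients penalize blurred or shifted edges.
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dx_r = ref[..., :, 1:] - ref[..., :, :-1]
    dy_r = ref[..., 1:, :] - ref[..., :-1, :]
    return F.l1_loss(dx_p, dx_r) + F.l1_loss(dy_p, dy_r)

def perceptual_loss(feats_pred, feats_ref):
    # L1 distance between deep features emphasizes perceived texture
    # and appearance rather than raw per-pixel error.
    return sum(F.l1_loss(p, r) for p, r in zip(feats_pred, feats_ref))
```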
Additionally, adaptive loss functions are introduced to focus on high-frequency or critical regions, providing flexibility during optimization. This adaptability improves robustness and generalization across diverse scenarios.
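One plausible realization of such adaptive weighting, assuming single-channel image patches, is to re-weight a per-pixel reconstruction loss by a normalized high-frequency (Laplacian) response of the reference frame. The functions below are illustrative, not the exact scheme used.

```python
import torch
import torch.nn.functional as F

def highfreq_weight(ref, eps=0.1):
    # A Laplacian response marks high-frequency regions (edges, fine
    # cytoarchitecture); eps keeps a floor weight on smooth regions.
    # `ref` is assumed to be a single-channel (b, 1, h, w) tensor.
    kernel = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                          device=ref.device).view(1, 1, 3, 3)
    resp = F.conv2d(ref, kernel, padding=1).abs()
    w = resp / (resp.amax(dim=(-2, -1), keepdim=True) + 1e-8)
    return w + eps

def adaptive_l1(pred, ref):
    # Per-pixel reconstruction error, re-weighted toward critical regions.
    w = highfreq_weight(ref)
    return (w * (pred - ref).abs()).mean()
```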
Results:
The motion estimator leverages the feature pyramid to align frames by predicting motion vectors at multiple scales, and corrects for any residual motion in the 3D blocks (Figure 1). The combination of losses enhances the quality and consistency of the interpolated frames, yielding smooth transitions between frames (Figure 2).
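For concreteness, predicted motion vectors are typically applied by backward warping, as in this illustrative PyTorch helper (an assumed utility, not taken verbatim from our codebase); a missing frame can then be synthesized by blending the two neighbours warped toward its position with appropriately scaled flows.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    # Samples `img` at locations displaced by `flow` (in pixels): the
    # standard way predicted motion vectors are used to align frames.
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing='ij')
    grid = torch.stack((xs, ys), dim=0).float() + flow        # (b, 2, h, w)
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)              # (b, h, w, 2)
    return F.grid_sample(img, grid, mode='bilinear',
                         padding_mode='border', align_corners=True)
```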

Figure 1. Motion correction algorithm applied to a 1-micron resolution image patch of the BigBrain.

Figure 2. Generated images as a replacement for two missing frames in a 3D 8-micron block of BigBrain. Images to the left and right of the dark patches were used as input to our interpolation framework.
Conclusions:
Our proposed solution integrates multi-scale feature extraction, scale-agnostic motion estimation, gradient features, and adaptive feature norms to achieve high-quality interpolation results. By incorporating perceptual and style losses alongside a gradient total-variation loss, the framework demonstrates robust performance in preserving both structural integrity and fine-grained detail. Evaluations on 8-, 2-, and 1-micron BigBrain patches confirm the effectiveness of our approach in generating continuous frames within sample blocks, a step towards building a seamless, ultra-high-resolution 1-micron BigBrain.
Modeling and Analysis Methods:
Classification and Predictive Modeling 1
Image Registration and Computational Anatomy 2
Methods Development
Neuroinformatics and Data Sharing:
Brain Atlases
Keywords:
Atlasing
Computing
Data analysis
Modeling
1|2 Indicates the priority used for review
By submitting your proposal, you grant permission for the Organization for Human Brain Mapping (OHBM) to distribute your work in any format, including video, audio, print and electronic text through OHBM OnDemand, social media channels, the OHBM website, or other electronic publications and media.
I accept
The Open Science Special Interest Group (OSSIG) is introducing a reproducibility challenge for OHBM 2025. This new initiative aims to enhance the reproducibility of scientific results and foster collaborations between labs. Teams will consist of a “source” party and a “reproducing” party, and will be evaluated on the success of their replication, the openness of the source work, and additional deliverables. Click here for more information.
Propose your OHBM abstract(s) as source work for future OHBM meetings by selecting one of the following options:
I do not want to participate in the reproducibility challenge.
Please indicate below if your study was a "resting state" or "task-activation" study.
Other
Healthy subjects only or patients (note that patient studies may also involve healthy subjects):
Healthy subjects
Was this research conducted in the United States?
No
Were any human subjects research approved by the relevant Institutional Review Board or ethics panel?
NOTE: Any human subjects studies without IRB approval will be automatically rejected.
No
Were any animal research approved by the relevant IACUC or other animal research panel?
NOTE: Any animal studies without IACUC approval will be automatically rejected.
No
Please indicate which methods were used in your research:
Computational modeling
For human MRI, what field strength scanner do you use?
If Other, please list
Which processing packages did you use for your study?
Other, Please list
Provide references using APA citation style.
1. Amunts, K., Lepage, C., Borgeat, L., Mohlberg, H., Dickscheid, T., Rousseau, M. É., ... & Evans, A. C. (2013). BigBrain: An ultrahigh-resolution 3D human brain model. Science, 340(6139), 1472-1475.
2. Reda, F., Kontkanen, J., Tabellion, E., Sun, D., Pantofaru, C., & Curless, B. (2022, October). FILM: Frame interpolation for large motion. In European Conference on Computer Vision (pp. 250-266). Cham: Springer Nature Switzerland.
3. Sun, L., Gehrig, D., Sakaridis, C., Gehrig, M., Liang, J., Sun, P., ... & Scaramuzza, D. (2024). A unified framework for event-based frame interpolation with ad-hoc deblurring in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence.
4. Jin, X., Wu, L., Shen, G., Chen, Y., Chen, J., Koo, J., & Hahm, C. H. (2023). Enhanced bi-directional motion estimation for video frame interpolation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 5049-5057).
Yes
Please select the country that the first author on this abstract resides and works in from the UNESCO Institute of Statistics and World Bank List of Low and Middle Income Countries (based on gross national income per capita).
Cambodia