Integrative Biomedical Research

Integrative Biomedical Research (Journal of Angiotherapy) | Online ISSN  3068-6326
RESEARCH ARTICLE   (Open Access)

Deep Mapping of Neuro-phenotypes Using AI-Enhanced MRI: A Contrast-Based Multiplanar Analysis

Moin Uddin Patwary1*, Tanjila Hasan Tulon1, Shamim Al Mamun1, Md. Mortuza Galib1, Ezaz Ahmad Shah1, Antu Das1, Rudro Kumar Saha1, Sabrina Sultana Sneha1, Arobi Akter Mim1, Jinia1


Integrative Biomedical Research 9 (1) 1-8 https://doi.org/10.25163/biomedical.9110528

Submitted: 29 September 2025 Revised: 22 November 2025  Published: 30 November 2025 


Abstract

Background: Magnetic Resonance Imaging (MRI) is the workhorse of brain morphometry. However, conventional T1-weighted and T2-weighted scans lack the voxel-level sensitivity needed for neuro-phenotyping or the early detection of pathology. This limitation has driven the introduction of artificial intelligence (AI) to enhance structural imaging.

Methods: MRI data from 828 participants (aged 5-21 years) were obtained from an open-access dataset. The images underwent preprocessing comprising skull stripping, bias field correction, intensity normalization, and CLAHE-based contrast enhancement. AI-driven voxel-wise intensity mapping was then applied to generate enhanced heatmaps across the axial, coronal, and sagittal planes. Voxel intensities of specific regions of interest (ROIs) were extracted and compared across the original, contrast-enhanced, and AI-processed images using one-way ANOVA.

Results: AI-generated imaging produced better visualization of cortical borders, subcortical regions, and central structures. Increased contrast made certain asymmetries and intensity gradients visible, particularly along the anterior-posterior axis. Voxel intensity differed significantly across groups in the axial (F(2,12) = 41.56, p = 4.03×10⁻⁶), coronal (F(2,12) = 50.72, p = 1.40×10⁻⁶), and sagittal (F(2,12) = 57.76, p = 6.95×10⁻⁷) planes, illustrating the added value of the enhancement pipeline.

Conclusion: This research confirms that AI-based image enhancement of MRI yields statistically validated improvements in voxel-level anatomical visibility. The proposed pipeline offers a high-precision toolkit for neuro-phenotyping and automated brain morphometry for researchers in developmental and clinical settings.

Keywords: AI-enhanced MRI, Voxel-based analysis, Brain phenotyping, Contrast mapping, T2-weighted imaging

1. Introduction

MRI has become one of the most powerful noninvasive techniques for mapping the structural and functional architecture of the human brain. With high spatial resolution and no ionizing radiation, MRI enables in vivo examination of grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF) at the neuroanatomical level during development (Lerch et al., 2017). In particular, T1-weighted and T2-weighted MRI are frequently used to study neurodevelopment, emotional regulation, and neuropsychiatric disorders in children and adolescents (Norbom et al., 2020). Classic MRI visualizations, however, do not capture minute morphological differences, such as voxel-level intensity asymmetry or hemispheric variation, that could be useful for more complex phenotyping or diagnostic assessments (Zhang et al., 2024).

Advances in AI and image processing are helping to overcome these limitations. Contrast enhancement techniques such as histogram equalization and Contrast Limited Adaptive Histogram Equalization (CLAHE) have been shown to improve the visibility of tissue boundaries in structural MRI data (Saifullah et al., 2024). In addition, AI-based voxel-wise transformations and color-coded heatmaps allow researchers to visualize intensity-based variation across cortical and subcortical areas at the microstructural level (Bacon et al., 2024). These methods reveal concealed patterns and minute variations that are not visible in basic grayscale images, which makes them valuable for neuro-phenotyping, disease prediction, and gene-brain mapping (Neier et al., 2021).

Recently, large-scale neuroimaging projects such as the Human Connectome Project and OpenNeuro have made massive public datasets available to the research community. One such dataset is the Emotion and Development Branch Phenotyping and DTI (2012-2017) dataset on OpenNeuro, which contains MRI scans of 828 subjects aged 5-21, with high-resolution T1w and T2w scans for each subject. An open-access, demographically diverse imaging dataset of this kind presents an unprecedented opportunity to study developmental trajectories, cognitive-affective processing, and neurostructural abnormalities at the population level (Cameron et al., 2024).

Many studies apply qualitative, superficial image enhancement and judge the result by eye, but very few quantify the enhancement or report statistical comparisons between original and processed images (Agripnidis et al., 2025). In this study we present a reproducible pipeline for enhancing MRI images and performing voxel-wise analysis. The system comprises preprocessing (bias correction and skull stripping), contrast enhancement (CLAHE), AI-based voxel-wise transformation (heatmap generation), and statistical validation. The aim is to evaluate whether AI-augmented structural representations provide a statistically significant advantage over traditional imaging (Khalifa & Albadawy, 2024).

To test this, voxel intensity values were extracted from the three planes (axial, coronal, and sagittal) under three conditions: original (A/B/C), contrast-enhanced (A1/B1/C1), and AI-processed (A2/B2/C2). Region-of-interest (ROI) definitions were based on standard atlases. One-way ANOVA was performed for each plane to determine whether intensity differences across conditions were significant. We hypothesized that AI-processed images would demonstrate greater anatomical contrast at the voxel level, assessable both qualitatively and quantitatively through statistical testing (Kim et al., 2024; Sun & Ng, 2022).

Our combined processing, visualization, and statistical assessment methodology differs from studies based solely on visual interpretation of enhanced images. This integration ensures that enhancements are not only visually informative but also analytically valid. The overarching goal of this research is to establish a robust mechanism for downstream applications in brain morphometry, computational phenotyping, and gene-brain-behavior studies.

This study provides the first statistically grounded demonstration of AI-based improvements in voxel-level visibility, laying the groundwork for more accurate and interpretable neuroimaging workflows. Pipelines of this kind can support diagnosis, personalized brain mapping, and other methods in cognitive and clinical neuroscience.

2. Materials and Methods

2.1 Dataset Description

Data were obtained from the publicly available OpenNeuro database, specifically the dataset “Emotion and Development Branch Phenotyping and DTI (2012-2017).” MRI data were collected from 828 participants between 5 and 21 years of age, with T1-weighted and T2-weighted scans for each subject. All scans were acquired on a 3T Siemens TrioTim scanner at a voxel resolution of 1.0 × 1.0 × 1.0 mm³ with a slice thickness of 1 mm (Cameron et al., 2024).
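As an illustration, a minimal sketch of how individual scans from this dataset can be loaded and checked in Python with nibabel is shown below; the file path follows the BIDS layout used on OpenNeuro and is purely illustrative.

```python
import nibabel as nib  # standard library for reading NIfTI files

# Illustrative path; the actual subject IDs and filenames in ds004605 may differ.
t2_path = "ds004605/sub-01/anat/sub-01_T2w.nii.gz"

img = nib.load(t2_path)          # reads the NIfTI header and lazily maps the data
data = img.get_fdata()           # voxel intensities as a NumPy array
print(data.shape)                # matrix dimensions (x, y, z)
print(img.header.get_zooms())    # voxel size, expected to be about (1.0, 1.0, 1.0) mm
```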

2.2 Preprocessing Pipeline

A set of open-source neuroimaging tools (e.g., FSL, ANTs) was used together with a custom pipeline built in Python to preprocess the data. The process began with reorientation and N4 bias field correction for intensity inhomogeneity, followed by skull stripping to remove non-brain tissue and correction for head-movement effects. The scans were then aligned to MNI152 standard space using affine and non-linear registration techniques. Finally, voxel intensities were normalized using min-max scaling so that all images shared a consistent intensity scale for further analysis (Pascucci et al., 2022).
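A minimal sketch of the bias-correction and intensity-normalization steps is given below. It uses SimpleITK as a stand-in for the FSL/ANTs commands named above (an assumption, since the exact commands are not reported), and the function name is illustrative.

```python
import numpy as np
import SimpleITK as sitk

def n4_correct_and_normalize(in_path: str, out_path: str) -> None:
    """Apply N4 bias field correction, then min-max scale intensities to [0, 1]."""
    img = sitk.Cast(sitk.ReadImage(in_path), sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(img, 0, 1, 200)          # rough foreground (brain) mask
    corrected = sitk.N4BiasFieldCorrection(img, mask)  # remove low-frequency bias field

    arr = sitk.GetArrayFromImage(corrected)
    arr = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)  # min-max normalization

    out = sitk.GetImageFromArray(arr)
    out.CopyInformation(corrected)                     # keep spacing, origin, direction
    sitk.WriteImage(out, out_path)
```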

2.3 Image Enhancement & AI-Based Processing

After preprocessing, contrast enhancement techniques were applied to improve structural visibility in the MRI images. Standard histogram equalization and Contrast Limited Adaptive Histogram Equalization (CLAHE) were used to sharpen grey-white matter boundaries. A voxel-wise, AI-based intensity mapping method was then applied: voxel intensities were color-coded to generate a heatmap of spatial variation. To obtain an intensity-scale map, gradient-based colormaps were applied to the grayscale images, producing distinct and vivid results. These enhanced images had greater contrast and were optimized for downstream phenotyping and structural variability analysis (Liu et al., 2023; Singh et al., 2018).
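A minimal per-slice sketch of the CLAHE and colormap steps follows, using OpenCV and matplotlib; the clip limit, tile size, and colormap name are illustrative choices rather than parameters reported in the text.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

def enhance_slice(slice_2d: np.ndarray):
    """Return a CLAHE-enhanced slice and a color-coded intensity heatmap."""
    # Rescale to 8-bit, since OpenCV's CLAHE expects uint8/uint16 input.
    u8 = cv2.normalize(slice_2d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(u8)

    # Voxel-wise color mapping: a gradient colormap over the normalized intensities.
    heatmap = plt.get_cmap("inferno")(enhanced / 255.0)  # RGBA array in [0, 1]
    return enhanced, heatmap
```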

2.4 Region of Interest (ROI) or Voxel Analysis

After image processing, voxel-wise analysis was carried out in each region to quantify structural variation in the brain. Regions of interest (ROIs) were defined using atlas-based automatic segmentation, such as the MNI or Harvard-Oxford brain atlases (Makris et al., 2023; Rushmore et al., 2021). Voxel intensity values were extracted from each ROI, and statistical parameters such as the mean, median, and standard deviation were derived to characterize regional intensity. Values were computed per image (A, A1, A2, etc.) to enable direct comparison of the same slice across processing stages. The resulting dataset was used for subsequent statistical testing.
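A minimal sketch of atlas-based ROI statistics with nilearn and the Harvard-Oxford cortical atlas is shown below; the atlas variant and the helper function are illustrative, while the per-ROI statistics mirror those described above.

```python
import numpy as np
import nibabel as nib
from nilearn import datasets, image

# Harvard-Oxford cortical atlas (one of the atlases named in the text).
atlas = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-1mm")
atlas_img = nib.load(atlas.maps) if isinstance(atlas.maps, str) else atlas.maps

def roi_statistics(img_path: str) -> dict:
    """Mean, median, and SD of voxel intensities inside each atlas ROI."""
    img = nib.load(img_path)
    # Resample the atlas onto the subject grid; nearest-neighbour preserves labels.
    labels = image.resample_to_img(atlas_img, img, interpolation="nearest").get_fdata()
    data = img.get_fdata()

    stats = {}
    for idx, name in enumerate(atlas.labels[1:], start=1):  # label 0 is background
        voxels = data[labels == idx]
        if voxels.size:
            stats[name] = (voxels.mean(), np.median(voxels), voxels.std())
    return stats
```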

2.5 Visualization and Reporting

The Nilearn and matplotlib libraries were used to visualize the processed images in 3D and 2D, respectively, and ITK-SNAP was used to generate ROI-based overlays and segmentation maps. Voxel-wise heatmaps and intensity-distribution plots were created to reveal variations across cortical and subcortical regions. Statistical results were reported as structured ANOVA tables, and figures and legends were labelled clearly to support interpretation and reproducibility.
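A minimal sketch of this reporting step, producing orthogonal anatomical views and a voxel-intensity histogram with nilearn and matplotlib, is given below; the function name and output filenames are illustrative.

```python
import matplotlib.pyplot as plt
import nibabel as nib
from nilearn import plotting

def report_views(img_path: str, fig_prefix: str) -> None:
    """Save orthogonal anatomical views and a voxel-intensity histogram."""
    img = nib.load(img_path)

    # Axial, coronal, and sagittal cuts in a single figure.
    display = plotting.plot_anat(img, display_mode="ortho", title=fig_prefix)
    display.savefig(f"{fig_prefix}_ortho.png")
    display.close()

    # Intensity distribution across non-zero (brain) voxels.
    data = img.get_fdata()
    plt.figure()
    plt.hist(data[data > 0].ravel(), bins=100)
    plt.xlabel("Voxel intensity")
    plt.ylabel("Voxel count")
    plt.savefig(f"{fig_prefix}_hist.png")
    plt.close()
```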

2.6 Statistical Analysis

One-way ANOVA tests were performed for each anatomical plane in isolation (axial, coronal, sagittal) to detect overall differences in voxel intensity distributions across the image conditions. Mean voxel intensity values from either the full-brain (FB) region or ROI-specific regions served as the input metric. The threshold for statistical significance was set at p < 0.05. All analyses were performed with Python-based toolkits, using the SciPy and StatsModels libraries.
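A minimal sketch of this per-plane test with scipy.stats.f_oneway follows; the numeric values are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def plane_anova(original: np.ndarray, enhanced: np.ndarray, ai_processed: np.ndarray):
    """One-way ANOVA over mean ROI intensities for one anatomical plane."""
    f_stat, p_value = stats.f_oneway(original, enhanced, ai_processed)
    return f_stat, p_value

# Illustrative mean ROI intensities (five ROIs per condition), not the study's data.
axial_original = np.array([0.31, 0.28, 0.35, 0.30, 0.33])
axial_enhanced = np.array([0.44, 0.41, 0.47, 0.43, 0.45])
axial_ai       = np.array([0.58, 0.55, 0.61, 0.57, 0.59])

f_stat, p_value = plane_anova(axial_original, axial_enhanced, axial_ai)
print(f"F(2,12) = {f_stat:.2f}, p = {p_value:.2e}")  # significance threshold: p < 0.05
```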

3. Results

3.1 AI-Augmented T2 Imaging Uncovers Cortical Precision

Figure 1 presents the visual outcome of structural brain imaging in axial, coronal, and sagittal views, comparing original T2-weighted examples with AI-processed scans. The original axial image (Figure 1A) shows a structurally normal brain with well-defined sulci and gyri consistent with the adult norm. After intensity normalization and contrast adjustment (Figure 1A1), followed by voxel-wise color mapping (Figure 1A2), the image shows better visual separation of white matter, grey matter, and cerebrospinal fluid. The cortex displays a clearer layered architecture than in the original grayscale image, where this is not easily seen.

Similar enhancements were observed in the coronal and sagittal sections (Figures 1B and 1C). The coronal image depicts a general view of the most medial structures in isolation but with little to no resolution of the subcortical areas (Figure 1B). After processing, voxel-level asymmetries become visible, especially in the medial temporal lobe, potentially representing subject-specific morphological differences. The sagittal section also shows considerable improvement (Figure 1C): the corpus callosum, brainstem, and cerebellum borders are evident in Figures 1C1 and 1C2. The processed sagittal view indicates an anterior-posterior intensity gradient, not clearly seen in the original image, that may hint at structural inhomogeneity along the longitudinal axis.

3.2 Improved Structural Contrast in T1-Weighted Brain Images

Figure 2 shows axial, coronal, and sagittal T1-weighted brain scans alongside their contrast-enhanced and AI-processed versions. The original axial image shows a symmetric, structurally normal cortex, particularly in the frontal and occipital regions. The enhanced image (Figure 2A1) improves grey-white matter differentiation, and the color mapping (Figure 2A2) reveals the distribution of cortical intensity, offering insight into density and regional variation not available from the unprocessed grayscale image.

The coronal section (Figures 2B-2B2) appears more anatomically detailed. Figure 2B2 shows better definition of the lateral and medial ventricles, along with variations in voxel intensity in the surrounding cortical structures. This asymmetry, which is absent from the unprocessed images, may indicate subject-specific structural variation or a subtle neurodevelopmental difference, providing insight into patterns beyond gross anatomy.

Improvements were also observed in the delineation of deep structures in the sagittal plane (Figures 2C-2C2). In Figure 2C, the corpus callosum and midline fissure are clearly identified; Figures 2C1 and 2C2 enhance the contrast and reveal a pronounced anterior-posterior intensity gradient. In particular, the brainstem and cerebellum show greater voxel-level separation and are therefore more readily segmentable. These results indicate that AI-based processing not only enhances anatomical detail in T1w images but also delineates subtle regional differences useful for future phenotyping and morphometry.

3.3 Voxel-Level Mapping Reveals Deep Brain Variation

In Figure 3, the axial, coronal, and sagittal T2-weighted brain scans A, B, and C are the raw versions; A1, B1, and C1 are the contrast-enhanced versions; and A2, B2, and C2 are the AI-processed versions. The lateral ventricles define a clear envelope in the axial view (Figure 3A). The enhanced image (Figure 3A1) improves tissue visibility, particularly at the grey-white matter boundary. The symmetrical intensities in both hemispheres and the refined visualization of cortical density through voxel-wise AI color mapping (Figure 3A2) show a clear improvement in cortical structure assessment over Figure 3A1.

The coronal images (Figures 3B-3B2) show increased anatomical clarity after processing. The AI-enhanced heatmap (Figure 3B2) demonstrates non-negligible intensity variations around the ventricular and hypothalamic regions that are visually lost in the grayscale view; this variation might reflect subject-specific differences or general neuroanatomical variability. In the sagittal series (Figures 3C-3C2), contrast and segmentation quality improve considerably. The AI-processed sagittal map (Figure 3C2) reveals distinct anterior and posterior cortical folds, and the voxel-level contrast within the cerebellum and brainstem is also clear.

3.4 Voxel-Wise Intensity Shifts Reveal Impact of MRI Enhancement

A one-way ANOVA test was executed to determine whether voxel intensity values from the original (A, B, C), contrast-enhanced (A1, B1, C1) and AI-processed (A2, B2, C2) brain images in axial, coronal and sagittal views varied significantly. There was a significant difference in all comparison sets (Table 1).

All three views showed a strong effect of processing: axial F(2,12) = 41.56 (p = 4.03×10⁻⁶), coronal F(2,12) = 50.72 (p = 1.40×10⁻⁶), and sagittal F(2,12) = 57.76 (p = 6.95×10⁻⁷). Across the individual coronal comparisons, F-values between 19.91 and 82.11 and p-values as low as 9.97×10⁻⁸ were noted. The sagittal comparisons were equally significant (F = 40.81-58.12), indicating that the structural differentiation achieved by image enhancement and AI mapping is consistent across all orientations.
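For orientation, the reported degrees of freedom are consistent with a three-condition comparison over five ROI-level observations per condition (an inference from the statistics rather than an explicit statement in the text): with k = 3 groups and N = 15 observations in total, df_between = k − 1 = 2 and df_within = N − k = 12, which matches the F(2,12) values reported for all three planes.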

4. Discussion

This study investigates the impact of artificial intelligence (AI) on improving magnetic resonance images (MRI) at the voxel level for brain phenotyping, based on an analysis of structural contrast variation across multiplanar T2-weighted brain images. Using voxel-wise intensity transformation, the study shows the potential of AI-based contrast enhancement for advancing neuroanatomical interpretation and capturing subtle phenotypic variability that often remains unnoticed in raw grayscale images (Chen et al., 2021; Lerch et al., 2017; Tufael et al., 2021).

The systematic enhancement of the axial, coronal, and sagittal planes used in this work showed a noticeable sharpening of anatomical boundaries along with a measurable change in voxel intensity after AI processing. For instance, enhancement revealed a clearly visible anterior-posterior gradient that was only apparent in the sagittal view. This finding accords with earlier work showing that longitudinal contrast variation in structural MRI is developmentally significant (Aubert-Broche et al., 2013; Giedd & Rapoport, 2010). The processed images also made deep brain structures, especially the corpus callosum, brainstem, and medial temporal regions, more clearly visible, structures that are often key to neurological phenotyping (Saifullah et al., 2024; Thirion et al., 2021).

The voxel-wise one-way ANOVA results show that original and processed images differ significantly in every plane: for example, in the axial view, F(2,12) = 41.56, p < 0.00001, with comparable findings in the coronal and sagittal planes. Most previous work has examined visual improvements without statistically verifying whether they were morphologically meaningful (Bauer et al., 2025; Chen et al., 2021). Van Horn et al. (1998) showed that improvements in visual contrast correspond to statistically verifiable divergence in voxel-wise patterns.

Notably, the proposed AI-augmented processing pipeline does not rely on black-box modelling or deep learning segmentation; instead, it uses explainable voxel-wise contrast transformations. As Bacon et al. (2024) and Shaheema et al. (2025) note, this adds interpretability, an important requirement for medical AI. The findings demonstrate that voxel-level changes introduced by intensity normalization and histogram equalization reveal voxel-level anatomical structure and yield anatomically faithful images, unlike generative AI or over-smoothed preprocessing models (Hoisak & Jaffray, 2011; Khalifa & Albadawy, 2024; Malczewski, 2025; Tufael et al., 2022).

Brain phenotyping is one of the most noteworthy applications of these results. Identifying spatially ordered intensity differences can help delineate neuro-phenotypes based on intensity gradients. Previous studies in the area mainly exploit longitudinal or multimodal datasets, but this work demonstrates that, even within a single imaging modality (T2-weighted MRI), contrast-based transformations can reveal meaningful inter-individual differences at the voxel level (Lee et al., 2023; Romano et al., 2014; Xu et al., 2023; Amin et al., 2025). The relationships observed here could feed predictive models of cognitive traits, neurodevelopmental status, or early pathology, domains in which phenotype extraction from structural features remains underdeveloped.

The evidence is strengthened by the multiplanar assessment of contrast in the axial, coronal, and sagittal planes. As Abdellatif et al. (2022) and Bowness et al. (2024) note, analyses that focus on a single plane may overlook variability across regions. Our results are consistent across planes: AI enhancement improves anatomical interpretation in all orientations, indicating the generality of the proposed approach (Abdellatif et al., 2022; Bowness et al., 2024).

Moreover, the multiplanar consistency validates the robustness of the pipeline and extends its scope to generalized neuroimaging investigations. Statistical analyses of voxel-level intensity distributions from original and processed images showed highly significant divergence not only in mean values but also in standard deviation and spread, indicating a more sensitive response to intra-regional contrast variations. Work associated with the Human Connectome Project has likewise found voxel-wise measures more informative than gross regional averages (Aubert-Broche et al., 2013; Khalifa & Albadawy, 2024; Qian et al., 2010). Our data support the view that carefully adjusted contrast transformation enables recognition of microstructural attributes and asymmetries important for detailed phenotypic classification.

Moreover, the observed anterior-posterior gradient in the sagittal plane may have neurodevelopmental significance. Increased visibility of regional differences in brain maturation rates could serve as a potential biomarker of age-related mechanisms, as indicated by developmental MRI studies (Marsh et al., 2008; Namburete et al., 2023; Amin et al., 2025). Our findings may also have implications for predictive analytics aimed at detecting atypical development or neurodegeneration earlier.

Although several earlier works have applied AI to segmentation or artifact removal (e.g., FreeSurfer, SPM), they do not preserve the interpretability of intensity-level contrast. Our method boosts contrast without losing the identity of individual voxels, which not only improves the images but also allows each improvement to be validated numerically, providing both visual and statistical confidence. According to Khadhraoui et al. (2024) and Saifullah et al. (2024), such hybrid validation can boost interpretability, reproducibility, and quantifiability in clinical imaging workflows, supporting translational relevance. The contrast maps produced by the current pipeline could be useful in future deep learning workflows for brain-phenotype classification, clustering of neurological diseases, or brain-age prediction (Khalifa & Albadawy, 2024; Klauschen et al., 2009; Tufael et al., 2023).

By enhancing voxel-wise anatomical variation through contrast transformation, the augmented images offer richer structural information than the unprocessed raw scans. We expect better convergence and interpretability when such augmented inputs are used in end-to-end neural network models. Vector representations derived from these contrast-enhanced voxel data can potentially be more informative for downstream machine learning tasks than traditional raw-intensity inputs.

To sum up, we propose a validated framework that starts from interpretable image enhancement and ends with statistically supported phenotyping insights. The combination of voxel-wise contrast mapping and multiplanar statistical validation can become the gold standard for preprocessing pipelines. The findings not only confirm past research but go further, demonstrating that even modest image transformations can uncover hidden structural information about the brain when applied with AI and validated statistically.

5. Limitation

Although the methodology of this study is robust, it has several limitations. The data were derived entirely from a neurotypical population, potentially limiting generalization to neurological and psychiatric disorders. Although the voxel-wise statistics provide good evidence for contrast-based anatomical differences, the lack of expert-validated anatomical ground truth makes definitive accuracy verification impossible. Furthermore, the AI enhancement pipeline was developed for T2-weighted MRI data and has yet to be tested on comparable T1-weighted or diffusion-weighted data. Finally, the study's cross-sectional design offers no insight into longitudinal or developmental changes in brain morphology, which are essential for understanding dynamic neuro-phenotypic trajectories.

 

6. Conclusion

AI-enhanced contrast mapping improves voxel-wise visibility of anatomical features in structural brain MRI. The processed images showed statistically significant differences in intensity distribution across the axial, coronal, and sagittal planes, revealing neuro-phenotypic variation. By joining interpretable AI with statistical validation, the method not only boosts visual clarity but also provides quantitative support for neuroimaging. The results open the possibility of using contrast-based voxel analysis within general neuroinformatics pipelines and will support future efforts at disease characterization, brain-age modelling, and phenotype-based classification in clinical and developmental neuroscience.

Author Contributions

M.U.P. Conceptualization, methodology, supervision, data curation, manuscript writing original draft. T.H.T. Data preprocessing, validation, visualization, manuscript editing. S.A.M. MRI data acquisition support, preprocessing, statistical analysis. M.M.G. AI model integration, voxel-wise mapping, software implementation. E.A.S. Contrast enhancement pipeline development, quality control, figure preparation. A.D. ROI extraction, ANOVA analysis, result interpretation. R.K.S. Literature review, dataset organization, data interpretation. S.S.S. Image normalization workflow, documentation, manuscript proofreading. A.A.M. Statistical data checking, table generation, visualization support. J. General assistance, proofreading, technical support in preprocessing steps.

 

 

References


Abdellatif, H., Al Mushaiqri, M., Albalushi, H., Al-Zaabi, A. A., Roychoudhury, S., & Das, S. (2022). Teaching, Learning and Assessing Anatomy with Artificial Intelligence: The Road to a Better Future. International Journal of Environmental Research and Public Health, 19(21), 14209. https://doi.org/10.3390/ijerph192114209

Agripnidis, T., Ayobi, A., Quenet, S., Chaibi, Y., Avare, C., Jacquier, A., Girard, N., Hak, J.-F., Reyre, A., Brun, G., & El Ahmadi, A.-A. (2025). Performance of an artificial intelligence tool for multi-step acute stroke imaging: A multicenter diagnostic study. European Journal of Radiology Open, 15, 100678. https://doi.org/10.1016/j.ejro.2025.100678

Amin, M. S., Rahman, A., & Rashid, M. J. (2025). AI in precision oncology: Revolutionizing early diagnosis, imaging and targeted therapies. Paradise, 1(1), 1-10, 10340. https://doi.org/10.25163/paradise.1110340

Amin, M. S., & Rahman, A. (2025). Integrative approaches of AI in personalised disease management: From diagnosis to drug delivery. Paradise, 1(1), 1-10, 10339. https://doi.org/10.25163/paradise.1110339

Aubert-Broche, B., Fonov, V. S., García-Lorenzo, D., Mouiha, A., Guizard, N., Coupé, P., Eskildsen, S. F., & Collins, D. L. (2013). A new method for structural volume analysis of longitudinal brain MRI data and its application in studying the growth trajectories of anatomical brain structures in childhood. NeuroImage, 82, 393–402. https://doi.org/10.1016/j.neuroimage.2013.05.065

Bacon, E. J., He, D., Achi, N. A. D., Wang, L., Li, H., Yao-Digba, P. D. Z., Monkam, P., & Qi, S. (2024). Neuroimage analysis using artificial intelligence approaches: a systematic review. Medical & Biological Engineering & Computing, 62(9), 2599–2627. https://doi.org/10.1007/s11517-024-03097-w

Bauer, E., Greiff, S., Graesser, A. C., Scheiter, K., & Sailer, M. (2025). Looking Beyond the Hype: Understanding the Effects of AI on Learning. Educational Psychology Review, 37(2), 45. https://doi.org/10.1007/s10648-025-10020-8

Bowness, J. S., Morse, R., Lewis, O., Lloyd, J., Burckett-St Laurent, D., Bellew, B., Macfarlane, A. J. R., Pawa, A., Taylor, A., Noble, J. A., & Higham, H. (2024). Variability between human experts and artificial intelligence in identification of anatomical structures by ultrasound in regional anaesthesia: a framework for evaluation of assistive artificial intelligence. British Journal of Anaesthesia, 132(5), 1063–1072. https://doi.org/10.1016/j.bja.2023.09.023

McKay, C. C., Scheinberg, B., Xu, E. P., Kircanski, K., Pine, D. S., Brotman, M. A., Leibenluft, E., & Linke, J. O. (2024). Emotion and Development Branch Phenotyping and DTI (2012-2017) [Dataset]. OpenNeuro. https://doi.org/10.18112/openneuro.ds004605.v1.0.1

Chen, X., Zhang, X., Xie, H., Tao, X., Wang, F. L., Xie, N., & Hao, T. (2021). A bibliometric and visual analysis of artificial intelligence technologies-enhanced brain MRI research. Multimedia Tools and Applications, 80(11), 17335–17363. https://doi.org/10.1007/s11042-020-09062-7

Giedd, J. N., & Rapoport, J. L. (2010). Structural MRI of Pediatric Brain Development: What Have We Learned and Where Are We Going? Neuron, 67(5), 728–734. https://doi.org/10.1016/j.neuron.2010.08.040

Hoisak, J. D. P., & Jaffray, D. A. (2011). A method for assessing voxel correspondence in longitudinal tumor imaging. Medical Physics, 38(5), 2742–2753. https://doi.org/10.1118/1.3578600

Khadhraoui, E., Nickl-Jockschat, T., Henkes, H., Behme, D., & Müller, S. J. (2024). Automated brain segmentation and volumetry in dementia diagnostics: a narrative review with emphasis on FreeSurfer. Frontiers in Aging Neuroscience, 16. https://doi.org/10.3389/fnagi.2024.1459652

Khalifa, M., & Albadawy, M. (2024). AI in diagnostic imaging: Revolutionising accuracy and efficiency. Computer Methods and Programs in Biomedicine Update, 5, 100146. https://doi.org/10.1016/j.cmpbup.2024.100146

Kim, N.-H., Yang, B.-E., Kang, S.-H., Kim, Y.-H., Na, J.-Y., Kim, J.-E., & Byun, S.-H. (2024). Preclinical and Preliminary Evaluation of Perceived Image Quality of AI-Processed Low-Dose CBCT Analysis of a Single Tooth. Bioengineering, 11(6), 576. https://doi.org/10.3390/bioengineering11060576

Klauschen, F., Goldman, A., Barra, V., Meyer-Lindenberg, A., & Lundervold, A. (2009). Evaluation of automated brain MR image segmentation and volumetry methods. Human Brain Mapping, 30(4), 1310–1327. https://doi.org/10.1002/hbm.20599

Lee, K., Ji, J. L., Fonteneau, C., Berkovitch, L., Rahmati, M., Pan, L., Repovš, G., Krystal, J. H., Murray, J. D., & Anticevic, A. (2023). Human brain state dynamics reflect individual neuro-phenotypes. https://doi.org/10.1101/2023.09.18.557763

Lerch, J. P., van der Kouwe, A. J. W., Raznahan, A., Paus, T., Johansen-Berg, H., Miller, K. L., Smith, S. M., Fischl, B., & Sotiropoulos, S. N. (2017). Studying neuroanatomy using MRI. Nature Neuroscience, 20(3), 314–326. https://doi.org/10.1038/nn.4501

Liu, Z., Yao, B., Wen, J., Wang, M., Ren, Y., Chen, Y., Hu, Z., Li, Y., Liang, D., Liu, X., Zheng, H., Luo, D., & Zhang, N. (2023). Voxel-wise mapping of DCE-MRI time-intensity-curve profiles enables visualizing and quantifying hemodynamic heterogeneity in breast lesions. European Radiology, 34(1), 182–192. https://doi.org/10.1007/s00330-023-10102-7

Makris, N., Rushmore, R., Kaiser, J., Albaugh, M., Kubicki, M., Rathi, Y., Zhang, F., O’Donnell, L. J., Yeterian, E., Caviness, V. S., & Kennedy, D. N. (2023). A Proposed Human Structural Brain Connectivity Matrix in the Center for Morphometric Analysis Harvard-Oxford Atlas Framework: A Historical Perspective and Future Direction for Enhancing the Precision of Human Structural Connectivity with a Novel Neuroanatomical Typology. Developmental Neuroscience, 45(4), 161–180. https://doi.org/10.1159/000530358

Malczewski, K. (2025). Multimodal Sparse Reconstruction and Deep Generative Networks: A Paradigm Shift in MR-PET Neuroimaging. Applied Sciences, 15(15), 8744. https://doi.org/10.3390/app15158744

Marsh, R., Gerber, A. J., & Peterson, B. S. (2008). Neuroimaging Studies of Normal Brain Development and Their Relevance for Understanding Childhood Neuropsychiatric Disorders. Journal of the American Academy of Child & Adolescent Psychiatry, 47(11), 1233–1251. https://doi.org/10.1097/CHI.0b013e318185e703

Namburete, A. I. L., Papiez, B. W., Fernandes, M., Wyburd, M. K., Hesse, L. S., Moser, F. A., Ismail, L. C., Gunier, R. B., Squier, W., Ohuma, E. O., Carvalho, M., Jaffer, Y., Gravett, M., Wu, Q., Lambert, A., Winsey, A., Restrepo-Méndez, M. C., Bertino, E., Purwar, M., … Kennedy, S. H. (2023). Normative spatiotemporal fetal brain maturation with satisfactory development at 2 years. Nature, 623(7985), 106–114. https://doi.org/10.1038/s41586-023-06630-3

Neier, K., Grant, T. E., Palmer, R. L., Chappell, D., Hakam, S. M., Yasui, K. M., Rolston, M., Settles, M. L., Hunter, S. S., Madany, A., Ashwood, P., Durbin-Johnson, B., LaSalle, J. M., & Yasui, D. H. (2021). Sex disparate gut microbiome and metabolome perturbations precede disease progression in a mouse model of Rett syndrome. Communications Biology, 4(1), 1408. https://doi.org/10.1038/s42003-021-02915-3

Norbom, L. B., Rokicki, J., Alnæs, D., Kaufmann, T., Doan, N. T., Andreassen, O. A., Westlye, L. T., & Tamnes, C. K. (2020). Maturation of cortical microstructure and cognitive development in childhood and adolescence: A T1w/T2w ratio MRI study. Human Brain Mapping, 41(16), 4676–4690. https://doi.org/10.1002/hbm.25149

Pascucci, D., Tourbier, S., Rué-Queralt, J., Carboni, M., Hagmann, P., & Plomp, G. (2022). Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes. Scientific Data, 9(1), 9. https://doi.org/10.1038/s41597-021-01116-1

Qian, G., Khoury, T. A., Peng, M. W., & Qian, Z. (2010). The performance implications of intra- and inter-regional geographic diversification. Strategic Management Journal, 31(9), 1018–1030. https://doi.org/10.1002/smj.855

Romano, D., Nicolau, M., Quintin, E., Mazaika, P. K., Lightbody, A. A., Cody Hazlett, H., Piven, J., Carlsson, G., & Reiss, A. L. (2014). Topological methods reveal high and low functioning neuro-phenotypes within fragile X syndrome. Human Brain Mapping, 35(9), 4904–4915. https://doi.org/10.1002/hbm.22521

Rushmore, R. J., Bouix, S., Kubicki, M., Rathi, Y., Rosene, D. L., Yeterian, E. H., & Makris, N. (2021). MRI-based Parcellation and Morphometry of the Individual Rhesus Monkey Brain: the macaque Harvard-Oxford Atlas (mHOA), a translational system referencing a standardized ontology. Brain Imaging and Behavior, 15(3), 1589–1621. https://doi.org/10.1007/s11682-020-00357-9

Saifullah, S., Pranolo, A., & Drezewski, R. (2024). Comparative analysis of image enhancement techniques for brain tumor segmentation: contrast, histogram, and hybrid approaches. E3S Web of Conferences, 501, 01020. https://doi.org/10.1051/e3sconf/202450101020

Shaheema, B., Muppalaneni, N. B., & Devi, K. S. (2025). An explainable deep learning-based panoptic segmentation for brain tumor diagnosis. Neural Computing and Applications, 37(25), 20639–20662. https://doi.org/10.1007/s00521-025-11459-0

Singh, M., Verma, A., & Sharma, N. (2018). An Optimized Cascaded Stochastic Resonance for the Enhancement of Brain MRI. IRBM, 39(5), 334–342. https://doi.org/10.1016/j.irbm.2018.08.002

Sun, Z., & Ng, C. K. C. (2022). Finetuned Super-Resolution Generative Adversarial Network (Artificial Intelligence) Model for Calcium Deblooming in Coronary Computed Tomography Angiography. Journal of Personalized Medicine, 12(9), 1354. https://doi.org/10.3390/jpm12091354

Thirion, B., Thual, A., & Pinho, A. L. (2021). From deep brain phenotyping to functional atlasing. Current Opinion in Behavioral Sciences, 40, 201–212. https://doi.org/10.1016/j.cobeha.2021.05.004

Tufael, Sunny, A. R., et al. (2023). Artificial intelligence in addressing cost, efficiency, and access challenges in healthcare. 4(1), 1-5, 9798. https://doi.org/10.25163/primeasia.419798

Tufael, & Sunny, A. R. (2022). Transforming healthcare with artificial intelligence: Innovations, applications, and future challenges. Journal of Primeasia, 3(1), 1-6, 9802. https://doi.org/10.25163/primeasia.319802

Tufael, & Sunny, A. R. (2021). Artificial intelligence in healthcare: A review of diagnostic applications and impact on clinical practice. Journal of Primeasia, 2(1), 1-5, 9816. https://doi.org/10.25163/primeasia.219816

Van Horn, J. D., Ellmore, T. M., Esposito, G., & Berman, K. F. (1998). Mapping Voxel-Based Statistical Power on Parametric Images. NeuroImage, 7(2), 97–107. https://doi.org/10.1006/nimg.1997.0317

Xu, M., Wang, X., Wang, L., Wang, S., Deng, J., Wang, Y., Li, Y., Pan, S., Liao, A., Tao, Y., & Tan, S. (2023). Effects of chronic sleep restriction on the neuro-phenotypes of Ctnnd2 knockout mice. Brain and Behavior, 13(7). https://doi.org/10.1002/brb3.3075

Zhang, S., Jiang, L., Hu, Z., Liu, W., Yu, H., Chu, Y., Wang, J., & Chen, Y. (2024). T1w/T2w ratio maps identify children with autism spectrum disorder and the relationships between myelin-related changes and symptoms. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 134, 111040. https://doi.org/10.1016/j.pnpbp.2024.111040

