A reproducible evaluation of ANTs similarity metric performance in brain image registration

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 54, No. 3, pp. 2033-2044
Main Authors: Avants, Brian B.; Tustison, Nicholas J.; Song, Gang; Cook, Philip A.; Klein, Arno; Gee, James C.
Format: Journal Article
Language: English
Published: Elsevier Inc., United States, 01.02.2011

Summary: The United States National Institutes of Health (NIH) commit significant support to open-source data and software resources in order to foment reproducibility in the biomedical imaging sciences. Here, we report and evaluate a recent product of this commitment: Advanced Neuroimaging Tools (ANTs), which is approaching its 2.0 release. The ANTs open source software library consists of a suite of state-of-the-art image registration, segmentation and template building tools for quantitative morphometric analysis. In this work, we use ANTs to quantify, for the first time, the impact of similarity metrics on the affine and deformable components of a template-based normalization study. We detail the ANTs implementation of three similarity metrics: squared intensity difference, a new and faster cross-correlation, and voxel-wise mutual information. We then use two-fold cross-validation to compare their performance on openly available, manually labeled, T1-weighted MRI brain image data of 40 subjects (UCLA's LPBA40 dataset). We report evaluation results on cortical and whole brain labels for both the affine and deformable components of the registration. Results indicate that the best ANTs methods are competitive with existing brain extraction results (Jaccard=0.958) and cortical labeling approaches. Mutual information affine mapping combined with cross-correlation diffeomorphic mapping gave the best cortical labeling results (Jaccard=0.669±0.022). Furthermore, our two-fold cross-validation allows us to quantify the similarity of templates derived from different subgroups. Our open code, data and evaluation scripts set performance benchmark parameters for this state-of-the-art toolkit. This is the first study to use a consistent transformation framework to provide a reproducible evaluation of the isolated effect of the similarity metric on optimal template construction and brain labeling.

Research Highlights:
► A new, fast implementation of the cross-correlation that increases computational efficiency by a factor of 4 to 5 and allows larger correlation windows to be used for registration without excessive increase in computation time.
► Open-source implementation of the mutual information for symmetric diffeomorphic registration.
► A reproducible system for performance evaluation of the mean squares metric, cross-correlation metric and mutual information metric on optimal template-based brain extraction and regional brain labeling. The full evaluation system is documented in a bash script that is also released and available. The script is also being translated to Python.
► Quantification of the similarity between optimal templates derived from different population subsets and with different similarity metrics.
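For orientation, the statistic underlying the cross-correlation metric discussed above can be written down directly. The sketch below is a naive NumPy reference computation of the standard windowed normalized cross-correlation, not the paper's accelerated implementation; the function name and the fixed window radius are illustrative choices only.

```python
import numpy as np

def windowed_cc(fixed, moving, radius=2):
    """Naive windowed normalized cross-correlation between two images.

    For each interior voxel, intensities in a (2*radius+1)^d neighborhood are
    locally demeaned, the squared correlation
        <f, m>^2 / (<f, f> * <m, m>)
    is computed, and the mean over all valid windows is returned. This is the
    textbook statistic only; it does not reproduce the faster implementation
    described in the paper's highlights.
    """
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    assert fixed.shape == moving.shape, "images must share a grid"
    cc_values = []
    interior = [dim - 2 * radius for dim in fixed.shape]
    for corner in np.ndindex(*interior):
        window = tuple(slice(c, c + 2 * radius + 1) for c in corner)
        f = fixed[window] - fixed[window].mean()
        m = moving[window] - moving[window].mean()
        denom = (f * f).sum() * (m * m).sum()
        if denom > 0:
            cc_values.append((f * m).sum() ** 2 / denom)
    return float(np.mean(cc_values)) if cc_values else 0.0
```

A per-voxel loop like this scales poorly on full 3D volumes, which is exactly the motivation for the factor-of-4-to-5 speedup and larger correlation windows reported in the highlights.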
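The overlap figures quoted in the summary (e.g. Jaccard=0.958 for brain extraction) are the standard Jaccard index between a manual label and the label propagated by registration. A minimal sketch follows, assuming nibabel is available for reading NIfTI images; the file names are placeholders, not files distributed with the paper.

```python
import numpy as np
import nibabel as nib  # assumed NIfTI reader; any image I/O library would do

def jaccard(manual_path, propagated_path):
    """Jaccard overlap |A ∩ B| / |A ∪ B| between two binary label images."""
    a = nib.load(manual_path).get_fdata() > 0
    b = nib.load(propagated_path).get_fdata() > 0
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Hypothetical usage: compare a manually drawn brain mask with one warped
# from the optimal template into the subject's space.
# print(jaccard("manual_brain_mask.nii.gz", "template_mask_in_subject.nii.gz"))
```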
ISSN: 1053-8119, 1095-9572
DOI: 10.1016/j.neuroimage.2010.09.025