Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 58, No. 2, pp. 549-559
Main Authors: Hu, Shiyan; Coupé, Pierrick; Pruessner, Jens C.; Collins, D. Louis
Format: Journal Article
Language: English
Published: United States: Elsevier Inc., 15.09.2011
ISSN: 1053-8119, 1095-9572
DOI: 10.1016/j.neuroimage.2011.06.054

Summary: A new automatic model-based segmentation scheme that combines level set shape modeling and active appearance modeling (AAM) is presented. Since different MR image contrasts can yield complementary information, multi-contrast images can be incorporated into the active appearance modeling to improve segmentation performance. During active appearance modeling, the weighting of each contrast is optimized to account for the potentially varying contribution of each image, while the model parameters corresponding to the shape and appearance eigen-images are optimized to minimize the difference between the multi-contrast test images and the images synthesized from the shape and appearance model. Because appearance-based modeling techniques depend on the initial alignment of the training data, we compare (i) linear alignment of the whole brain, (ii) linear alignment of a local volume of interest, and (iii) non-linear alignment of a local volume of interest. The proposed scheme is used to segment the human hippocampi (HC) and amygdalae (AG), which have weak intensity contrast with their background in MRI. The experiments demonstrate that non-linear alignment of the training data yields the best results, and that multimodal segmentation using T1-weighted, T2-weighted and proton density-weighted images yields better segmentation results than any single contrast. In a four-fold cross-validation with eighty young normal subjects, the method yields a mean Dice κ of 0.87 with an intraclass correlation coefficient (ICC) of 0.946 for HC, and a mean Dice κ of 0.81 with an ICC of 0.924 for AG, between manual and automatic labels.

Highlights:
► Combines level set shape modeling and appearance modeling to constrain shape variation while exploiting the intensity information of MR data for automatic segmentation.
► Incorporates multi-contrast images, i.e. T1, T2 and PD MR images, into the segmentation procedure.
► Optimizes the contribution of each modality image during the segmentation.
► Explores the effect of the quality of the registration methods on segmentation accuracy.
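To make the two quantitative elements of the summary concrete, the sketch below (plain NumPy, not the authors' implementation; the function names, example weights and toy arrays are assumptions for illustration) shows (a) a weighted multi-contrast residual of the kind minimized when matching the appearance model to T1w/T2w/PDw test images, and (b) the Dice κ overlap used to compare manual and automatic labels.

# Illustrative sketch only; all names and values below are assumed, not taken from the paper.
import numpy as np

def weighted_residual(test_images, synthesized_images, weights):
    # Sum over contrasts c of w_c * ||I_c - S_c||^2, where I_c is the test image
    # for contrast c (e.g. T1w, T2w, PDw) and S_c is the image synthesized from
    # the current shape/appearance model parameters.
    return sum(w * float(np.sum((I - S) ** 2))
               for w, I, S in zip(weights, test_images, synthesized_images))

def dice_kappa(manual, automatic):
    # Dice kappa = 2|A ∩ B| / (|A| + |B|) for two binary label volumes.
    manual = manual.astype(bool)
    automatic = automatic.astype(bool)
    denom = manual.sum() + automatic.sum()
    return 2.0 * np.logical_and(manual, automatic).sum() / denom if denom else 1.0

# Toy example; real use would operate on registered 3-D MR volumes and label maps.
rng = np.random.default_rng(0)
test = [rng.random((8, 8, 8)) for _ in range(3)]                 # T1w, T2w, PDw
synth = [im + 0.05 * rng.standard_normal(im.shape) for im in test]
print("weighted residual:", weighted_residual(test, synth, [0.5, 0.3, 0.2]))

a = np.zeros((10, 10, 10), dtype=bool); a[2:7, 2:7, 2:7] = True   # "manual" label
b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 3:8, 3:8] = True   # "automatic" label
print("Dice kappa:", round(dice_kappa(a, b), 3))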