3D segmentation of the tongue in MRI: a minimally interactive model-based approach

Bibliographic Details
Published in: Computer methods in biomechanics and biomedical engineering, Vol. 3, No. 4, pp. 178-188
Main Authors: M. Harandi, Negar; Abugharbieh, Rafeef; Fels, Sidney
Format: Journal Article
Language: English
Published: Taylor & Francis, 02.10.2015

Summary: Static magnetic resonance imaging partially resolves soft tissue details of the oropharynx, which are crucial in swallowing and speech studies. However, delineation of tongue tissue remains a challenge due to the lack of definitive boundary features. In this article, we propose a minimally interactive inter-subject mesh-to-image registration scheme to tackle 3D segmentation of the human tongue from MRI volumes. A tongue surface mesh is first initialised from an exemplar expert-delineated template and then refined based on local intensity similarities between the source and target volumes. A shape-matching technique [Gilles B, Pai D. 2008. Fast musculoskeletal registration based on shape matching. Paper presented at: MICCAI 2008, Proceedings of the 11th International Conference on Medical Image Computing and Computer Assisted Intervention; New York, NY, USA] is applied to regularise the deformation. We enable effective minimal user interaction by incorporating additional boundary labels in areas where the automatic segmentation is deemed inadequate. We validate our method on 18 normal subjects using expert manual delineation as the ground truth. Results indicate an average segmentation accuracy of 90.4 ± 0.4% overlap and 2 ± 0.2 mm boundary distance, achieved within an expert interaction time of 2 ± 1 min per volume.
ISSN: 2168-1163, 2168-1171
DOI: 10.1080/21681163.2013.864958
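
Illustrative sketches (not from the article): the summary describes a shape-matching regulariser applied to the intensity-driven mesh deformation, and validation by overlap and distance against expert delineation. The Python/NumPy sketch below shows one common form of a shape-matching step (a Mueller-style polar-decomposition rigid fit, the family of regularisers that Gilles and Pai 2008 builds on); the function name and the stiffness parameter are hypothetical.

    import numpy as np

    def shape_match_step(rest, deformed, stiffness=0.5):
        # One shape-matching regularisation step: fit the best rigid
        # transform from the rest-shape vertices to the current
        # (intensity-driven) vertices, then pull each vertex toward the
        # rigidly transformed rest shape to suppress implausible drift.
        # rest, deformed: (N, 3) vertex arrays; stiffness in [0, 1].
        c0 = rest.mean(axis=0)                 # rest-shape centroid
        c = deformed.mean(axis=0)              # deformed-shape centroid
        A = (deformed - c).T @ (rest - c0)     # 3x3 covariance matrix
        U, _, Vt = np.linalg.svd(A)
        R = U @ Vt                             # closest rotation (polar part)
        if np.linalg.det(R) < 0:               # guard against reflections
            U[:, -1] *= -1
            R = U @ Vt
        goal = (rest - c0) @ R.T + c           # rigidly matched rest shape
        return deformed + stiffness * (goal - deformed)

The second sketch computes the two reported accuracy figures under the assumption that "overlap" means a Dice-style volume overlap and "distance" a symmetric mean surface distance; both readings are ours, as are all names below.

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def dice_overlap(seg_a, seg_b):
        # Volume overlap (Dice coefficient) of two binary 3D masks.
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        return 2.0 * (a & b).sum() / (a.sum() + b.sum())

    def mean_surface_distance(seg_a, seg_b, spacing=(1.0, 1.0, 1.0)):
        # Symmetric mean surface distance in mm, given the voxel spacing.
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        surf_a = a & ~binary_erosion(a)        # boundary voxels of each mask
        surf_b = b & ~binary_erosion(b)
        # Distance from every voxel to the nearest boundary voxel of the
        # other mask, then sampled on this mask's boundary.
        dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
        dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
        d_ab, d_ba = dist_to_b[surf_a], dist_to_a[surf_b]
        return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)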