Accurate Lungs Segmentation on CT Chest Images by Adaptive Appearance-Guided Shape Modeling

Bibliographic Details
Published in: IEEE Transactions on Medical Imaging, Vol. 36, No. 1, pp. 263-276
Main Authors: Soliman, Ahmed; Khalifa, Fahmi; Elnakib, Ahmed; Abou El-Ghar, Mohamed; Dunlap, Neal; Wang, Brian; Gimel'farb, Georgy; Keynton, Robert; El-Baz, Ayman
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2017
Summary: To accurately segment pathological and healthy lungs for reliable computer-aided disease diagnostics, a stack of chest CT scans is modeled as a sample of a spatially inhomogeneous joint 3D Markov-Gibbs random field (MGRF) of voxel-wise lung and chest CT image signals (intensities). The proposed learnable MGRF integrates two visual appearance submodels with an adaptive lung shape submodel. The first-order appearance submodel accounts for both the original CT image and its Gaussian scale space (GSS) filtered version to specify local and global signal properties, respectively. Each empirical marginal probability distribution of signals is closely approximated with a linear combination of discrete Gaussians (LCDG), containing two positive dominant and multiple sign-alternate subordinate DGs. The approximation is separated into two LCDGs that individually describe the lungs and their background, i.e., all other chest tissues. The second-order appearance submodel quantifies conditional pairwise intensity dependencies in the nearest-voxel 26-neighborhood in both the original and GSS-filtered images. The shape submodel is built from a set of training data and is adapted during segmentation using both the lung and chest appearances. The accuracy of the proposed segmentation framework is quantitatively assessed on two public databases (the ISBI VESSEL12 challenge and the MICCAI LOLA11 challenge) and our own database, comprising 20, 55, and 30 CT images, respectively, of various lung pathologies acquired with different scanners and protocols. Quantitative assessment in terms of Dice similarity coefficients, 95th-percentile bidirectional Hausdorff distances, and percentage volume differences confirms the high accuracy of our model on both our database (98.4±1.0%, 2.2±1.0 mm, 0.42±0.10%) and the VESSEL12 database (99.0±0.5%, 2.1±1.6 mm, 0.39±0.20%). The accuracy of the approach is further verified by a blind evaluation by the organizers of the LOLA11 competition, in which our framework achieved an average overlap of 98.0% with the expert segmentation over all 55 subjects and was ranked first among all compared state-of-the-art techniques.
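The summary quotes three evaluation metrics (Dice similarity coefficient, 95th-percentile bidirectional Hausdorff distance, percentage volume difference) and a Gaussian scale space smoothing step in the first-order appearance submodel. The sketch below is not the authors' implementation; it is a minimal NumPy/SciPy illustration, under stated assumptions, of how such quantities can be computed for a pair of binary lung masks. All function names (gaussian_scale_space, dice_coefficient, hausdorff_95, percent_volume_difference) and the sigma value are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the authors' code): GSS smoothing of a CT volume and the
# three accuracy metrics named in the summary, for two binary 3D masks.
# Assumes NumPy and SciPy; names and parameter values are illustrative.
import numpy as np
from scipy import ndimage


def gaussian_scale_space(ct_volume, sigma=2.0):
    """Smooth a CT volume to obtain a GSS-filtered version (sigma is illustrative)."""
    return ndimage.gaussian_filter(ct_volume.astype(np.float32), sigma=sigma)


def dice_coefficient(seg, gt):
    """Dice similarity coefficient between two binary masks, in percent."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 200.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())


def _surface(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return mask & ~ndimage.binary_erosion(mask)


def hausdorff_95(seg, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile bidirectional Hausdorff distance (mm) via distance transforms."""
    seg_s, gt_s = _surface(seg.astype(bool)), _surface(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_gt = ndimage.distance_transform_edt(~gt_s, sampling=spacing)
    dt_seg = ndimage.distance_transform_edt(~seg_s, sampling=spacing)
    return max(np.percentile(dt_gt[seg_s], 95), np.percentile(dt_seg[gt_s], 95))


def percent_volume_difference(seg, gt):
    """Absolute volume difference as a percentage of the ground-truth volume."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 100.0 * abs(int(seg.sum()) - int(gt.sum())) / gt.sum()
```

For instance, dice_coefficient(pred_mask, gt_mask) on two 3D boolean arrays returns a percentage directly comparable to the Dice figures quoted in the summary, and hausdorff_95 takes the voxel spacing in millimetres so its result matches the distance units reported above.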
ISSN: 0278-0062; 1558-254X
DOI: 10.1109/TMI.2016.2606370