Automated analysis of the American College of Radiology mammographic accreditation phantom images

Bibliographic Details
Published in: Medical physics (Lancaster), Vol. 24, no. 5, p. 709
Main Authors: Brooks, K W; Trueblood, J H; Kearfott, K J; Lawton, D T
Format: Journal Article
Language: English
Published: United States, 01.05.1997
Summary: A significant metric in federal mammography quality standards is the phantom image quality assessment. The present work seeks to demonstrate that automated image analysis of American College of Radiology (ACR) mammographic accreditation phantom (MAP) images may be performed objectively by a computer, once a human acceptance level has been established. Twelve MAP images were generated with different x-ray techniques and digitized. Nineteen medical physicists in diagnostic roles (five of whom were specially trained in mammography) viewed the original film images under similar conditions and provided individual scores for each test object (fibrils, microcalcifications, and nodules). Fourier-domain template matching, used for low-level processing, combined with derivative filters for intermediate-level processing, provided translation- and rotation-independent localization of the test objects in the MAP images. The visibility classification decision was modeled by a Bayesian classifier using threshold contrast. The 50% visibility contrast thresholds established from the trained observers' responses were: fibrils 1.010, microcalcifications 1.156, and nodules 1.016. Using these values as an estimate of human observer performance, and given the automated localization of the test objects, six images were graded with the computer algorithm. In all but one instance, the algorithm scored the images the same as the diagnostic physicists. In the one case of disagreement, the margin was 10%, because the human scoring did not allow for half-visible fibrils (agreement occurred for the other test objects). This implies that operator-independent, machine-based scoring of MAP images is feasible and could serve as a tool to help eliminate the effect of observer variability within the current system, provided that proper, consistent digitization is performed.
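
The abstract outlines two computational steps: Fourier-domain template matching to localize the test objects, and a threshold-contrast decision on whether each localized object is visible. The following minimal Python sketch illustrates those steps only; the function names, the correlation details, and the reduction of the Bayesian visibility decision to a plain threshold comparison are assumptions made here for illustration, not the authors' implementation. Only the three 50% contrast thresholds are taken from the abstract.

import numpy as np

def match_template_fft(image, template):
    """Locate a test object by cross-correlating a template with the image in
    the Fourier domain; returns the (row, col) of the correlation peak."""
    # Zero-mean both signals so the peak reflects shape rather than overall brightness.
    img = image - image.mean()
    tpl = template - template.mean()
    # Pad the template to the image size and correlate:
    # corr = IFFT( FFT(image) * conj(FFT(template)) )
    padded = np.zeros(img.shape, dtype=float)
    padded[:tpl.shape[0], :tpl.shape[1]] = tpl
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(padded))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

# 50% visibility contrast thresholds for trained observers, as reported in the abstract.
CONTRAST_THRESHOLDS = {"fibril": 1.010, "microcalcification": 1.156, "nodule": 1.016}

def object_visible(roi, background, kind):
    """Call a localized test object visible if its mean signal, relative to the
    local background, meets the observer-derived contrast threshold (a
    simplification of the Bayesian classifier described in the abstract)."""
    contrast = roi.mean() / background.mean()
    return contrast >= CONTRAST_THRESHOLDS[kind]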
ISSN:0094-2405
DOI:10.1118/1.597992