Automated Recognition of Regional Wall Motion Abnormalities Through Deep Neural Network Interpretation of Transthoracic Echocardiography
Published in: Circulation (New York, N.Y.), Vol. 142, No. 16, pp. 1510–1520
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: by the American College of Cardiology Foundation and the American Heart Association, Inc, 20.10.2020
Summary:
BACKGROUND: Automated interpretation of echocardiography by deep neural networks could support clinical reporting and improve efficiency. Whereas previous studies have evaluated spatial relationships using still-frame images, we aimed to train and test a deep neural network for video analysis by combining spatial and temporal information, to automate the recognition of left ventricular regional wall motion abnormalities.
METHODS: We collected a series of transthoracic echocardiography examinations performed between July 2017 and April 2018 in 2 tertiary care hospitals. Regional wall motion abnormalities were defined by experienced physiologists and confirmed by trained cardiologists. First, we developed a 3-dimensional convolutional neural network model for view selection, ensuring stringent image quality control. Second, a U-net model segmented images to annotate the location of each left ventricular wall. Third, a final 3-dimensional convolutional neural network model evaluated echocardiographic videos from 4 standard views, before and after segmentation, and calculated a wall motion abnormality confidence level (0–1) for each segment. To evaluate model stability, we performed 5-fold cross-validation and external validation.
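The key idea in the methods above is that a 3-dimensional convolution slides a kernel across time as well as space, so a single filter can respond to motion between frames rather than to a still image alone. The following is a minimal numpy sketch of a valid-mode 3-D convolution over a (frames, height, width) video volume; it is an illustration of the operation, not the authors' network, and the array sizes are arbitrary.

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive valid-mode 3-D convolution over a (frames, height, width) volume.

    Because the kernel spans several consecutive frames, its response depends
    on how pixel intensities change over time - i.e., on motion - which is the
    property that distinguishes video analysis from still-frame analysis.
    """
    ft, fh, fw = kernel.shape
    t, h, w = video.shape
    out = np.zeros((t - ft + 1, h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(video[i:i + ft, j:j + fh, k:k + fw] * kernel)
    return out

rng = np.random.default_rng(0)
video = rng.random((16, 8, 8))   # 16 frames of a small 8x8 clip (toy sizes)
kernel = rng.random((3, 3, 3))   # spans 3 frames, 3x3 pixels
features = conv3d_valid(video, kernel)
print(features.shape)  # (14, 6, 6)
```

A 2-D convolution applied frame-by-frame would produce the same spatial features for a frozen clip; the temporal extent of the kernel is what lets the model score wall *motion* rather than wall appearance.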
RESULTS: In a series of 10 638 echocardiograms, our view selection model identified 6454 (61%) examinations with sufficient image quality in all standard views. In this training set, 2740 frames were annotated to develop the segmentation model, which achieved a Dice similarity coefficient of 0.756. External validation was performed in 1756 examinations from an independent hospital. A regional wall motion abnormality was observed in 8.9% and 4.9% of the training and external validation datasets, respectively. The final model recognized regional wall motion abnormalities in the cross-validation and external validation datasets with an area under the receiver operating characteristic curve of 0.912 (95% CI, 0.896–0.928) and 0.891 (95% CI, 0.834–0.948), respectively. In the external validation dataset, the sensitivity was 81.8% (95% CI, 73.8%–88.2%), and specificity was 81.6% (95% CI, 80.4%–82.8%).
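The metrics reported above have standard definitions: the Dice similarity coefficient compares a predicted segmentation mask with ground truth (2|A∩B| / (|A|+|B|)), and sensitivity/specificity summarize a binary classifier's confusion matrix. A short numpy sketch with hypothetical toy masks and labels (not the study's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def sensitivity_specificity(pred, truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 4-pixel masks and 5 classification decisions.
dice = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])          # overlap 1 of 2+2 -> 0.5
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1],        # predictions
                                     [1, 0, 0, 0, 1])        # ground truth
print(dice, sens, spec)  # 0.5 1.0 0.666...
```

In the study, such per-segment confidence scores are thresholded to trade sensitivity against specificity, which is what the reported receiver operating characteristic curve summarizes across all thresholds.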
CONCLUSIONS: In echocardiographic examinations of sufficient image quality, it is feasible for deep neural networks to automate the recognition of regional wall motion abnormalities using temporal and spatial information from moving images. Further investigation is required to optimize model performance and evaluate clinical applications.
ISSN: 0009-7322; 1524-4539
DOI: 10.1161/CIRCULATIONAHA.120.047530