P1465 Artificial intelligence in echocardiography - Steps to automatic cardiac measurements in routine practice

Bibliographic Details
Published in: European Heart Journal, Vol. 40, No. Supplement_1
Main Authors: Karuzas, A., Sablauskas, K., Skrodenis, L., Verikas, D., Rumbinaite, E., Zaliaduonyte-Peksiene, D., Ziuteliene, K., Vaskelyte, J. J., Jurkevicius, R., Plisiene, J.
Format: Journal Article
Language: English
Published: Oxford University Press, 01.10.2019
Abstract:
Introduction: The use of artificial intelligence (AI) in echocardiography has grown exponentially in recent years, offering new ways to overcome inter-operator variability and dependence on operator experience. Although AI applications in echocardiography are still in their infancy, they promise to improve the accuracy and efficiency of manual tracings. Deep learning, a subset of machine learning, is gaining popularity in echocardiography as the state of the art in visual data analysis.
Purpose: To evaluate deep learning for two initial tasks in automated cardiac measurement: view recognition and detection of end-systolic (ES) and end-diastolic (ED) frames.
Methods: 2D echocardiography data from 230 patients (with various indications for the study) were used to train and validate neural networks. Raw pixel data were extracted from EPIQ 7G, Vivid E95 and Vivid 7 imaging platforms. Images were labeled according to their view: parasternal long axis (PLA), basal short axis, short axis at mitral level, and apical two-, three- and four-chamber (A4C). Additionally, ES and ED frames were labeled for the A4C and PLA views. Images were de-identified by applying black pixel masks to non-anatomical data and removing metadata. A convolutional neural network (CNN) was used to classify the six views; a total of 34,752 and 3,972 frames (5,792 and 662 per view) were used to train and validate the network, respectively. A long-term recurrent convolutional network (LRCN), combining temporal and spatial learning, was used for ES and ED frame detection; a total of 195 and 35 sequences of 92 frames each were used to train and validate the LRCN, respectively.
Results: The view-classification CNN had an AUC of 0.95 (sensitivity 95%, specificity 97%). Accuracy was lower for visually similar views, namely the apical three-chamber and apical two-chamber. ES and ED detection succeeded once the LRCN was trained for regression rather than per-frame classification. For cardiac-cycle evaluation, the LRCN had an average frame difference (aFD) of 2.31 (SD ±2.15) frames for ED and 1.97 (SD ±2.04) frames for ES detection, corresponding to an error of about 0.04 s.
Conclusion: Determining the echocardiographic view and delineating the cardiac cycle are the first steps toward automating cardiac measurements. We have demonstrated the potential of two deep learning algorithms in accomplishing these tasks. These initial results are promising for the development of neural networks for cardiac segmentation and measurement of anatomical structures.
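To make the view-classification step concrete, below is a minimal sketch (not the authors' code) of a CNN that maps a de-identified grayscale frame to one of the six views named in the Methods. The input size (128x128), the layer widths, and the ViewClassifier name are illustrative assumptions; only the six-way output matches the abstract.

# Minimal PyTorch sketch of a 6-view echocardiography classifier.
# Input size (1x128x128) and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ViewClassifier(nn.Module):
    def __init__(self, n_views: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_views),  # logits over the 6 views
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 8 de-identified 128x128 grayscale frames.
logits = ViewClassifier()(torch.randn(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 6])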
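The abstract states that ES/ED detection worked when the LRCN was trained for regression rather than per-frame classification. The sketch below is one plausible reading under stated assumptions, not the authors' implementation: a small CNN encodes each frame, an LSTM carries temporal context across the cardiac cycle, and a linear head regresses one scalar per frame (e.g. a signal whose extrema mark ED and ES). The LRCN class and all layer sizes are hypothetical.

# Minimal PyTorch sketch of a Long-term Recurrent Convolutional Network
# (CNN encoder + LSTM + per-frame regression head); sizes are assumptions.
import torch
import torch.nn as nn

class LRCN(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one regression target per frame

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 1, H, W) -> per-frame scalar (batch, time)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out).squeeze(-1)

# Example: 2 sequences of 92 frames, matching the sequence length in the abstract.
signal = LRCN()(torch.randn(2, 92, 1, 128, 128))
print(signal.shape)  # torch.Size([2, 92])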
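The Results report an average frame difference (aFD) between predicted and annotated ES/ED frames, but the abstract does not define the metric formally. A straightforward reading is the mean absolute frame-index error, sketched below with a hypothetical helper named average_frame_difference.

# Assumed definition of aFD: mean absolute difference between predicted
# and annotated frame indices across sequences.
def average_frame_difference(predicted, annotated):
    """Mean absolute frame-index error across sequences."""
    return sum(abs(p - a) for p, a in zip(predicted, annotated)) / len(predicted)

# At a typical acquisition rate of ~50 frames/s, an aFD of ~2 frames
# corresponds to roughly the 0.04 s error quoted in the Results.
print(average_frame_difference([41, 12, 70], [43, 11, 68]))  # 1.666...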
ISSN: 0195-668X, 1522-9645
DOI: 10.1093/eurheartj/ehz748.0230