A Comparison of CT Perfusion Output of RapidAI and Viz.ai Software in the Evaluation of Acute Ischemic Stroke


Bibliographic Details
Published in: American Journal of Neuroradiology: AJNR, Vol. 45, No. 7, pp. 863-870
Main Authors: Bushnaq, Saif; Hassan, Ameer E; Delora, Adam; Kerro, Ali; Datta, Anita; Ezzeldin, Rime; Ali, Zuhair; Anwoju, Tunmi; Nejad, Layla; Silva, Rene; Abualnadi, Yazan Diya; Khalil, Zorain Mustafa; Ezzeldin, Mohamad
Format: Journal Article
Language: English
Published: United States, 08.07.2024

More Information
Summary: Automated CTP postprocessing packages have been developed for managing acute ischemic stroke. These packages use image-processing techniques to identify the ischemic core and penumbra. This study aimed to investigate the agreement of decision-making rules and output derived from the RapidAI and Viz.ai software packages in early and late time windows and to identify predictors of inadequate-quality CTP studies. One hundred twenty-nine patients with acute ischemic stroke who had CTP performed on presentation were analyzed by RapidAI and Viz.ai. Volumetric outputs were compared between packages by performing Spearman rank-order correlation and Wilcoxon signed-rank tests, with subanalysis performed at early (<6 hours) and extended (>6 hours) time windows. The concordance of selecting patients on the basis of DAWN and DEFUSE 3 eligibility criteria was assessed using the McNemar test. One hundred eight of 129 patients were found to have adequate-quality studies. Spearman rank-order correlation coefficients were calculated for time-to-maximum >6-second volume, time-to-maximum >10-second volume, CBF <30% volume, mismatch volume, and mismatch ratio between the two software packages, with correlation coefficients of 0.82, 0.65, 0.77, 0.78, and 0.59, respectively. The Wilcoxon signed-rank test was also performed on the same five measures, with P values of .30, .016, <.001, .03, and <.001, respectively. In a 1-sided test, CBF <30% volume was greater in Viz.ai (P < .001). Although this finding resulted in statistically significant differences, it did not cause clinically significant differences when applied to the DAWN and DEFUSE 3 criteria. A lower ejection fraction predicted an inadequate study in both software packages (P = .018; 95% CI, 0.01-0.113, and P = .024; 95% CI, 0.008-0.109, for RapidAI and Viz.ai, respectively).
Penumbra and infarct core predictions from RapidAI and Viz.ai correlated but were statistically different, yet they resulted in equivalent triage under the DAWN and DEFUSE 3 criteria. Viz.ai predicted higher ischemic core volumes than RapidAI. Viz.ai predicted lower combined core-and-penumbra volumes than RapidAI at lower volumes and higher estimates at higher volumes. Clinicians should be cautious when using different software packages for clinical decision-making.
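As an illustrative sketch only (not the authors' code), the paired-agreement analysis described in the summary — Spearman rank-order correlation plus two-sided and one-sided Wilcoxon signed-rank tests on per-patient volumes from two software packages — can be reproduced with SciPy. The data below are synthetic and all variable names are hypothetical.

```python
# Hypothetical sketch of the volumetric agreement analysis, on synthetic data.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(0)

# Simulated paired CBF <30% core volumes (mL) for 108 adequate-quality studies:
# the second package carries a small systematic offset plus noise.
rapid_vol = rng.gamma(shape=2.0, scale=15.0, size=108)
viz_vol = rapid_vol + rng.normal(loc=5.0, scale=8.0, size=108)

# Rank-order agreement between the two packages.
rho, p_rho = spearmanr(rapid_vol, viz_vol)

# Paired test for a systematic difference (two-sided).
w_stat, p_two = wilcoxon(rapid_vol, viz_vol)

# One-sided test: is the second package's estimate systematically greater?
w1_stat, p_one = wilcoxon(rapid_vol, viz_vol, alternative="less")

print(f"Spearman rho = {rho:.2f} (P = {p_rho:.3g})")
print(f"Wilcoxon two-sided P = {p_two:.3g}, one-sided P = {p_one:.3g}")
```

The DAWN/DEFUSE 3 concordance question in the summary is a separate paired binary comparison, which would instead use the McNemar test (available as `statsmodels.stats.contingency_tables.mcnemar`).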
ISSN: 0195-6108
1936-959X
DOI: 10.3174/ajnr.a8196