Multi-subject search correctly identifies causal connections and most causal directions in the DCM models of the Smith et al. simulation study

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 58, No. 3, pp. 838–848
Main Authors: Ramsey, Joseph D.; Hanson, Stephen José; Glymour, Clark
Format: Journal Article
Language: English
Published: Elsevier Inc. / Elsevier Limited, United States, 01.10.2011

Summary: Smith et al. report a large study of the accuracy of 38 search procedures for recovering effective connections in simulations of DCM models under 28 different conditions. Their results are disappointing: no method reliably finds and directs connections without large false negatives, large false positives, or both. Using multiple-subject inputs, we apply a previously published search algorithm, IMaGES, and novel orientation algorithms, LOFS, in tandem to all of the simulations of DCM models described by Smith et al. (2011). We find that the procedures accurately identify effective connections in almost all of the conditions that Smith et al. simulated and, in most conditions, direct causal connections with precision greater than 90% and recall greater than 80%.

Highlights:
► Smith et al. (2011) compared 38 single-subject search methods on DCM-simulated data.
► No method did better than chance at finding both connections and directions.
► We use IMaGES for adjacencies and LOFS for non-Gaussian orientation.
► Connection precision is near 100%, and direction precision 90%, in most simulations.
► The advantage is not due merely to increased sample size.
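The precision and recall figures above score recovered network edges against the simulated ground truth. The following is a minimal illustrative sketch, not taken from the paper, of how adjacency (undirected) and orientation (directed) precision and recall can be computed for a recovered graph; the node names, edge sets, and helper functions are hypothetical, and the paper's exact scoring conventions may differ.

# Minimal sketch: scoring a recovered directed graph against a known
# ground-truth network, with graphs represented as sets of edge tuples.

def adjacency_set(edges):
    """Collapse directed edges to unordered adjacencies."""
    return {frozenset(e) for e in edges}

def precision_recall(estimated, truth):
    """Return (precision, recall) for two sets of edges."""
    tp = len(estimated & truth)
    precision = tp / len(estimated) if estimated else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# Hypothetical 5-node example: true network and a recovered one.
true_edges = {("X1", "X2"), ("X2", "X3"), ("X3", "X4"), ("X4", "X5")}
est_edges = {("X1", "X2"), ("X2", "X3"), ("X4", "X3"), ("X4", "X5")}

# Adjacency scores ignore direction (connection recovery).
adj_p, adj_r = precision_recall(adjacency_set(est_edges), adjacency_set(true_edges))

# Orientation scores count an edge as correct only if its direction
# matches the ground truth (direction recovery).
dir_p, dir_r = precision_recall(est_edges, true_edges)

print(f"adjacency precision={adj_p:.2f}, recall={adj_r:.2f}")
print(f"orientation precision={dir_p:.2f}, recall={dir_r:.2f}")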
ISSN: 1053-8119; 1095-9572
DOI: 10.1016/j.neuroimage.2011.06.068