When Can We Conclude That Treatments or Programs "Don't Work"?

Bibliographic Details
Published in: The Annals of the American Academy of Political and Social Science, Vol. 587, No. 1, pp. 31-48
Authors: David Weisburd, Cynthia M. Lum, Sue-Ming Yang
Format: Journal Article
Language: English
Published: Thousand Oaks, CA: Sage Publications, 1 May 2003

Summary: In this article, the authors examine common practices of reporting statistically nonsignificant findings in criminal justice evaluation studies. They find that criminal justice evaluators often make formal errors in the reporting of statistically nonsignificant results. Instead of simply concluding that the results were not statistically significant, or that there is not enough evidence to support an effect of treatment, they often mistakenly accept the null hypothesis and state that the intervention had no impact or did not work. The authors propose that researchers define a second null hypothesis that sets a minimal threshold for program effectiveness. In an illustration of this approach, they find that more than half of the studies that had no statistically significant finding for a traditional, no-difference null hypothesis evidenced a statistically significant result in the case of a minimal worthwhile treatment effect null hypothesis.
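The two-null-hypothesis logic the abstract describes can be sketched numerically. The code below is an illustrative sketch, not the authors' analysis: the effect estimate, standard error, and minimal worthwhile effect are hypothetical numbers chosen to show how a result can be nonsignificant against the traditional null (effect = 0) yet significant against a minimal-worthwhile-effect null (effect = d_min), which is the pattern that licenses a "doesn't work" conclusion.

```python
import math

def p_two_sided(z):
    """Two-sided p-value for a standard normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical study: small observed effect, wide standard error.
d_hat, se = 0.05, 0.10   # estimated treatment effect and its standard error
d_min = 0.30             # assumed minimal worthwhile treatment effect

# Traditional null H0: effect = 0.
# Failing to reject this does NOT show the program "doesn't work".
p_traditional = p_two_sided(d_hat / se)

# Second null H0: effect = d_min.
# Rejecting this shows the effect falls reliably short of the
# worthwhile threshold, so "doesn't work" becomes defensible.
p_minimal = p_two_sided((d_hat - d_min) / se)

print(f"H0: effect = 0      -> p = {p_traditional:.3f}")
print(f"H0: effect = d_min  -> p = {p_minimal:.3f}")
```

With these hypothetical numbers the traditional test is nonsignificant while the minimal-effect test is significant, mirroring the pattern the authors report for more than half of the studies they reexamined.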
ISSN: 0002-7162, 1552-3349
DOI: 10.1177/0002716202250782