Leveraging Rigorous Local Evaluations to Understand Contradictory Findings


Bibliographic Details
Published in: Society for Research on Educational Effectiveness
Main Authors: Boulay, Beth; Martin, Carlos; Zief, Susan; Granger, Robert
Format: Report
Language: English
Published: Society for Research on Educational Effectiveness, 2013
Summary: Contradictory findings from "well-implemented" rigorous evaluations invite researchers to identify the differences that might explain the contradictions, helping to generate testable hypotheses for new research. This panel will examine efforts to ensure that the large number of local evaluations being conducted as part of four federally funded grant programs generate rigorous findings that can inform understanding of contradictory findings. The panel will focus on the "Investing in Innovation Program" (i3) and the "Striving Readers Program" (both funded by the Department of Education), the "Workforce Innovation Fund" (Department of Labor), and the "Teen Pregnancy Prevention Program" (Department of Health and Human Services). These programs have clearly made it a priority to protect the federal investment in rigorous local evaluations: each has put contracts in place with nationally recognized research firms to provide technical assistance supporting the local evaluators as they conduct rigorous research. Given the breadth of the evaluations, the populations, participants, and subjects also come from a range of contexts. The findings from the panel report include: (1) variety across the evaluations, giving rise to interesting contradictions that provide opportunities for learning; and (2) discussions that will generate conclusions about how the panel should think about grouping evaluations together to identify contradictions, the possible types of hypotheses generated to explain these contradictions, and what data the panel might want in order to test those hypotheses down the road.