Multiple-cutoff Regression Discontinuity Designs in Program Evaluation: A Comparison of Two Estimation Methods
Date
2019-01-11
Authors
Yoon, HyeonJin
Publisher
University of Oregon
Abstract
In basic regression discontinuity (RD) designs, causal inference is limited to the local area near a single cutoff. To strengthen the generality of the RD treatment estimate, a design with multiple cutoffs along the assignment variable continuum can be applied. The availability of multiple cutoffs allows estimation of a pooled average treatment effect across cutoffs and/or individual estimates at each cutoff location, allowing for the possibility of heterogeneous treatment effects. The purpose of this study is to (a) demonstrate the application of two treatment effect estimation methods (i.e., a conventional pooling method and a multilevel pooling method) for multiple-cutoff RD (MCRD) designs using Tier 2 kindergarten math intervention data (ROOTS), (b) examine the extent to which the two methods yield unbiased and precise estimates comparable to those from a randomized controlled trial (RCT) design, and (c) investigate the moderating role of a classroom characteristic (i.e., classroom cut-point) on the size of the ROOTS intervention effect.
Math intervention data were collected from 2012 to 2015 to evaluate the impact of a small-group (Tier 2) kindergarten mathematics intervention. The analytic sample included 1,900 kindergarten students from four school districts in Oregon and two districts in Boston, Massachusetts. The intervention effect was estimated using a conventional pooling method and a multilevel pooling method, and the bias and power of the resulting MCRD estimates were compared with an RCT benchmark. In addition, treatment effect variability was predicted by the cut-point used to screen treated students in each classroom.
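The conventional pooling approach described above can be sketched in a few lines: each student's screening score is centered at his or her classroom's cutoff so that all cutoffs align at zero, and a single local linear regression with separate slopes on each side then recovers the pooled treatment effect at the centered cutoff. The following is a minimal illustrative sketch, not the dissertation's actual analysis code; the function name, the bandwidth value, and the below-cutoff treatment rule are assumptions for illustration.

```python
import numpy as np

def pooled_mcrd_estimate(scores, cutoffs, outcomes, bandwidth=10.0):
    """Conventional pooling estimator for a multiple-cutoff RD design.

    Centers each screening score at its classroom cutoff, pools all
    observations, and fits a local linear model with separate slopes
    on each side of the (now common) cutoff at zero.
    """
    x = np.asarray(scores, dtype=float) - np.asarray(cutoffs, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    # Assumed treatment rule: students scoring below their cutoff
    # are screened into the Tier 2 intervention.
    t = (x < 0).astype(float)
    # Keep only observations within the bandwidth of the cutoff.
    keep = np.abs(x) <= bandwidth
    x, y, t = x[keep], y[keep], t[keep]
    # Local linear model: y = b0 + b1*t + b2*x + b3*(t*x);
    # b1 is the pooled treatment effect at the centered cutoff.
    X = np.column_stack([np.ones_like(x), t, x, t * x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]
```

The multilevel pooling method would instead model cutoff- (or classroom-) specific effects as random deviations around the pooled effect, which also supports predicting that variability from the classroom cut-point.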
Results showed that treatment students scored higher on the posttest outcome than control students at the centered cutoff. Both MCRD estimation methods produced unbiased treatment effect estimates comparable to the benchmark RCT estimate; however, power in the MCRD design was lower than in the RCT, regardless of the estimation method. The cut-point used to screen students into the treatment condition moderated the treatment effect, with a greater treatment effect observed in classrooms with a larger cutoff value. Implications for program evaluation design theory and practice are discussed.