** Also posted here on Valerie Strauss’ Answer Sheet in the Washington Post.
Two weeks ago, researchers from Mathematica dropped a bomb on the education policy community. It didn’t go off.
The report (prepared for the Institute of Education Sciences, a division of the U.S. Department of Education) includes students in 36 charter schools across 15 states. The central conclusion: the vast majority of charter students do no better or worse than their regular public school counterparts in math and reading scores (or on most of the other 35 outcomes examined). On the other hand, charter parents and students are more satisfied with their schools, and charters are more effective at boosting the scores of lower-income students.
The study, of course, is not without caveats (e.g., bias from limiting the sample to middle schools and "oversubscribed" charters only), and there was wide variation in charter performance. But the thoroughness and sophistication of the methods, the inclusion of charters in multiple locations across the nation, and especially the use of random assignment from charter lotteries make this analysis among the most definitive on the topic to date (see also Zimmer et al. 2009; Hoxby et al. 2009; Abdulkadiroglu et al. 2009; and CREDO 2009).
Nevertheless, given our inability to generalize one analysis to the entire charter population, as well as the polarized nature of the charter debate, it’s hardly surprising that this report has not settled much. What is surprising is that this study got far less attention and discussion than many dozens of reports of decidedly lower quality and importance. Actually, it got barely any coverage at all.
In part, perhaps this is because it came on the heels of another Mathematica report released the previous week, which (convincingly) shows positive effects of KIPP charter schools. For some, these two reports’ conflicting findings probably, and paradoxically, reinforced a single conclusion, an empirical impasse that goes something like this: “Some charter schools (like KIPP) work, and some don’t, so we have to replicate the schools that work and close the ones that don’t.”
This notion is misguided. “Charterness” is not a policy. If a particular policy or set of practices seems to help increase student performance, we should replicate those practices, not entire schools that adopted them. Doing the latter is sort of like buying a second home just to have a garage.
Accordingly, as many have pointed out, we need to use good research to identify these policies. And to its great credit, this study does take a rare look at how different charter school characteristics and operations are associated with performance (see Hoxby et al. 2009 and Berends et al. 2010 for other looks).
Here are the broad strokes of what they found: there is compelling evidence that schools with lower enrollment do better, along with positive, though weaker, effects from more per-pupil revenue, more school time (longer days and/or years), higher student-to-teacher ratios, and the use of ability grouping (assigning students at similar achievement levels to work together). The findings suggest a few practical implications.
First, the "effects" of these interventions vary greatly by subject (virtually none is associated with reading performance). This speaks to the idea, which is not often discussed, that the changes we make will need to be "customized" for different subjects.
Second, most of the policies and factors receiving at least some support require greater investment – in short, more money. The very tentative evidence in this report supports investing in smaller schools (rather than closing them), longer years/days, and perhaps different strategies for different subjects.
Third, almost all of the policies associated with higher performance have been in the mix for a long time, and none is particularly innovative. While its findings are hardly the final word, the analysis provides no supporting evidence for the factors it includes that are typically advanced by charter supporters, most notably autonomy, accountability, and operation by a private organization (the newer forms of “teacher quality” policies were not examined). In other words, it may be that truly effective “charterness” relies, perhaps to a large extent, on tools we already knew worked – providing struggling students with time, attention, and resources.
In this sense, a big component of charters’ legacy may be the simple gift of a frame of reference, of intra-district variation in practices and policies that regular public schools had not been providing. Regardless of your stance on charter schools, the opportunity they provide to unpack school effects is quite valuable (and it’s something regular public districts might do more themselves).
That’s part of what makes reports like this so important, and why, if nobody will break ranks in the charter debate, we should still use this research productively. Maybe that is already happening behind closed doors in some accountability-filled room. But there is a lot of uncertainty out here, and in supporting charters for charters’ sake, some of us are missing the causation forest for the correlation trees. So, let’s identify the specific policies and practices that are effective, and use them in all schools. We don’t need to destroy our public education system in order to save it.