A recent Mathematica report on the performance of KIPP charter schools expands and elaborates on their prior analyses of these schools’ (estimated) effects on average test scores and other outcomes (also here). These findings are important and interesting, and were covered extensively elsewhere.
As is usually the case with KIPP, the results stirred the full spectrum of reactions. To over-generalize a bit, critics sometimes seem unwilling to acknowledge that KIPP’s results are real, no matter how well documented they might be, whereas some proponents are quick to use KIPP to proclaim a triumph for the charter movement, one that can justify the expansion of charter sectors nationwide.
Despite all this controversy, there may be more opportunity for agreement here than meets the eye. So, let’s try to lay out a few reasonable conclusions and see if we might find some of that common ground. Read More »
A recent article in Reuters, one that received a great deal of attention, sheds light on practices that some charter schools are using essentially to screen students who apply for admission. These policies include requiring long and difficult applications, family interviews, parental contracts, and even demonstrations of past academic performance.
It remains unclear how common these practices might be in the grand scheme of things, but regardless of how frequently they occur, most of these tactics are terrible, perhaps even illegal, and should be stopped. At the same time, there are two side points to keep in mind when you hear about charges such as these, as well as the accusations (and denials) of charter exclusion and segregation that tend to follow.
The first is that some degree of (self-)sorting and segregation of students by abilities, interests and other characteristics is part of the deal in a choice-based system. The second point is that screening and segregation are most certainly not unique to charter/private schools, and one primary reason is that there is, in a sense, already a lot of choice among regular public schools. Read More »
Among the more persistent arguments one hears in the debate over charter schools is that the “best evidence” shows charters are more effective. I have discussed this issue before (as have others), but it seems to come up from time to time, even in mainstream media coverage.
The basic point is that we should essentially dismiss – or at least regard with extreme skepticism – the two dozen or so high-quality “non-experimental” studies, which, on the whole, show modest or no differences in test-based effectiveness between charters and comparable regular public schools. In contrast, “randomized controlled trials” (RCTs), which exploit the random assignment of admission lotteries to control for differences between students, tend to yield positive results. Since, so the story goes, the “gold standard” research shows that charters are superior, we should go with that conclusion.
RCTs, though not without their own limitations, are without question powerful, and there is plenty of subpar charter research out there. That said, however, the “best evidence” argument is not particularly compelling (and it’s also a distraction from the positive shift away from obsessing about whether charters do or don’t work toward an examination of why). A full discussion of the methodological issues in the charter school literature would be long and burdensome, but it might be helpful to lay out three very basic points to bear in mind when you hear this argument. Read More »
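The lottery logic behind RCTs is worth making concrete. Here is a minimal simulation (all numbers invented, in Python, standard library only) of the selection problem that lottery-based studies are designed to solve: if charter applicants differ from non-applicants on some unobserved trait, a naive comparison finds a large "effect" even when the school has none, while comparing lottery winners with lottery losers, who share the same applicant pool, does not.

```python
# Hypothetical simulation (not based on any actual charter data): why
# naive comparisons can be biased, and how an admission lottery helps.
import random

random.seed(0)

def outcome(motivation):
    # Test score depends on (unobserved) motivation plus noise;
    # in this toy setup the school itself has zero true effect.
    return 50 + 10 * motivation + random.gauss(0, 5)

# Applicants to the charter are, on average, more motivated.
applicants = [random.gauss(1.0, 1.0) for _ in range(10_000)]
non_applicants = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Naive comparison: charter applicants vs. everyone else.
naive_gap = (sum(map(outcome, applicants)) / len(applicants)
             - sum(map(outcome, non_applicants)) / len(non_applicants))

# Lottery: randomly split applicants into winners (attend) and losers.
random.shuffle(applicants)
winners, losers = applicants[:5_000], applicants[5_000:]
lottery_gap = (sum(map(outcome, winners)) / len(winners)
               - sum(map(outcome, losers)) / len(losers))

print(f"naive gap:   {naive_gap:.1f}")    # large, driven purely by selection
print(f"lottery gap: {lottery_gap:.1f}")  # near zero: the true effect
```

The catch, of course, is that lotteries only happen at oversubscribed schools, which is one reason RCT samples may not represent the charter sector as a whole.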
** Reprinted here in the Washington Post
2012 was another busy year for market-based education reform. The rapid proliferation of charter schools continued, while states and districts went about the hard work of designing and implementing new teacher evaluations that incorporate student testing data, and, in many cases, performance pay programs to go along with them.
As in previous years (see our 2010 and 2011 reviews), much of the research on these three “core areas” – merit pay, charter schools, and the use of value-added and other growth models in teacher evaluations – appeared rather responsive to the direction of policy making, but could not always keep up with its breakneck pace.*
Some lag time is inevitable, not only because good research takes time, but also because there’s a degree to which you have to try things before you can see how they work. Nevertheless, what we don’t know about these policies far exceeds what we know, and, given the sheer scope and rapid pace of reforms over the past few years, one cannot help but get the occasional “flying blind” feeling. Moreover, as is often the case, the only unsupportable position is certainty. Read More »
** Reprinted here in the Washington Post
Charter school “caps” are state-imposed limits on the size or growth of charter sectors. Currently, around 25 states set caps on schools or enrollment, with wide variation in terms of specifics: Some states simply set a cap on the number of schools (or charters in force); others limit annual growth; and still others specify caps on both growth and size (there are also a few places that cap proportional spending, coverage by individual operators and other dimensions).
Many charter school supporters argue strongly for lifting these restrictions, on the grounds that caps prevent the opening of high-quality schools. This is an oversimplification at best, as lifting caps could just as easily lead to the proliferation of unsuccessful charters. If the charter school experiment has taught us anything, it’s that these schools are anything but sure bets, and that includes even the tiny handful of highly successful models such as KIPP.*
Overall, the only direct impact of charter caps is to limit the potential size or growth of a state’s charter school sector. Assessing their implications for quality, on the other hand, is complicated, and there is every reason to believe that the impact of caps, and thus the basis of arguments for lifting them, varies by context – including the size and quality of states’ current sectors, as well as the criteria by which low-performing charters are closed and new ones are authorized. Read More »
The issue of student attrition at KIPP and charter schools is never far beneath the surface of our education debates. KIPP’s critics claim that these schools exclude or “counsel out” students who aren’t doing well, thus inflating student test results. Supporters contend that KIPP schools are open admission with enrollment typically determined by lottery, and they usually cite a 2010 Mathematica report finding strong results among students in most (but not all) of 22 KIPP middle schools, as well as attrition rates that were no higher, on average, than at the regular public schools to which they are compared.*
As I have written elsewhere, I am persuaded that student attrition cannot explain away the gains that Mathematica found in the schools they examined (though I do think peer effects of attrition without replacement may play some role, which is a very common issue in research of this type).
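A stylized example (numbers invented) of why attrition without replacement matters for cohort-level results: when lower-scoring students leave and no new students enter, the cohort average rises mechanically, even if no individual student's score changes.

```python
# Toy illustration (invented scores): attrition without replacement can
# raise a cohort's average even when no student actually improves.
scores = [40, 50, 60, 70, 80]
avg_before = sum(scores) / len(scores)
print(avg_before)  # 60.0

# The two lowest-scoring students leave and are not replaced.
remaining = [60, 70, 80]
avg_after = sum(remaining) / len(remaining)
print(avg_after)   # 70.0
```

Careful studies address this by tracking the same students longitudinally rather than comparing successive cohort averages, but the peer-effect channel (the composition of classmates changing over time) is harder to rule out.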
But, beyond this back-and-forth over the churn in these schools and whether it affected the results of this analysis, there’s also a confusion of sorts when it comes to discussions of student attrition in charters, whether KIPP or in general. Supporters of school choice often respond to “attrition accusations” by trying to deny or downplay its importance or frequency. This, it seems to me, ignores an obvious point: Within-district attrition – students changing schools, often based on “fit” or performance – is a defining feature of school choice, not an aberration. Read More »
There have now been several stories in the New York news media about New York City’s charter schools’ “gains” on this year’s state tests (see here, here, here, here and here). All of them trumpeted the 3-7 percentage point increase in proficiency among the city’s charter students, compared with the 2-3 point increase among their counterparts in regular public schools. The consensus: Charters performed fantastically well this year.
In fact, the NY Daily News asserted that the “clear lesson” from the data is that “public school administrators must gain the flexibility enjoyed by charter leaders,” and “adopt [their] single-minded focus on achievement.” For his part, Mayor Michael Bloomberg claimed that the scores are evidence that the city should expand its charter sector.
All of this reflects a fundamental misunderstanding of how to interpret testing data, one that is frankly a little frightening to find among experienced reporters and elected officials.
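One part of that misunderstanding is easy to illustrate. Proficiency rates only count students who cross a cutoff score, so changes in the "percent proficient" depend heavily on how many students happen to sit just below the bar. In this toy example (hypothetical scores and cutoff), two groups gain exactly the same number of points, yet one posts a huge proficiency jump and the other none at all.

```python
# Toy illustration (hypothetical numbers): identical score growth can
# produce very different changes in "percent proficient," depending on
# where students sit relative to the proficiency cutoff.
CUTOFF = 65

# Group A: many students bunched just below the cutoff.
group_a = [60, 62, 63, 64, 64, 70, 72, 80, 85, 90]
# Group B: same size, fewer students near the cutoff.
group_b = [40, 45, 50, 55, 58, 70, 72, 80, 85, 90]

def pct_proficient(scores):
    return 100 * sum(s >= CUTOFF for s in scores) / len(scores)

# Both groups gain exactly 3 points per student.
gain_a = [s + 3 for s in group_a]
gain_b = [s + 3 for s in group_b]

print(pct_proficient(group_a), pct_proficient(gain_a))  # 50.0 90.0
print(pct_proficient(group_b), pct_proficient(gain_b))  # 50.0 50.0
```

Comparing year-to-year changes in proficiency rates between different groups of students, as the coverage did, compounds the problem: the students tested each year aren't even the same.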
Read More »
A recent Economist article on charter schools, though slightly more nuanced than most mainstream media treatments of the charter evidence, contains a very common, somewhat misleading argument that I’d like to address quickly. It’s about the findings of the so-called “CREDO study,” the important (albeit over-cited) 2009 national comparison of student achievement in charter and regular public schools in 16 states.
Specifically, the article asserts that the CREDO analysis, which finds a statistically discernible but very small negative impact of charters overall (with wide underlying variation), also finds a significant positive effect among low-income students. This leads the Economist to conclude that the entire CREDO study “has been misinterpreted,” because its real value is in showing that “the children who most need charters have been served well.”
Whether or not an intervention affects outcomes among subgroups of students is obviously important (though one has hardly “misinterpreted” a study by focusing on its overall results). And CREDO does indeed find a statistically significant, positive test-based impact of charters on low-income students, vis-à-vis their counterparts in regular public schools. However, as discussed here (and in countless textbooks and methods courses), statistical significance only means we can be confident that the difference is non-zero (it cannot be chalked up to random fluctuation). Significant differences are often not large enough to be practically meaningful.
And this is certainly the case with CREDO and low-income students. Read More »
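The distinction between statistical and practical significance is easy to show with a back-of-the-envelope calculation (all numbers invented, not CREDO's, using only the Python standard library): with very large samples, a difference of a hundredth of a standard deviation clears the conventional p < .05 bar comfortably, even though no one would call it educationally meaningful.

```python
# Hypothetical illustration: with large samples, a tiny difference can be
# "statistically significant" without being practically meaningful.
import math

n = 500_000                  # students per group (large samples)
mean_a, mean_b = 0.01, 0.0   # difference of 0.01 standard deviations
sd = 1.0                     # scores measured in SD units

# z statistic for the difference in means
se = sd * math.sqrt(2 / n)
z = (mean_a - mean_b) / se
print(f"z = {z:.1f}")        # well above 1.96, so p < .05

# The standardized effect size is unchanged by sample size
d = (mean_a - mean_b) / sd
print(f"d = {d}")            # 0.01 SD: trivially small in practical terms
```

Significance tells you the difference is probably not zero; the effect size tells you whether it matters. The two questions should never be conflated.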
The so-called “parent trigger,” the policy by which a majority of a school’s parents can decide to convert it to a charter school, seems to be getting a lot of attention lately.
Advocates describe the trigger as “parent empowerment,” a means by which parents of students stuck in “failing schools” can take direct action to improve the lives of their kids. Opponents, on the other hand, see it as antithetical to the principle of schools as a public good – parents don’t own schools, the public does. And important decisions such as charter conversion, which will have a lasting impact on the community as a whole (including parents of future students), should not be made by a subgroup of voters.
These are both potentially appealing arguments. In many cases, however, attitudes toward the parent trigger seem more than a little dependent upon attitudes toward charter schools in general. If you strongly support charters, you’ll tend to be pro-trigger, since there’s nothing to lose and everything to gain. If you oppose charter schools, on the other hand, the opposite is likely to be the case. There’s a degree to which it’s not the trigger itself but rather what’s being triggered – opening more charter schools – that’s driving the debate. Read More »
A new report from the U.S. Government Accountability Office (GAO) provides one of the first large-scale comparisons of special education enrollment between charter and regular public schools. The report’s primary finding, which, predictably, received a fair amount of attention, is that roughly 11 percent of students enrolled in regular public schools were on special education plans in 2009-10, compared with just 8 percent of charter school students.
The GAO report’s authors are very careful to note that their findings merely describe what you might call the “service gap” – i.e., the proportion of special education students served by charters versus regular public schools – but that they do not indicate the reasons for this disparity.
This is an important point, but I would take the warning a step further: The national- and state-level gaps themselves should be interpreted with the most extreme caution. Read More »
There’s a fairly large body of research showing that charter schools vary widely in test-based performance relative to regular public schools, both by location as well as subgroup. Yet, you’ll often hear people point out that the highest-quality evidence suggests otherwise (see here, here and here) – i.e., that a handful of studies using experimental methods (randomized controlled trials, or RCTs) generally find stronger, more uniformly positive charter impacts.
Sometimes, this argument is used to imply that the evidence, as a whole, clearly favors charters, and, perhaps by extension, that many of the rigorous non-experimental charter studies – those using sophisticated techniques to control for differences between students – would lead to different conclusions were they RCTs.*
Though these latter assertions are based on a valid point about the power of experimental studies (the few of which we have are often ignored in the debate over charters), they are overstated, for a couple of reasons discussed below. But a new report from the (indispensable) organization Mathematica addresses the issue head on, by directly comparing estimates of charter school effects that come from an experimental analysis with those from non-experimental analyses of the same group of schools.
The researchers find that there are differences in the results, but many are not statistically significant and those that are don’t usually alter the conclusions. This is an important (and somewhat rare) study, one that does not, of course, settle the issue, but does provide some additional tentative support for the use of strong non-experimental charter research in policy decisions.
Read More »
Do charter schools do more – get better results – with less? If you ask this question, you’ll probably get very strong answers, ranging from the affirmative to the negative, often depending on the person’s overall view of charter schools. The reality, however, is that we really don’t know.
Actually, despite some confident coverage built on insufficient evidence, researchers don’t even have a good handle on how much charter schools spend, to say nothing of whether how and how much they spend leads to better outcomes. Reporting of charter financial data is incomplete, imprecise and inconsistent. It is difficult to disentangle the financial relationships between charter management organizations (CMOs) and the schools they run, as well as those between charter schools and their “host” districts.
A new report published by the National Education Policy Center, with support from the Shanker Institute and the Great Lakes Center for Education Research and Practice, examines spending between 2008 and 2010 among charter schools run by major CMOs in three states – New York, Texas and Ohio. The results suggest that relative charter spending in these states, like test-based charter performance overall, varies widely. In addition, perhaps more importantly, the findings make it clear that there remain significant barriers to accurate spending comparisons between charter and regular public schools, which severely hinder rigorous efforts to examine the cost-effectiveness of these schools. Read More »
Charter schools in New Orleans (NOLA) now serve over four out of five students in the city – the largest market share of any big city in the nation. As of the 2011-12 school year, most of the city’s schools (around 80 percent), charter and regular public, are overseen by the Recovery School District (RSD), a statewide agency created in 2003 to take over low-performing schools, which assumed control of most NOLA schools in Katrina’s aftermath.
Around three-quarters of these RSD schools (50 out of 66) are charters. The remainder of NOLA’s schools are overseen either by the Orleans Parish School Board (which is responsible for 11 charters and six regular public schools, and taxing authority for all parish schools) or by the Louisiana Board of Elementary and Secondary Education (which is directly responsible for three charters, and also supervises the RSD).
New Orleans is often held up as a model for the rapid expansion of charter schools in other urban districts, based on the argument that charter proliferation since 2005-06 has generated rapid improvements in student outcomes. There are two separate claims potentially embedded in this argument. The first is that the city’s schools perform better than they did pre-Katrina. The second is that NOLA’s charters have outperformed the city’s dwindling supply of traditional public schools since the hurricane.
Although I tend strongly toward the viewpoint that whether charter schools “work” is far less important than why – e.g., specific policies and practices – it might nevertheless be useful to quickly address both of the claims above, given all the attention paid to charters in New Orleans. Read More »
In a recent story, the New York Daily News uses the recently released teacher data reports (TDRs) to “prove” that the city’s charter school teachers are better than their counterparts in regular public schools. The headline announces boldly: “New York City charter schools have a higher percentage of better teachers than public schools” (it has since been changed to: “Charters outshine public schools”).
Taking things even further, within the article itself, the reporters note, “The newly released records indicate charters have higher performing teachers than regular public schools.”
So, not only are they equating words like “better” with value-added scores, but they’re obviously comfortable drawing conclusions about these traits based on the TDR data.
The article is a pretty remarkable display of both poor journalism and poor research. The reporters not only attempted to do something they couldn’t do, but they did it badly to boot. It’s unfortunate to have to waste one’s time addressing this kind of thing, but, no matter your opinion on charter schools, it’s a good example of how not to use the data that the Daily News and other newspapers released to the public. Read More »
Anyone who wants to start a charter school must of course receive permission, and there are laws and policies governing how such permission is granted. In some states, multiple entities (mostly districts) serve as charter authorizers, whereas in others, there is only one or very few. For example, in California there are almost 300 entities that can authorize schools, almost all of them school districts. In contrast, in Arizona, a state board makes all the decisions.
The conventional wisdom among many charter advocates is that the performance of charter schools depends a great deal on the “quality” of authorization policies – how those who grant (or don’t renew) charters make their decisions. This is often the response when supporters are confronted with the fact that charter results are varied but tend to be, on average, no better or worse than those of regular public schools. They argue that some authorization policies are better than others, i.e., bad processes allow some poorly designed schools to start, while failing to close others.
This argument makes sense on the surface, but there seems to be scant evidence on whether and how authorization policies influence charter performance. From that perspective, the authorizer argument might seem a bit like a tautology – i.e., there are bad schools because authorizers allow bad schools to open, and fail to close them. As I am not particularly well-versed in this area, I thought I would look into it a little bit. Read More »