The Evidence On Charter Schools

Posted by Matt Di Carlo on November 14, 2011

** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post and here on the Huffington Post

This is the first in a series of three posts about charter schools. Here are the second and third parts.

In our fruitless, deadlocked debate over whether charter schools “work,” charter opponents frequently cite the so-called CREDO study (discussed here), a 2009 analysis of charter school performance in 16 states. The results indicated that overall charter effects on student achievement were negative and statistically significant in both math and reading, but both effect sizes were tiny. Given the scope of the study, it’s perhaps more appropriate to say that it found wide variation in charter performance within and between states – some charters did better, others did worse, and most were no different. On the whole, the aggregate effects, both positive and negative, tended to be rather small.

Recently, charter opponents’ tendency to cite this paper has been called “cherrypicking.” Steve Brill sometimes levels this accusation, as do others. It is supposed to imply that CREDO is an exception – that most of the evidence out there finds positive effects of charter schools relative to comparable regular public schools.

CREDO, while generally well-done given its unprecedented scope, is a bit overused in our public debate – one analysis, no matter how large or good, cannot prove or disprove anything. But anyone who makes the “cherrypicking” claim is clearly unfamiliar with the research. CREDO is only one among a number of well-done, multi- and single-state studies that have reached similar conclusions about overall test-based impacts.

This is important because the endless back-and-forth about whether charter schools “work” – whether there is something about “charterness” that usually leads to fantastic results – has become a massive distraction in our education debates. The evidence makes it abundantly clear that that is not the case, and the goal at this point should be to look at the schools of both types that do well, figure out why, and use that information to improve all schools.

First, however, it’s important to review the larger body of evidence that corroborates CREDO’s findings. For example, this 2009 RAND analysis of charter schools in five major cities and three states found that, in every location, charter effects were either negative or not discernibly different from regular public schools’. As one might expect, charters tended to get better results the more years they’d been in operation.

Similarly, a 2010 Mathematica report presented the findings from a randomized controlled study of 36 charter middle schools in 15 states. The researchers found that the vast majority of students in these charters did no better or worse than their counterparts in regular public schools in both math and reading scores, as well as in virtually all of the 35 other outcomes studied. There was, however, underlying variation – e.g., results were more positive for students who stayed in the charters for multiple years, and for those who started out with lower scores.

A number of state-specific studies buttress the conclusion of wide variation in charter effects. A paper published in 2006 found slightly negative effects of charters in North Carolina (CREDO’s results for North Carolina were mixed, but essentially found no difference large enough to be meaningful). There was a positive charter impact in this paper using Texas data, but it only surfaced after 2-3 years of attendance, and the effect sizes were very small (this Texas analysis found the same for elementary and middle but not high schools, while CREDO’s evaluation found small negative effects).

A published analysis of charters in Florida found negative effects during these schools’ first five years of operation, followed by comparable performance thereafter (the reading impact was discernibly higher, but the difference was small; it’s also worth noting that CREDO’s Florida analysis found a small positive effect on charter students after three years of attendance), while a 2005 RAND report on California charters revealed no substantial difference in overall performance (also see here, here and here). Finally, a 2006 study of Idaho schools found moderate positive charter effects, while students attending Arizona charters for 2-3 years had small relative gains, according to a 2001 Goldwater Institute analysis (CREDO found the opposite).

In an attempt to “summarize” the findings of these and a few other studies not discussed above, the latest meta-analysis from the Center for Reinventing Public Education (CRPE) found that charter and regular public school effects were no different in middle school reading and high school reading and math. There were statistically discernible positive impacts in middle school math and elementary school math and reading, but the effect sizes were very modest. The primary conclusion, once again, was that “charters under-perform traditional public schools in some locations, grades, and subjects, and out-perform traditional public schools in other locations, grades, and subjects.” This lines up with prior reviews of the literature.

Finally, just last week, Mathematica and CRPE released a report presenting a large, thorough analysis of charter management organizations (CMOs). In order to be included in the study, CMOs had to be well-established and run multiple schools, which means the schools they run are probably better than the average charter in terms of management and resources. The overall results (middle schools only) were disappointing – even after three years of attendance, there was no significant difference between CMO and comparable regular public school students’ performance in math, reading, science or social studies. Some CMOs’ schools did quite well, but most were no different or worse in terms of their impact.

Unlike some other interventions that dominate today’s education policy debate, most notably test-based teacher evaluations, there is actually a somewhat well-developed literature on charter schools. There are studies almost everywhere these schools exist in sufficient numbers, though it is important to point out that the bulk of this evidence consists of analyses of test scores, which is of course an incomplete picture of “real” student learning (for example, a couple of studies have found positive charter effects on the likelihood of graduating). It also limits many of these evaluations to tested grades.

In general, however, the test-based performance of both charter and regular public schools varies widely. When there are differences in relative effects, positive or negative, they tend to be modest at best. There are somewhat consistent results suggesting charters do a bit better with lower-performing students and other subgroups, and that charters improve the longer they operate. But, on the whole, charters confront the same challenges as traditional district schools in meeting students’ diverse needs and boosting performance. There is no test-based evidence for supporting either form of governance solely for its own sake.

So, if there is any “cherrypicking” going on, it is when charter supporters hold up the few studies that find substantial positive effects across a group of schools in the same location. This includes, most notably, very well-done experimental evaluations of charter schools in New York City and Boston (as well as a couple of evaluations of schools run by KIPP, which are dispersed throughout the nation, and a lottery study of a handful of charters in Chicago).**

Ironically, though, it is in these exceptions where the true contribution of charter schools can be found, as they provide the opportunity to start addressing the more important question of why charters produce consistent results in a few places. Similarly, buried in the reports discussed above, and often ignored in our debate, are some hints about which specific policies and practices help explain the wide variation in charter effects. That’s the kind of “cherrypicking” that we need – to help all schools.

I will discuss this research – and what it might mean – in a subsequent post.

- Matt Di Carlo

*****

** It’s true that three of these analyses are among the tiny handful that use random assignment, but there’s little basis for thinking that non-experimental methods would “favor” regular public schools (many would argue the opposite), and non-experimental analyses using data from these locations have reached roughly the same conclusions (e.g., CREDO’s analysis of New York City and this paper on Massachusetts charters).


14 Comments posted so far

  • I still think it’s hiding the ball on the Mathematica 2010 nationwide study to emphasize the no-overall-differences finding, when the subgroup finding is so much more interesting and relevant:

    ”We found that, among the higher-income group (those not certified for free or reduced-price meals), charter school admission had a negative and statistically significant effect on Year 1 mathematics scores and Year 2 reading and mathematics scores. . . . Among the lower-income group, charter school admission had a positive and significant impact on Year 2 mathematics scores. Moreover, the difference in impacts between the higher- and lower-income groups was statistically significant for all outcomes except Year 1 reading scores. The findings suggest that the study charter schools had positive effects in mathematics for more economically disadvantaged students and negative effects in both reading and mathematics for more economically advantaged students.”

    In other words, charter schools helped poor kids but harmed rich kids. Not that harming anyone’s achievement is a good thing, but one can hardly imagine a better way to close the achievement gap.

    So for people who are interested in the achievement gap, it would be spectacularly wrongheaded to think of the Mathematica study as anything but highly supportive evidence.

    Comment by Stuart Buck
    November 14, 2011 at 11:10 AM
  • Stuart,

    I’m not sure how I’m “hiding” anything, given the fact that I mentioned explicitly the findings by prior achievement and duration of attendance.

    MD

    Comment by Matthew Di Carlo
    November 14, 2011 at 11:13 AM
  • Yes, but it deserves more emphasis. From the perspective of someone who keenly cares about the achievement gap, that finding from Mathematica should be the headline of a blog post, not something that is barely mentioned in passing in a single phrase in the middle of a dozen paragraphs devoted to arguing that charters overall make little or no difference.

    Comment by Stuart Buck
    November 14, 2011 at 11:35 AM
  • Asking whether charter schools are any better than traditional publics is like asking whether a “motor boat” will beat a barge in a race across the Atlantic. The correct answer? “It depends.”

    To be clear, a “charter” refers to nothing other than a governance structure. The board of directors of a state-charter typically does not answer to a local BOE. That is the extent of the structural differences between charters and traditionals. Sounds like a small difference, right?

    Well, as a result of that small governance difference, the charter is not required to hire administrators from the existing system (there goes patronage politics and typical BOE cronyism). It is also not required to start out with a collective bargaining contract (out with sclerotic, adult-centered negotiations over things like bus duty, extended days, and summer school). So a seemingly small change in governance and accountability can make all the difference in the world, or it might mean no change in performance.

    So back to the boat analogy. If it’s a smallish motorboat with one 10 hp outboard engine, an untrained crew, and not enough fuel, we can expect that the barge, however sluggish, will belch its way to victory. The second-place motorboat may never even make it into the harbor.

    If, however, the motorboat has two 200 hp outboard engines, an excellent skipper, a trained and motivated crew, and plenty of fuel, well, it wins the race by far.

    The barge will always be the barge. It wasn’t built for speed. It is ludicrous to suggest that it be re-built for speed. It is also ludicrous to conclude that if 50% of motorboats don’t run properly then NO motor boat is any good.

    The fact that 17% of charters outperform their host districts is very promising and is the successful conclusion of our 20-year charter school experiment. Next steps? Close the bad charters down. And replicate the great ones as rapidly as possible.

    Comment by Jeff Klaus
    November 14, 2011 at 3:50 PM
  • Uh, Jeff…? A motorboat race isn’t a good metaphor for education. How many passengers does your motorboat have to toss overboard to reach the speed it was “built for”?

    Schools are built to educate the children in them.

    The notion that “patronage politics” is out the window for charters is even further from reality. Bloomberg has built an empire of patronage with his charter awards in New York City, for instance.

    The Grassroots Education Movement has released a powerful documentary film exploring the real impact of charters on education in NYC. People are showing it all over the country, and you can get a copy and show it yourself to any honest groups of people who want to understand how charters can also work to damage communities in their competitive “races” to the public trough. Just click, “order DVD”
    http://www.waitingforsupermantruth.org/

    Matt, I Googled around but I couldn’t find anyplace where you’ve discussed the film, or GEM. You’ve seen it, haven’t you? You should, because you did take a leading role in confronting the false narrative of the original Superman propaganda film. I especially hope you will discuss it up there, under your Shanker Institute byline, where such discussions belong (don’t they?)

    Also, take a look at Stan Karp’s recent post on Common Dreams:
    http://www.commondreams.org/view/2011/10/25-1

    I’ll quote at length from hopeful sign #3:

    “The two large teacher unions, the AFT & the NEA, have had mostly weak and defensive responses to the policy attacks of the past few years. But they are being pressed by both their members and by reality to develop more effective responses… Years of failing to effectively mobilize their membership or develop effective responses to school failure in poor communities have taken a big toll on the ability of our unions to lead the charge in defending public education. But their role remains crucial and activists have begun to rebuild that power on the basis of new politics and new coalitions with the communities schools serve.”

    There’s a conversation going on about the heart and soul of the union movement, and the Shanker Blog should be part of it. You should be part of it, and in fact
    you already are. Step forward, and talk about it.

    Comment by Mary Porter
    November 14, 2011 at 5:30 PM
  • By the way, Stuart, as your quoted passage notes, there was no effect on reading scores of low-income students, so the statement that these charters “helped poor kids” is really only half true.

    The negative effects for non-poor students were, in contrast, significant and meaningful in both subjects (so you’re right – this might still close the achievement gap in reading).

    It’s also important to keep in mind that these are relative and not absolute effects.

    Comment by Matthew Di Carlo
    November 15, 2011 at 12:06 AM
  • So what? Oodles of educational interventions affect math scores more than reading.

    Comment by Stuart Buck
    November 15, 2011 at 12:48 PM
  • Matthew;

    I’m not sure what you mean by “very well done” studies in New York. For one thing, test scores in New York have been manipulated to make the schools (all schools) look better-performing than they really are. Does your statement account for that?

    Did you mean the Hoxby studies? Her work has been chewed over and spit out by the very CREDO you have mentioned. This constituted a “peer review” of the sort Hoxby rarely subjects her work to. CREDO found Hoxby’s methodology to be wanting in several respects. It stirred up quite a little controversy in the so, so respectable halls of Stanford.

    It should be mentioned, too, that CREDO is not just at Stanford. CREDO is part of the right-leaning (totally bent over, actually) Hoover Institution. The computers must have been melting down the day the CREDO charter study was released. The right-wing, pro-charter/voucher foundations that support Hoover could not have been pleased with those conclusions, and some nasty emails must have been sent.

    Comment by Gary Ravani
    November 15, 2011 at 6:34 PM
  • The charge of data cherry-picking is valid against those, like Ravitch, who quote the CREDO study as proof that there is no difference overall between charters and non-charters without going into a tad more detail.

    Noting that CREDO finds charters help poor kids is not an itty-bitty, minor little nit. Helping the poor is the big-deal purpose of charter schools. A little more reading finds that it is the “no-excuses” charter schools like KIPP that have the greatest positive effect. Those schools basically focus on getting poor kids to adopt the learning habits of middle class kids – do your homework, don’t cut class, work in class instead of fooling around, and show respect for yourself and others.

    The charters for the middle class kids tend to be the “holistic, portfolio-assessment, child-centered creativity” sort of things, so beloved by the squishier anti-reform folks. Noting that they don’t raise test scores doesn’t mean much because that is not why they exist. They are the reaction of some of the middle class to multiple-choice driven assessment. Apples and oranges.

    You can’t responsibly discuss the validity of charters without mentioning the dramatic success of KIPP in New Orleans, which raised the scores of New Orleans Public Schools (90% poor and black) by about 40%, lifting New Orleans from 70th to 69th in Louisiana. Charters there serve 71% of public school students, a share that has risen from 56% four years ago. There is a detailed analysis of them at:

    http://educationnext.org/new-schools-in-new-orleans/

    New Orleans is very important because it shows that charters do not rely on cream-skimming the select few who would do better anyway, but can work with the entire spectrum that the public schools serve. Nor do they serve relatively small numbers of behavior problem and special needs kids. In New Orleans, they actually serve a higher proportion of those kids than the regular public schools. You might want to read that analysis before further discussion of charter schools.

    If the regular public schools would quit putting roadblocks in their way, charter schools would take the poor minority kids that public schools currently fail and help those kids make an improvement in their lives.

    Comment by Michael G.
    November 16, 2011 at 3:12 PM
  • I really appreciate the thoughtful analysis of the charter research. I also appreciate your focus on trying to identify the underlying reasons for school success, regardless of governance model. It is no doubt important to look at actual practice, but the question of governance model still needs to be addressed. Some states are still grappling with whether to adopt charter laws and states with charter laws may want to improve them (or get rid of them). Test-based evidence is important and there are still questions in this area. There are also other factors to consider such as diversity/segregation, how well all students are served (especially special needs and ELL students), other student outcomes, and how well the governance models promote democratic involvement in schools and society.

    I would love to hear a broader discussion from this site on these other questions.

    Comment by Demian
    December 22, 2011 at 4:50 PM
  • One thing I don’t see mentioned much is how much per pupil spending correlates with charter performance. The recent Mathematica study mentioned above showed wide variation in both test scores AND per pupil spending (from just over $5K to just over $20K per pupil), but surprisingly I don’t see any analysis done on this in the study.

    Comment by Demian
    December 22, 2011 at 5:01 PM

