One of the purely presentational aspects that separates the new “generation” of CREDO charter school analyses from the old is that the more recent reports convert estimated effect sizes from standard deviations into a “days of learning” metric. You can find similar approaches in other reports and papers as well.
I am very supportive of efforts to make interpretation easier for those who aren’t accustomed to thinking in terms of standard deviations, so I like the basic motivation behind this. I do have concerns about this particular conversion — specifically, that it overstates things a bit — but I don’t want to get into that issue here. If we take CREDO’s “days of learning” conversion at face value, my primary, far simpler reaction to hearing that a given charter sector’s impact is equivalent to a given number of additional “days of learning” is to wonder: Does that charter sector actually offer additional days of learning, in the form of longer school days and/or years?
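For a rough sense of what such a conversion involves, here is a minimal sketch. The parameters are illustrative assumptions, not CREDO’s exact figures: it assumes one school year of growth corresponds to about 0.25 standard deviations spread over a 180-day school year.

```python
def effect_to_days(effect_sd, year_in_sd=0.25, school_year_days=180):
    """Convert a test-based effect size (in standard deviations) into
    'days of learning', assuming one school year of growth equals
    year_in_sd standard deviations over school_year_days school days.
    These parameters are illustrative, not CREDO's actual values."""
    return effect_sd * (school_year_days / year_in_sd)

# Under these assumed parameters, a 0.01 SD effect works out to
# about 7 additional "days of learning."
print(round(effect_to_days(0.01), 1))
```

The point of the sketch is simply that the “days” figure is a linear rescaling of the effect size; the conversion factor, not the underlying estimate, does all the interpretive work.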
This matters to me because I (and many others) have long advocated moving past the charter versus regular public school “horserace” and trying to figure out why some charters seem to do very well and others do not. Additional time is one of the more compelling observable possibilities, and while the two metrics aren’t perfectly comparable, it fits nicely with the “days of learning” expression of effect sizes. Take New York City charter schools, for example. Read More »
One of the (many) education reform proposals that has received national attention over the past few years is “extended learning time” – that is, expanding the day and/or year to give students more time in school.
Although how schools use the time they have with students is, of course, no less important than how much time they have with them, the proposal to expand the school day/year may have merit, particularly for schools and districts serving larger proportions of students who need to catch up. One of the motivations for the extended time push is the (correct) observation that the charter school models that have proven effective (at least by the standard of test score gains) utilize extended time.
On the one hand, this is a good example of what many (including myself) have long advocated – that the handful of successful charter school models can potentially provide a great deal of guidance for all schools, regardless of their governance structure. On the other hand, it is also important to bear in mind that many of the high-profile charter chains that receive national attention don’t just expand their school years by a few days or even a few weeks, as has been proposed in several states. In many cases, they extend it by months. Read More »
One of the most common claims against charter schools is that they “push out” special education students. The basic idea is that charter operators, who are obsessed with being able to show strong test results and thus bolster their reputations and enrollment, subtly or not-so-subtly “counsel out” students with special education plans (or somehow discourage their enrollment).
This is, of course, a serious issue, one that is addressed directly in a recent report from the Center for Reinventing Public Education (CRPE), which presents an analysis of data from a sample of New York City charter elementary schools (and compares them to regular public schools in the city). It is important to note that many of the primary results of this study, including those focused on the “pushing out” issue, cannot be used to draw any conclusions about charters across the nation. There were only 25 NYC charters included in that (lottery) analysis, all of them elementary schools, and these were not necessarily representative of the charter sector in the city, to say nothing of charters nationwide.
That said, the report, written by Marcus Winters, finds, among other things, that charters enroll a smaller proportion of special education students than regular public schools (as is the case elsewhere), and that this is primarily because these students are less likely to apply for entrance to charters (in this case, in kindergarten) than their regular education peers. He also presents results suggesting that this gap actually grows in later grades, mostly because of charters being less likely to classify students as having special needs, and more likely to reclassify them as not having special needs once they have been put on a special education plan (whether or not these classifications and declassifications are appropriate is of course not examined in this report). Read More »
In the three most discussed and controversial areas of market-based education reform – performance pay, charter schools and the use of value-added estimates in teacher evaluations – 2013 saw the release of a couple of truly landmark reports, in addition to the normal flow of strong work coming from the education research community (see our reviews from 2010, 2011 and 2012).*
In one sense, this building body of evidence is critical and even comforting, given not only the rapid expansion of charter schools, but also and especially the ongoing design and implementation of new teacher evaluations (which, in many cases, include performance-based pay incentives). In another sense, however, there is good cause for anxiety. Although one must try policies before knowing how they work, the sheer speed of policy change in the U.S. right now means that policymakers are making important decisions on the fly, and there is a great deal of uncertainty as to how this will all turn out.
Moreover, while 2013 was without question an important year for research in these three areas, it also illustrated an obvious point: Proper interpretation and application of findings is perhaps just as important as the work itself. Read More »
Having taken a look at several states’ school rating systems (see our posts on the systems in IN, OH, FL and CO), I thought it might be interesting to examine a system used by a group of charter schools – starting with the system used by charters in the District of Columbia. This is the third year the DC charter school board has released the ratings.
For elementary and middle schools (upon which I will focus in this post*), the DC Performance Management Framework (PMF) is a weighted index composed of: 40 percent absolute performance; 40 percent growth; and 20 percent what they call “leading indicators” (a more detailed description of this formula can be found in the second footnote).** The index scores are then sorted into one of three tiers, with Tier 1 being the highest, and Tier 3 the lowest.
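As a sketch of how a weighted index of this kind works, here is a minimal version of the 40/40/20 formula. The component scales and tier cutoffs below are hypothetical, chosen purely for illustration; the actual PMF component definitions and cutoffs are set by the DC charter school board.

```python
def pmf_index(absolute, growth, leading):
    """Weighted PMF-style index: 40% absolute performance, 40% growth,
    20% 'leading indicators' (each component scored 0-100 here)."""
    return 0.40 * absolute + 0.40 * growth + 0.20 * leading

def tier(index_score, tier1_min=65.0, tier2_min=35.0):
    """Sort an index score into Tier 1 (highest) through Tier 3 (lowest).
    The cutoffs are hypothetical placeholders, not the board's values."""
    if index_score >= tier1_min:
        return 1
    if index_score >= tier2_min:
        return 2
    return 3

score = pmf_index(absolute=70, growth=80, leading=50)
print(score, tier(score))
```

Note the design implication: because absolute performance carries only 40 percent of the weight, a school with middling absolute scores can still reach the top tier through strong growth and leading-indicator results.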
So, these particular ratings weight absolute performance – i.e., how highly students score on tests – a bit less heavily than do most states that have devised their own systems, and they grant slightly more importance to growth and alternative measures. We might therefore expect to find a somewhat weaker relationship between PMF scores and student characteristics such as free/reduced price lunch eligibility (FRL), as these charters are judged less predominantly on the students they serve. Let’s take a quick look. Read More »
One of the (many) factors that might help explain — or at least be associated with — the wide variation in charter schools’ test-based impacts is market share. That is, the proportion of students that charters serve in a given state or district. There are a few reasons why market share might matter.
For example, charter schools compete for limited resources, including private donations and labor (teachers), and fewer competitors means more resources. In addition, there are a handful of models that seem to get fairly consistent results no matter where they operate, and authorizers who are selective and only allow “proven” operators to open up shop might increase quality (at the expense of quantity). There may be a benefit to very slow, selective expansion (and smaller market share is a symptom of that deliberate approach).
One way to get a sense of whether market share might matter is simply to check the association between measured charter performance and coverage. It might therefore be interesting, albeit exceedingly simple, to use the recently released CREDO analysis for this purpose, since it provides state-level estimates based on a common analytical approach (though different tests, etc.). Read More »
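That simple check amounts to correlating state-level charter effect estimates with market share. A minimal sketch follows; the numbers are hypothetical placeholders for illustration, not CREDO’s actual estimates.

```python
# Hypothetical state-level data (illustration only, not CREDO's figures):
# charter market share (percent of students) and estimated charter
# effect size (in standard deviations).
market_share = [2.0, 5.5, 9.0, 13.0, 20.0]
effect_size = [0.04, 0.01, -0.02, 0.03, -0.01]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(round(pearson_r(market_share, effect_size), 3))
```

With only a few dozen states, of course, any such correlation is suggestive at best; it cannot distinguish market share from the many state-level factors that travel with it.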
A new report from CREDO on charter schools’ test-based performance received a great deal of attention, and rightfully so – it includes 27 states, which together serve 95 percent of the nation’s charter students.
The analysis as a whole, like its predecessor, is a great contribution. Its sheer scope is new and important, as are a few specific parts (e.g., the examination of trends). And most of the findings serve to reaffirm the core conclusions of the existing research on charters’ estimated test-based effects. Such an interpretation may not be particularly satisfying to charter supporters and opponents looking for new ammunition, but the fact that this national analysis will not settle anything in the contentious debate about charter schools once again suggests the need to start asking a different set of questions.
Along these lines, as well as others, there are a few points worth discussing quickly. Read More »
If you ask a charter school supporter why charter schools tend to exhibit inconsistency in their measured test-based impact, there’s a good chance they’ll talk about authorizing. That is, they will tell you that the quality of authorization laws and practices — the guidelines by which charters are granted, renewed and revoked — drives much and perhaps even most of the variation in the performance of charters relative to comparable district schools, and that strengthening these laws is the key to improving performance.
Accordingly, a recently announced campaign by the National Association of Charter School Authorizers aims to step up the rate at which charter authorizers close “low-performing schools,” and to make authorizers more selective in allowing new schools to open. In addition, a recent CREDO study found (among other things) that charter middle and high schools’ performance during their first few years is more predictive of future performance than many people may have thought, thus lending support to the idea of opening and closing schools as an improvement strategy.
Below are a few quick points about the authorization issue, which lead up to a question about the relationship between selectivity and charter sector growth. Read More »
A recent Mathematica report on the performance of KIPP charter schools expands and elaborates on their prior analyses of these schools’ (estimated) effects on average test scores and other outcomes (also here). These findings are important and interesting, and were covered extensively elsewhere.
As is usually the case with KIPP, the results stirred the full spectrum of reactions. To over-generalize a bit, critics sometimes seem unwilling to acknowledge that KIPP’s results are real no matter how well-documented they might be, whereas some proponents are quick to use KIPP to proclaim a triumph for the charter movement, one that can justify the expansion of charter sectors nationwide.
Despite all this controversy, there may be more opportunity for agreement here than meets the eye. So, let’s try to lay out a few reasonable conclusions and see if we might find some of that common ground. Read More »
A recent article in Reuters, one that received a great deal of attention, sheds light on practices that some charter schools are using essentially to screen students who apply for admission. These policies include requiring long and difficult applications, family interviews, parental contracts, and even demonstrations of past academic performance.
It remains unclear how common these practices might be in the grand scheme of things, but regardless of how frequently they occur, most of these tactics are terrible, perhaps even illegal, and should be stopped. At the same time, there are two side points to keep in mind when you hear about charges such as these, as well as the accusations (and denials) of charter exclusion and segregation that tend to follow.
The first is that some degree of (self-)sorting and segregation of students by abilities, interests and other characteristics is part of the deal in a choice-based system. The second point is that screening and segregation are most certainly not unique to charter/private schools, and one primary reason is that there is, in a sense, already a lot of choice among regular public schools. Read More »
Among the more persistent arguments one hears in the debate over charter schools is that the “best evidence” shows charters are more effective. I have discussed this issue before (as have others), but it seems to come up from time to time, even in mainstream media coverage.
The basic point is that we should essentially dismiss – or at least regard with extreme skepticism – the two dozen or so high-quality “non-experimental” studies, which, on the whole, show modest or no differences in test-based effectiveness between charters and comparable regular public schools. In contrast, “randomized controlled trials” (RCTs), which exploit the random assignment of admission lotteries to control for differences between students, tend to yield positive results. Since, so the story goes, the “gold standard” research shows that charters are superior, we should go with that conclusion.
RCTs, though not without their own limitations, are without question powerful, and there is plenty of subpar charter research out there. That said, however, the “best evidence” argument is not particularly compelling (and it’s also a distraction from the positive shift away from obsessing about whether charters do or don’t work toward an examination of why). A full discussion of the methodological issues in the charter school literature would be long and burdensome, but it might be helpful to lay out three very basic points to bear in mind when you hear this argument. Read More »
** Reprinted here in the Washington Post
2012 was another busy year for market-based education reform. The rapid proliferation of charter schools continued, while states and districts went about the hard work of designing and implementing new teacher evaluations that incorporate student testing data, and, in many cases, performance pay programs to go along with them.
As in previous years (see our 2010 and 2011 reviews), much of the research on these three “core areas” – merit pay, charter schools, and the use of value-added and other growth models in teacher evaluations – appeared rather responsive to the direction of policy making, but could not always keep up with its breakneck pace.*
Some lag time is inevitable, not only because good research takes time, but also because there’s a degree to which you have to try things before you can see how they work. Nevertheless, what we don’t know about these policies far exceeds what we know, and, given the sheer scope and rapid pace of reforms over the past few years, one cannot help but get the occasional “flying blind” feeling. Moreover, as is often the case, the only unsupportable position is certainty. Read More »
** Reprinted here in the Washington Post
Charter school “caps” are state-imposed limits on the size or growth of charter sectors. Currently, around 25 states set caps on schools or enrollment, with wide variation in terms of specifics: Some states simply set a cap on the number of schools (or charters in force); others limit annual growth; and still others specify caps on both growth and size (there are also a few places that cap proportional spending, coverage by individual operators and other dimensions).
A great many charter school advocates strongly support the lifting of these restrictions, arguing that they prevent the opening of high-quality schools. This is, of course, an oversimplification at best, as lifting caps could just as easily lead to the proliferation of unsuccessful charters. If the charter school experiment has taught us anything, it’s that these schools are anything but sure bets, and that even includes the tiny handful of highly successful models such as KIPP.*
Overall, the only direct impact of charter caps is to limit the potential size or growth of a state’s charter school sector. Assessing their implications for quality, on the other hand, is complicated, and there is every reason to believe that the impact of caps, and thus the basis of arguments for lifting them, varies by context – including the size and quality of states’ current sectors, as well as the criteria by which low-performing charters are closed and new ones are authorized. Read More »
The issue of student attrition at KIPP and charter schools is never far beneath the surface of our education debates. KIPP’s critics claim that these schools exclude or “counsel out” students who aren’t doing well, thus inflating student test results. Supporters contend that KIPP schools are open admission with enrollment typically determined by lottery, and they usually cite a 2010 Mathematica report finding strong results among students in most (but not all) of 22 KIPP middle schools, as well as attrition rates that were no higher, on average, than at the regular public schools to which they are compared.*
As I have written elsewhere, I am persuaded that student attrition cannot explain away the gains that Mathematica found in the schools they examined (though I do think peer effects of attrition without replacement may play some role, which is a very common issue in research of this type).
But, beyond this back-and-forth over the churn in these schools and whether it affected the results of this analysis, there’s also a confusion of sorts when it comes to discussions of student attrition in charters, whether KIPP or in general. Supporters of school choice often respond to “attrition accusations” by trying to deny or downplay its frequency or importance. This, it seems to me, ignores an obvious point: Within-district attrition – students changing schools, often based on “fit” or performance – is a defining feature of school choice, not an aberration. Read More »
There have now been several stories in the New York news media about New York City’s charter schools’ “gains” on this year’s state tests (see here, here, here, here and here). All of them trumpeted the 3-7 percentage point increase in proficiency among the city’s charter students, compared with the 2-3 point increase among their counterparts in regular public schools. The consensus: Charters performed fantastically well this year.
In fact, the NY Daily News asserted that the “clear lesson” from the data is that “public school administrators must gain the flexibility enjoyed by charter leaders,” and “adopt [their] single-minded focus on achievement.” For his part, Mayor Michael Bloomberg claimed that the scores are evidence that the city should expand its charter sector.
All of this reflects a fundamental misunderstanding of how to interpret testing data, one that is frankly a little frightening to find among experienced reporters and elected officials.
Read More »