Some of the best research out there is a product not of sophisticated statistical methods or complex research designs, but rather of painstaking manual data collection. A good example is a recent paper by Morgan Polikoff, Andrew McEachin, Stephani Wrabel and Matthew Duque, which was published in the latest issue of the journal Educational Researcher.
Polikoff and his colleagues performed a task that makes most of the rest of us cringe: They read and coded every one of the over 40 state applications for ESEA flexibility, or “waivers.” The end product is a simple but highly useful presentation of the measures states are using to identify “priority” schools (the lowest-performing schools) and “focus” schools (those “contributing to achievement gaps”). The results are disturbing to anyone who believes that strong measurement should guide educational decisions.
There’s plenty of great data and discussion in the paper, but consider just one central finding: How states are identifying priority (i.e., lowest-performing) schools at the elementary level (the measures are of course a bit different for secondary schools).
There are 42 states with accepted waiver applications. Out of these 42, 17 exclusively use some version of proficiency or other cutpoint-based rates to identify priority schools. Another 23 employ a composite index consisting of different measures, but in most of these indexes, proficiency still plays the dominant role. Finally, the remaining two states identify priority schools the same way they identify schools eligible for School Improvement Grants (SIG).
So, put simply, the vast majority of states that have had their waiver applications accepted are still relying predominantly or completely on absolute performance, most commonly proficiency rates, to identify low-performing schools. In other words, for reasons that I won’t repeat but have discussed here many times, most states are still choosing their lowest-performing schools based largely on a measure that should not be used as an indicator of school performance.
(Two quick side notes: First, the results are similar for how states are identifying focus schools; second, regardless of how they choose focus and priority schools, all states must still select “annual measurable objectives,” which in almost all cases consist entirely of proficiency targets.)
In my opinion, this is a central problem with the ESEA waivers – in most (but not all) cases, they are perpetuating the same deeply flawed measures that have poisoned school accountability policy and debate for over a decade.
Granted, some states’ waivers offer glimmers of hope: A bunch are moving in a different direction, most notably by adopting growth-based measures that, however imperfectly, actually make some attempt to isolate schools’ contribution to student performance (ironically, by controlling for the very same absolute performance measures that most states are using directly). In addition, these alternative measures require considerable infrastructure (e.g., data collection and analysis) and up-front investment, and we shouldn’t expect the transition to occur overnight. Finally, of course, accountability measures need not be perfect to spur productive change in schools, and there is a role for absolute performance measures in these systems, depending on how they are used.
That said, the public debate about the waivers is almost entirely focused on other topics, such as whether the targets are sufficiently “rigorous,” the choice of subgroups, etc. These are, without question, important issues. But there has been very little acknowledgment of the more basic fact that the manner in which we are measuring school performance, regardless of the targets set or the student subgroups to which the measures are applied, is fundamentally invalid for that purpose. Until this changes, the school accountability movement in this country will never be able to achieve its intended goals.
- Matt Di Carlo