As states continue to finalize their applications for ESEA/NCLB “flexibility” (or “waivers”), controversy has arisen in some places over how these plans set proficiency goals, both overall and for demographic subgroups (see our previous post about the situation in Virginia).
One of the underlying rationales for allowing states to establish new targets (called “annual measurable objectives,” or AMOs) is that the “100 percent” proficiency goals of NCLB were unrealistic. Accordingly, some (but not all) of the new plans have set 2017-18 absolute proficiency goals that are considerably below 100 percent, and/or lower for some subgroups relative to others. This shift has generated pushback from advocates, most recently in Florida, who believe that lowering state targets is tantamount to encouraging or accepting failure.
I acknowledge the central role of goals in any accountability system, but I would like to humbly suggest that this controversy, over where and how states set proficiency targets for 2017-18, may be misguided. There are four reasons why I think this is the case (and one silver lining if it is).
One – the targets are averages that aren’t very meaningful at the aggregate, state level, since they’re enforced at the school and district levels. For example, Florida is shooting for an 83 percent proficiency rate in reading by 2018, an increase of 26 percentage points over its current rate. But the increase required to hit that target will be vastly different for different schools and districts. Some of them – particularly those serving more advantaged students – are already at or above these targets. Others are so far away that they’ll need to increase their rates two or three times over in order to meet the goals. It’s kind of beside the point to debate whether average statewide targets are adequately “ambitious” when, in reality, they are very ambitious for some schools and not at all so for others, depending on where they are now.
Two – pressure is already high, and so the incentives of modestly higher targets may not be particularly powerful. One big idea behind setting higher expectations is to incentivize improvement (e.g., innovation, effort) among teachers and administrators. Yet, particularly in lower-scoring schools and districts, it’s difficult to believe that the pressure to boost test scores can increase much beyond current, already high levels (and, if it can, it probably shouldn’t). Let’s remember that schools have for ten years been at the business end of the test-based accountability gun. The pressure is there; teachers and administrators feel it, every day. It may not be particularly compelling to argue that a school with a current rate of 40 percent is going to respond much differently to a 95 percent target than to an 85 percent one.
Three – the goals matter far less than how they will be achieved. One often hears people discussing the AMOs as if they’re shopping for outcomes they can predetermine. For example, you’ll hear statements like “these AMOs won’t close the achievement gap” or “the targets won’t make all students proficient.” But, as we all know, the targets themselves are ends, not means. It’s the incentives and policy decisions attached to them that do the work, and they must be calibrated with the goals. Thus, any discussion of what the situation will be if the targets are achieved – or how “ambitious” they are – is decontextualized without an accompanying examination of what strategies and resources are being devoted toward the achievement of those goals. If, for example, a state is expecting a huge increase in proficiency from a low-performing subgroup, but isn’t providing adequate resources or policy guidance to attain this goal, then setting high expectations may not work, and might even have negative consequences. To some extent, the AMOs are acting as a distraction from the much more important question of what states are doing to support the means that schools and districts will need to accomplish these ends.
Four – the crude measures themselves haven’t changed, only the goals. The AMOs are still essentially the same measures as used by NCLB/AYP: changes in cross-sectional proficiency rates. As I’ve discussed many times, they are crude, highly error-prone, and often not particularly useful in judging school performance, per se. In fact, most states are in the process of implementing new and better test-based measures – growth models – which, while still imperfect (as is any indicator), at least make some attempt to control for non-school factors (they also use longitudinal data, which is crucial). So long as the AMOs employ the same flawed underlying assumptions as AYP, this whole discussion will proceed based on the questionable assumption that the data are up to the task of measuring performance.
There is, however, one (somewhat ironic) reason why the debate over AMOs might be useful – I think we desperately need to have a discussion about realistic expectations, as well as about the difference between absolute performance and growth in gauging that progress.
For example, in Florida, the target rates for all students are around 50 percent higher than current levels, and 50-100 percent higher for most subgroups (including an increase of 145 percent for students with disabilities). And that’s over a five-year period. For any school, district, or state to increase its proficiency rate by 50-75 percent over five years is spectacular to the point where it should arouse suspicion. And yet Florida’s goals are being criticized as too low.
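The distinction between percentage-point gains and percent increases is easy to lose in this debate. As a rough illustration only – using the Florida reading figures cited above (an 83 percent target, up 26 percentage points, which implies a current rate of roughly 57 percent) – a minimal sketch:

```python
def relative_increase(current, target):
    """Percent increase implied by moving from one proficiency rate to another."""
    return 100.0 * (target - current) / current

# Florida reading, all students, per the figures cited above:
# a 2018 target of 83 percent, up 26 percentage points,
# implies a current rate of roughly 57 percent.
current, target = 57.0, 83.0
print(f"{target - current:.0f} percentage points")                    # 26 percentage points
print(f"{relative_increase(current, target):.0f} percent increase")   # 46 percent increase
```

In other words, a 26-point gain on a base of 57 is a relative increase of nearly 50 percent – which is the sense in which these targets are “around 50 percent higher than current levels.”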
This is mostly because some people, particularly those putting forth the criticism that goals are insufficiently ambitious, are focused on the ends – that is, the fact that the goals are all lower than 100 percent and/or could be viewed as a codification of differential achievement targets for different student groups. Others, in contrast, are focused on the “distance” between the starting and ending points – the fact that current rates, particularly among lower-performing subgroups, are so far below 100 percent that many schools would have to work miracles to achieve that goal over the stated time period.
Thus, while I believe the AMO debate is off the mark, I also hope that this debate will, at long last, help to illustrate the distinction between absolute performance levels (how highly students score) and “growth” (how quickly students improve). The conflation of these two types of measures – and the blurring of the boundary between student and school performance that they entail – has poisoned education policymaking and debate for many years. The current controversy over AMOs is a highly visible instance of this conflation, but it is only the latest.
I am also hopeful that the discussion will help bring about the realization that the targets set by school accountability systems are not supposed to serve as statements about the potential and ability of children. (For example, setting a goal of 75 percent proficiency in five years is not the same thing as saying that only three out of four children can be successful.)
So I think it’s a good thing that we’re finally having a debate in which one “side” is trying to point out that progress is slow and, like it or not, there’s a long way to go in many places. And, in this case, attention to how we propose to get there may just be more important than the definition of where “there” is.
- Matt Di Carlo