I write often (probably too often) about the difference between measures of school performance and student performance, usually in the context of school rating systems. The basic idea is that schools cannot control the students they serve, and so absolute performance measures, such as proficiency rates, tell you more about the students a school or district serves than about how effective it is in improving outcomes (which is better captured by growth-oriented indicators).
Recently, I was asked a simple question: Can a school with very high absolute performance levels ever actually be considered a “bad school?”
This is a good question.
For one thing, of course, tests and graduation rates are imperfect measures (especially given how they’re currently used), and so they may be missing a lot. This is certainly the case, but let’s put it aside for the purposes of this discussion.
Say we have an elementary school, located in an affluent neighborhood, whose students score very highly, on average. These kids entered the school way ahead of their peers in poorer areas. During their 5-6 years at this school, each cohort of students maintains a very high performance level, but it’s mostly because of where they started out – they actually make progress that is far lower than that of similar students in comparable schools. Due to the design of most states’ rating systems, this school would probably receive a fairly high grade, or would at least avoid receiving a low grade, because the systems tend to weight absolute proficiency measures quite heavily.
Is this wrong? Is this a “low-performing school?”
By the growth-oriented test-based metrics commonly employed in education policy today, including those we use for teachers, yes, it is. These students are “losing ground” relative to similar peers elsewhere (though keep in mind that the estimates from many growth models are relative, not absolute). Sure, virtually all of them will graduate and most will eventually attend a four-year college, but that may be largely thanks to their backgrounds – i.e., these outcomes will come about despite, rather than because of, their school’s effectiveness.
That said, yes – I suppose I would call this a “low-performing school.”
But I would offer one very important clarification here – this may be a “low-performing school” by a test-based, growth-oriented standard, but that does not necessarily mean it should be subject to costly interventions, whether high stakes (e.g., closure or turnaround) or lower stakes (e.g., additional funding). This, put simply, is because resources are limited, and they are, in my view, best allocated to schools serving students who are most in need of help. This is not the case in our hypothetical school, where students score highly but don’t make progress.
The school should, however, be formally “encouraged” to improve, perhaps via a low-cost plan under which its performance is subject to special monitoring and the school receives guidance on possible strategies to boost performance.
(Side note: Depending on the availability of resources and the performance of other schools, there may be cases in which schools with strong absolute performance do so poorly on growth metrics that more drastic interventions could be appropriate. But I’m not sure where that line should be drawn.)
One final note that bears mentioning here is that, in most states’ rating systems, a school with high absolute performance and low growth (e.g., our hypothetical school) has much less risk of receiving a low rating than a school in the opposite situation – low absolute performance and strong growth. Again, this reflects the design of these systems, in which absolute performance plays a more dominant role than growth in determining a school’s rating.
- Matt Di Carlo