A Quick Look At "Best High School" Rankings

** Reprinted in the Washington Post

Every year, a few major media outlets publish high school rankings. Most recently, Newsweek (in partnership with The Daily Beast) issued its annual list of the “nation’s best high schools.” Their general approach to this task seems quite defensible: to find the high schools that “best prepare students for college.”

The rankings are calculated as a weighted combination of six measures: graduation rate (25 percent); college acceptance rate (25); AP/IB/AICE tests taken per student (25); average SAT/ACT score (10); average AP/IB/AICE score (10); and the percentage of students enrolled in at least one AP/IB/AICE course (5).
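To make the arithmetic concrete, here is a minimal sketch of how such a weighted composite might be computed. The weights are Newsweek's published ones; the z-score standardization is an assumption on my part (the footnote at the end of this post notes that the measures are standardized before combining, but the exact method isn't described), and the field names are hypothetical.

```python
from statistics import mean, stdev

# Newsweek's published weights for the six measures.
# The measure keys here are hypothetical labels, not Newsweek's own.
WEIGHTS = {
    "grad_rate": 0.25,
    "college_accept_rate": 0.25,
    "ap_ib_aice_tests_per_student": 0.25,
    "avg_sat_act": 0.10,
    "avg_ap_ib_aice_score": 0.10,
    "pct_in_ap_ib_aice": 0.05,
}

def composite_scores(schools):
    """schools: list of dicts, each containing the six measure keys above.
    Returns one weighted composite score per school."""
    # Standardize each measure across schools (z-scores are assumed here),
    # so measures on different scales can be combined.
    z = {}
    for m in WEIGHTS:
        values = [s[m] for s in schools]
        mu, sigma = mean(values), stdev(values)
        z[m] = [(v - mu) / sigma for v in values]
    # Weighted sum of the standardized measures for each school.
    return [
        sum(WEIGHTS[m] * z[m][i] for m in WEIGHTS)
        for i in range(len(schools))
    ]
```

Ranking is then just a matter of sorting schools by this score in descending order.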

Needless to say, even the most rigorous, sophisticated measures of school performance will be imperfect at best, and the methods behind these lists have been subject to endless scrutiny. However, let's take a quick look at three potentially problematic issues with the Newsweek rankings, how the results might be interpreted, and how the system compares with that published by U.S. News and World Report.

Self-reported data. The data for Newsweek's rankings come from a survey, in which high schools report their results on the six measures above (as well as, presumably, some other basic information, such as enrollment). Self-reported data almost always entail comparability and consistency issues. The methodology document notes that the submissions were “screened to ensure that the data met several parameters of logic and consistency,” and that anomalies were identified and the schools contacted for verification. So, this is probably not a big deal, but it's worth mentioning briefly.

Partial, self-selected sample. Newsweek sent their survey to roughly 5,000 schools (I couldn't find any description of how they were chosen). Around 2,500 responded, and these are the schools included in the rankings. There are over 20,000 public high schools in the U.S. (any one of them, surveyed or not, can submit its data, but it's not clear how many of the 2,500 were unsolicited). It's therefore more than a little strange to call this list "the nation's best high schools" when it seems to have considered only a small portion of the nation's high schools, and a non-random portion at that: the schools that were surveyed and/or responded may differ in "performance" from those that weren't surveyed or didn't respond.

Inappropriate interpretation of measures. The six indicators that comprise the rankings are all measures of student performance, not school performance (a distinction we’ve discussed here many times). Schools vary widely in the students they enroll. Every year, some high schools enroll incoming cohorts that are already far ahead, whereas other schools must continually catch their students up. To be clear, schools can have a substantial impact on these outcomes, but the raw statistics, such as graduation and college acceptance rates, are predominantly a function of student background and prior schooling inputs (e.g., elementary/middle school effectiveness). Thus, the Newsweek rankings tell you far more about the students enrolled in a high school than about the school’s actual impact on those results.*

In other words, Newsweek is not really ranking the high schools that "best prepare their students for college" as much as they're ranking the high schools whose students are best prepared for college (or, more accurately, the high schools whose students are best prepared among the 2,500 or so that responded to the survey). This is one big reason why the top of Newsweek’s list is dominated by high schools that are either selective (e.g., magnets) or located in more affluent neighborhoods (i.e., have low free/reduced-price lunch eligibility rates).

To reiterate, even the best measures are highly imprecise and subject to all sorts of bias; isolating schools' contribution to measured outcomes is extremely difficult. In addition, the utility of school performance measures can vary quite a bit depending on who’s using them and for what purpose. Rankings, including Newsweek's, that are not quite appropriate for use in a formal accountability system might still be useful to, say, a parent choosing schools for his or her children (i.e., parents have an interest in their children being surrounded by high-performing peers).

That said, some publishers of these "best high schools" lists take a different, and in many respects more thorough, approach. Most notably, the U.S. News and World Report rankings, the analysis for which is done by the American Institutes for Research, include over 18,000 high schools. They also rely on publicly available data rather than self-reporting (which is easier given their choice of measures), and part of the process of ranking schools entails assessing outcomes against statistical expectations that roughly account for subsidized lunch eligibility (i.e., residual analysis), as well as considering the performance of "disadvantaged subgroups." This represents a rough attempt to address differences in the students these schools serve (at least in the first two stages of the three-stage process).
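For readers unfamiliar with the term, "residual analysis" here means comparing each school's actual outcomes with those predicted from its student characteristics. The sketch below is a generic, single-predictor illustration of the technique, not U.S. News's actual model (which uses its own measures and a multi-stage process): it regresses an outcome on free/reduced-price lunch eligibility and scores schools by how far they land above or below expectations.

```python
def performance_residuals(schools):
    """schools: list of (frl_rate, outcome) pairs, e.g.
    (free/reduced-price lunch eligibility rate, proficiency rate).
    Fits a simple one-predictor regression of outcome on FRL rate,
    then returns each school's residual (actual minus expected)."""
    n = len(schools)
    xs = [s[0] for s in schools]
    ys = [s[1] for s in schools]
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Ordinary least squares slope and intercept for y = a + b*x.
    b = sum((x - x_bar) * (y - y_bar) for x, y in schools) / \
        sum((x - x_bar) ** 2 for x in xs)
    a = y_bar - b * x_bar
    # Residual: how much each school over- or under-shoots the outcome
    # expected given its student poverty rate.
    return [y - (a + b * x) for x, y in schools]
```

A school with a large positive residual is performing better than would be expected given its poverty rate; that, rather than its raw outcome level, is closer to a measure of the school's own contribution.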

To put it mildly, the U.S. News rankings still require very cautious interpretation (for several reasons, a couple of which pertain to Newsweek's as well). Nevertheless, given the considerable constraints, it is a fairly well-designed system (and one that required a lot of groundwork). None of these "best high school" lists is anywhere near perfect, but some are arguably better than others.

Overall, though, the most important point to bear in mind is that all of these rankings are potentially useful so long as they are presented and interpreted properly. First and foremost, in the case of Newsweek's and similarly constituted lists, there should be much stronger warnings about the sample, which includes only a fraction of the nation's high schools (and a self-selected fraction at that). It's not enough to simply report how many schools were surveyed and how many responded; those figures should be put in context.

In addition, the results of these systems, like all performance metrics, are to varying degrees confounded by the observed and unobserved characteristics of the students who attend the schools (as well as other factors). Depending on how one conceptualizes school effectiveness, this might be considered a very serious bias, and so, at the very least, a more prominent discussion of these issues, one that is accessible to and likely to be seen by the average reader, might go a long way toward ensuring better interpretation of the rankings.

- Matt Di Carlo

*****

* To their credit, Newsweek did at least standardize the measures before combining them, which is more than one can say for most states' school rating systems (and for NCLB).