The War On Error

Posted by Matthew Di Carlo on December 7, 2010

The debate over the use of value-added models (VAM) in teacher evaluations has reached an impasse of sorts. Opponents of using VAM contend that the estimates are too imprecise to be used in evaluation; supporters argue that current systems are inadequate and that, while all measures entail error, this does not preclude using the estimates.

This back-and-forth may be missing the mark, and it is not particularly useful in the states and districts that are already moving ahead. The more salient issue, in my view, is less about the amount of error than about how it is dealt with when the estimates are used (along with other measures) in evaluation systems.

Teachers certainly understand that some level of imprecision is inherent in any evaluation method—indeed, many will tell you about colleagues who shouldn’t be in the classroom, but receive good evaluation ratings from principals year after year. Proponents of VAM often point to this tendency of current evaluation systems to give “false positive” ratings as a reason to push forward quickly. But moving so carelessly that we disregard the error in current VAM estimates—and possible methods to reduce its negative impacts—is no different than ignoring false positives in existing systems.

Mostly as a result of random statistical error, value-added estimates can identify with acceptable precision (sometimes gauged using the convention of statistical significance) only those teachers at the extremes (or “tails”) of the performance distribution. Depending on the amount of data available, this means we only get strong results—results in which we can have confidence—for about 10-30 percent of teachers (the top 5-15 percent and the bottom 5-15 percent). All other teachers should be regarded, statistically, as no different from average. Any credible researcher, including staunch VAM advocates like William Sanders, will acknowledge this limitation.

Interpreting a teacher’s VAM score without examining the error margin is, in many respects, meaningless. For instance, a recent analysis of VAM scores in New York City shows that the average error margin is plus or minus 30 percentile points. That puts the “true score” (which we can’t know) of a 50th-percentile teacher somewhere between the 20th and 80th percentiles—an incredible 60-point spread (though, to be fair, the “true score” is much more likely to be at the 50th percentile than the 20th or 80th, and many individual teachers’ error margins are narrower than the average). If evaluation systems don’t pay any attention to the margin of error, the estimate is little more than a good guess (and often not a very good one at that).
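
To make the arithmetic concrete, here is a minimal sketch (in Python, using the hypothetical case of a 50th-percentile teacher with the average New York City error margin mentioned above) of how a margin of error translates into a range of plausible “true scores”:

```python
# Minimal sketch: a hypothetical teacher estimated at the 50th percentile,
# with the average error margin (plus or minus 30 percentile points)
# reported in the NYC analysis discussed above.
point_estimate = 50    # estimated percentile rank
margin_of_error = 30   # error margin, in percentile points

lower = max(0, point_estimate - margin_of_error)
upper = min(100, point_estimate + margin_of_error)

print(f"Plausible range for the 'true score': {lower}th to {upper}th percentile")
# -> Plausible range for the 'true score': 20th to 80th percentile (a 60-point spread)
```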

Now, here’s the problem: many, if not most, teacher evaluation systems that include VAM—current, enacted, or under consideration—completely ignore this. Many of the systems with which I’m familiar simply take VAM estimates at face value, often using them to assign teachers to categories (there are exceptions, such as Hillsborough’s [FL] plan to use three-year cumulative estimates, and most systems are still on the drawing board [please comment if you know of others]).

While the vast majority of these teachers, including many in the top and bottom categories, are actually indistinguishable from average, their scores are being accorded an unwarranted legitimacy, especially when they count for 40-50 percent of teachers’ final evaluations, as is the case in an increasing number of places. Some teachers have even been fired based on evaluations that include heavily-weighted VAM estimates from only one year of data.

To knowingly build this level of imprecision into a system makes no sense, especially when it is unnecessary. VAM estimates can be incorporated into evaluations in a more responsible fashion, one which pays attention to error.

One very simple idea, for example, would be to employ a three-category scheme—above average, average, below average—that very directly accounts for error margins (the threshold for statistical significance might be relaxed a bit). The model used in Tennessee, Ohio, and elsewhere (designed by William Sanders), reports results to teachers/schools in this fashion, but it’s not yet clear whether the same scheme will be used in actual evaluations.
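
As a rough illustration, here is a minimal sketch (in Python, with hypothetical teacher estimates and standard errors—not data from any actual system) of what such a scheme might look like: a teacher is labeled above or below average only when the confidence interval around his or her estimate excludes the average, and the threshold can be relaxed by lowering the z-value (e.g., 1.64 instead of 1.96):

```python
# Minimal sketch of a three-category scheme (all names and numbers are hypothetical).
# A teacher is "above average" or "below average" only if the confidence interval
# around the value-added estimate excludes zero (zero = the average teacher).

def categorize(estimate, std_error, z=1.96):
    """Classify a value-added estimate, accounting for its error margin.
    Lowering z (e.g., to 1.64) relaxes the significance threshold."""
    lower = estimate - z * std_error
    upper = estimate + z * std_error
    if lower > 0:
        return "above average"
    if upper < 0:
        return "below average"
    return "average"  # statistically indistinguishable from average

# Hypothetical teachers: (value-added estimate, standard error)
teachers = {"Teacher A": (0.25, 0.10), "Teacher B": (-0.30, 0.12), "Teacher C": (0.15, 0.20)}
for name, (est, se) in teachers.items():
    print(name, "->", categorize(est, se))
# Teacher A -> above average; Teacher B -> below average; Teacher C -> average
```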

Another, equally simple idea is to set a minimum sample size (i.e., number of students or years) that must be available for a given teacher before the estimates can be incorporated into his or her evaluation. This is essentially what’s happening in Hillsborough.
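
Again as a sketch only (the thresholds below are hypothetical, not Hillsborough’s actual rules), a minimum-data requirement might look something like this:

```python
# Minimal sketch of a minimum-data rule (thresholds are hypothetical).
# An estimate is used in a teacher's evaluation only when it rests on
# enough students and enough years of data.

MIN_STUDENTS = 50  # hypothetical minimum number of students
MIN_YEARS = 3      # e.g., cumulative estimates across three years

def usable_in_evaluation(n_students: int, n_years: int) -> bool:
    return n_students >= MIN_STUDENTS and n_years >= MIN_YEARS

print(usable_in_evaluation(n_students=120, n_years=3))  # True
print(usable_in_evaluation(n_students=25, n_years=1))   # False
```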

These and similar methods would, obviously, reduce the number of teachers who get “actionable” estimates, particularly during their first years of teaching. But they would also go a long way toward reducing (though not eliminating) the alarming degree of imprecision in VAM, much of which stems from little more than random variation. Other problems, such as bias from non-random classroom assignment, are also mitigated by larger sample sizes.

And researchers who support using VAM agree.  For example, a recent paper from the Brookings Institution argued that “any practical application of value-added measures should make use of confidence intervals in order to avoid false precision” (the above-mentioned 60-point spread is a confidence interval).  A recent RAND/CAP report provides similar recommendations.

Regardless of how it is done, accounting for VAM error rates is a critical issue in those states and districts that have already decided to use these estimates as a factor in teacher evaluation. The fact that it is rarely discussed – and may not be part of the design of many systems, new and existing – is very troubling. After years of effort and millions of dollars in investment, we might end up with almost as many false positives and many, many more false negatives—excellent or average teachers who are erroneously identified as subpar. If we’re going to do this, we should at least do it correctly.


3 Comments posted so far

  • How do we correct for teachers who work with consistently high- or low-performing students?
    A teacher of gifted students may seem to be adding a lot of value to their students, while a Special Ed teacher may not add as much, but I think smaller gains in a SPED class might be more remarkable than barely-above-average gains in a gifted classroom.

    Comment by Brendan Murphy
    December 7, 2010 at 12:26 PM
  • The problem Brendan Murphy cites may emerge in precisely the opposite fashion as well — for example, G & T kids score so high to begin with that they hit the ceiling immediately, making it look like the teacher is adding little when he/she may be adding a lot…

    Comment by Denis Doyle
    December 10, 2010 at 2:50 PM
  • Brendan and Denis – thanks for your comments. Most VAM models do employ techniques to correct for student “ability” (e.g., by controlling for their prior performance), but you’re certainly correct in pointing out that there are dozens of unmeasurable factors that might bias the results of individual teachers (including, as Brendan notes, non-random assignment of students to teachers).

    In my view, however, this only makes it more important for states and districts that are already moving ahead to address the error that they CAN address, right?

    Comment by Matthew Di Carlo
    December 13, 2010 at 6:59 PM
