A Quick Look At The ASA Statement On Value-Added

Posted on August 26, 2014

Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it’s useful to have the ASA add its perspective to the debate on this issue, but also because the statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.

Some of these folks claimed that the ASA supported their viewpoint – i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates, and urged caution, but I think that the statement rather explicitly reaches a more nuanced conclusion: That value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but this must be done carefully and appropriately.*

Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures, which are not always so widely acknowledged.


No Teacher Is An Island: The Role Of Social Relations In Teacher Evaluation

Posted on August 19, 2014

Our guest authors today are Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego, and Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester. Daly and Finnigan have published numerous articles on social network analysis in education and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.

Teacher evaluation is a hotly contested topic, with vigorous debate happening around issues of testing, measurement, and what is considered ‘important’ in terms of student learning, not to mention the potentially high-stakes decisions that may be made as a result of these assessments. At its best, this discussion has reinvigorated a national dialogue around teaching practice and research; at its worst, it has polarized and entrenched stakeholder groups into rigid camps. How can we avoid the calcification of opinion and continue a constructive dialogue around this important and complex issue?

One way, as we suggest here, is to continue to discuss alternatives around teacher evaluation, and to be thoughtful about the role of social interactions in student outcomes, particularly as they relate to the current conversation around value-added models. It is in this spirit that we ask: Is there a ‘social side’ to a teacher’s ability to add value to their students’ growth and, if so, what are the implications for current teacher evaluation models?


Differences In DC Teacher Evaluation Ratings By School Poverty

Posted on August 12, 2014

In a previous post, I discussed simple data from the District of Columbia Public Schools (DCPS) on teacher turnover in high- versus lower-poverty schools. That same report, issued by the D.C. Auditor and including, among other things, descriptive analyses by the excellent researchers from Mathematica, contains another very interesting table showing the evaluation ratings of DC teachers in 2010-11 by school poverty. (Indeed, DC officials deserve credit for making these kinds of data available to the public, as this is not the case in many other states.)

DCPS’ well-known evaluation system (called IMPACT) varies between teachers in tested versus non-tested grades, but the final ratings are a weighted average of several components, including: the teaching and learning framework (classroom observations); commitment to the school community (attendance at meetings, mentoring, PD, etc.); schoolwide value-added; teacher-assessed student achievement data (local assessments); core professionalism (absences, etc.); and individual value-added (tested teachers only).
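
To make the mechanics concrete, here is a minimal sketch of how a weighted-average rating of this kind can be computed. The component names follow the list above, but the weights and the rating scale are hypothetical placeholders, not DCPS’s actual formula, and core professionalism (which operates more like a deduction) is omitted for simplicity.

```python
# A minimal sketch of a weighted-average rating, as described above.
# Component names follow the post; the weights and the 1-4 scale are
# HYPOTHETICAL placeholders, not DCPS's actual formula, and core
# professionalism is omitted for simplicity.

def impact_score(components, weights):
    """Combine component scores into a final rating via a weighted average."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(components[name] * weights[name] for name in weights)

# Hypothetical scores for a teacher in a tested grade (scale of 1-4).
components = {
    "classroom_observations": 3.2,      # teaching and learning framework
    "commitment_to_community": 3.8,
    "schoolwide_value_added": 2.9,
    "teacher_assessed_achievement": 3.5,
    "individual_value_added": 3.0,      # tested teachers only
}

# Hypothetical weights; the real weights differed by teacher group and year.
weights = {
    "classroom_observations": 0.40,
    "commitment_to_community": 0.10,
    "schoolwide_value_added": 0.05,
    "teacher_assessed_achievement": 0.10,
    "individual_value_added": 0.35,
}

print(round(impact_score(components, weights), 2))  # ~3.2
```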

The table I want to discuss is on page 43 of the Auditor’s report; it shows average IMPACT scores, overall and for each component, for teachers in high-poverty schools (80-100 percent free/reduced-price lunch), medium-poverty schools (60-80 percent), and low-poverty schools (less than 60 percent). It is pasted below.


Lost In Citation

Posted on July 31, 2014

The so-called Vergara trial in California, in which the state’s tenure and layoff statutes were deemed unconstitutional, already has its first “spin-off,” this time in New York, where a newly formed organization, the Partnership for Educational Justice (PEJ), is among the groups spearheading the effort.

Upon first visiting PEJ’s new website, I was immediately (and predictably) drawn to the “Research” tab. It contains five statements (which, I guess, PEJ would characterize as “facts”). Each argument is presented in the most accessible form possible, typically accompanied by one citation (or two at most). I assume that the presentation of evidence in the actual trial will be a lot more thorough than that offered on this webpage, which seems geared toward the public rather than the more extensive evidentiary requirements of the courtroom (also see Bruce Baker’s comments on many of these same issues surrounding the New York situation).

That said, I thought it might be useful to review the basic arguments and evidence PEJ presents, not really in the context of whether they will “work” in the lawsuit (a judgment I am unqualified to make), but rather because they’re very common. It has also been my observation that advocates on both “sides” of the education debate tend to be fairly good at using data and research to describe problems and situations, yet sometimes fall a bit short when it comes to evidence-based discussions of what to do about them (including the essential task of acknowledging when the evidence is still undeveloped). PEJ’s five bullet points, discussed below, are pretty good examples of what I mean.


Do Students Learn More When Their Teachers Work Together?

Posted on July 17, 2014

This is the second post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.

Debates about how to improve educational outcomes for students often involve two ‘camps’: those who focus on the impact of “in-school factors” on student achievement, and those who focus on “out-of-school factors.” Many in-school factors are discussed, but improving the quality of individual teachers (or teachers’ human capital) is almost always touted as the main strategy for school improvement. Out-of-school factors are also numerous, but proponents of this view tend to focus on broad systemic problems such as poverty and inequality.

Social capital — the idea that relationships have value, that social ties provide access to important resources like knowledge and support, and that a group’s performance can often exceed the sum of its members’ individual performances — is something that rarely makes it into the conversation. But why does social capital matter?

Research suggests that teachers’ social capital may be just as important to student learning as their human capital. In fact, some studies indicate that if school improvement policies addressed teachers’ human and social capital simultaneously, they would go a long way toward mitigating the effects of poverty on student outcomes. Sounds good, right? The problem is: Current policy does not resemble this approach. Researchers, commentators and practitioners have shown and lamented that many of the strategies leveraged to increase teachers’ human capital often do so by eroding social capital in our schools. In other words, these approaches are moving us one step forward and two steps back.


The Language Of Teacher Effectiveness

Posted on July 10, 2014

There is a tendency in education circles these days, one that I’m sure has been discussed by others, and of which I myself have been “guilty” on countless occasions. The tendency is to use terms such as “effective/ineffective teacher” or “teacher performance” interchangeably with estimates from value-added and other growth models.

Now, to be clear, I personally am not opposed to the use of value-added estimates in teacher evaluations and other policies, so long as it is done cautiously and appropriately (which, in my view, is not happening in very many places). Moreover, based on my reading of the research, I believe that these estimates can provide useful information about teachers’ performance in the classroom. In short, then, I am not disputing whether value-added scores should be considered one useful proxy measure for teacher performance and effectiveness (and described as such), both formally and informally.

Regardless of one’s views on value-added and its policy deployment, however, there is a point at which our failure to define terms can go too far, and perhaps cause confusion.


Teachers And Education Reform, On A Need To Know Basis

Posted on July 1, 2014

A couple of weeks ago, the website Vox.com published an article entitled, “11 facts about U.S. teachers and schools that put the education reform debate in context.” The article, in the wake of the Vergara decision, is supposed to provide readers with the “basic facts” about the current education reform environment, with a particular emphasis on teachers. Most of the 11 facts are based on descriptive statistics.

Vox advertises itself as a source of accessible, essential, summary information — what you “need to know” — for people interested in a topic but not necessarily well-versed in it. Right off the bat, let me say that this is an extraordinarily difficult task, and in constructing lists such as this one, there’s no way to please everyone (I’ve read a couple of Vox’s education articles and they were okay).

That said, someone sent me this particular list, and it’s pretty good overall, especially since it does not reflect overt advocacy for given policy positions, as so many of these types of lists do. But I was compelled to comment on it. I would like to say that I did this to make some lofty point about the strengths and weaknesses of data and statistics packaged for consumption by the general public. It would, however, be more accurate to say that I started doing it and just couldn’t stop. In any case, here’s a little supplemental discussion of each of the 11 items:


Is Teacher Attrition Actually Increasing?

Posted on June 12, 2014

Over the past few years, there has been a regular flow of writing attempting to explain the increase in teacher attrition. Usually, these explanations come in the form of advocacy – that is, people who don’t like a given policy or policies assert that those policies are the reasons for the rise in teachers leaving. Putting aside that these arguments are usually little more than speculation, as well as the fact that they often rely on highly limited approaches to measuring attrition (e.g., teacher experience distributions), there is a prior question that must be addressed here: Is teacher attrition really increasing?

The short answer, at least at the national level and over the longer term, is yes, but, as usual, it’s more complicated than a simple yes/no answer.

Obviously, not all attrition is “bad,” as it depends on who’s leaving, but any attempt to examine levels of or trends in teacher attrition (leaving the profession) or mobility (switching schools) requires good data. When looking at individual districts, one often must rely on administrative datasets that make it very difficult to determine whether teachers left the profession entirely or simply moved to another district (though remember that whether teachers leave the profession or simply switch schools doesn’t really matter to individual schools, since they must replace the teachers regardless). In addition, the phenomenon of teachers leaving for a temporary period and then returning (e.g., after childbirth) is more common than many people realize.
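
To illustrate the measurement problem, here is a minimal sketch of how year-to-year teacher status might be classified from administrative records. The record layout is hypothetical, and, as noted above, with data from a single district a “leaver” may actually be a mover to another district, or a temporary leaver who later returns.

```python
# A minimal sketch of classifying year-to-year teacher status from
# administrative records. The record layout (teacher_id -> school_id)
# is hypothetical. With data from only one district, a "leaver" here
# could really be a mover to another district, or a temporary leaver
# who returns later -- exactly the ambiguities noted above.

def classify(year1, year2):
    """year1, year2: dicts mapping teacher_id -> school_id for two years."""
    status = {}
    for tid, school in year1.items():
        if tid not in year2:
            status[tid] = "leaver"  # or moved away / on leave -- can't tell here
        elif year2[tid] == school:
            status[tid] = "stayer"
        else:
            status[tid] = "mover"   # switched schools within the dataset
    return status

y1 = {"t1": "A", "t2": "A", "t3": "B"}
y2 = {"t1": "A", "t3": "C"}  # t2 is absent in year 2
print(classify(y1, y2))      # {'t1': 'stayer', 't2': 'leaver', 't3': 'mover'}
```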


The Proportionality Principle In Teacher Evaluations

Posted on May 27, 2014

Our guest author today is Cory Koedel, Assistant Professor of Economics at the University of Missouri.

In a 2012 post on this blog, Dr. Di Carlo reviewed an article that I coauthored with colleagues Mark Ehlert, Eric Parsons and Michael Podgursky. The article (full version here, or for a shorter, less technical version, see here) argues for the policy value of growth models that are designed to restrict comparisons to schools and teachers in observationally similar circumstances.

The discussion is couched within the context of achieving three key policy objectives that we associate with the adoption of more-rigorous educational evaluation systems: (1) improving system-wide instruction by providing useful performance signals to schools and teachers; (2) eliciting optimal effort from school personnel; and (3) ensuring that current labor-market inequities between advantaged and disadvantaged schools are not exacerbated by the introduction of the new systems.

We argue that a model that forces comparisons to be between equally-circumstanced schools and teachers – which we describe as a “proportional” model – is best suited to achieve these policy objectives. The conceptual appeal of the proportional approach is that it fully levels the playing field between high- and low-poverty schools. In contrast, some other growth models have been shown to produce estimates that are consistently associated with the characteristics of the students being served (e.g., Student Growth Percentiles).
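
For readers who want the intuition in concrete form, here is a minimal sketch of the two-step logic, under simplifying assumptions (a single circumstance variable, school-level effects, simulated data): purge the outcome of circumstance first, then compute effects from the residuals, which makes the estimates uncorrelated with circumstance by construction. This is an illustration of the general idea, not the authors’ actual specification.

```python
# A minimal sketch of the two-step "proportional" logic, using simulated
# data and a single circumstance variable (school poverty). This is an
# illustration of the general idea, not the authors' actual specification.

import numpy as np

rng = np.random.default_rng(0)
n_schools, n_students = 50, 40
poverty = rng.uniform(0, 1, n_schools)         # school poverty rate
true_effect = rng.normal(0, 1, n_schools)      # true school "value-added"
school = np.repeat(np.arange(n_schools), n_students)

# Simulated student growth: school effect, a poverty penalty, and noise.
growth = true_effect[school] - 2.0 * poverty[school] + rng.normal(0, 3, school.size)

# Step 1: regress growth on circumstance (poverty) and keep the residuals.
X = np.column_stack([np.ones(school.size), poverty[school]])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
residuals = growth - X @ beta

# Step 2: a school's estimate is its mean residual. Because circumstance
# was regressed out first, the estimates are uncorrelated with poverty by
# construction -- the "level playing field" property described above.
estimates = np.array([residuals[school == s].mean() for s in range(n_schools)])
print(np.corrcoef(estimates, poverty)[0, 1])   # approximately zero
```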


Matching Up Teacher Value-Added Between Different Tests

Posted on February 11, 2014

The U.S. Department of Education has released a very short, readable report on the comparability of value-added estimates using two different tests in Indiana – one of them norm-referenced (the Measures of Academic Progress test, or MAP), and the other criterion-referenced (the Indiana Statewide Testing for Educational Progress Plus, or ISTEP+, which is also the state’s official test for NCLB purposes).

The research design here is straightforward – fourth and fifth grade students in 46 schools across 10 districts in Indiana took both tests, their teachers’ value-added scores were calculated, and the scores were compared. Since both sets of scores were based on the same students and teachers, this allows a direct comparison of how teachers’ value-added estimates compare between the two tests. The results are not surprising, and they square with similar prior studies (see here, here, here, for example): The estimates based on the two tests are moderately correlated. Depending on the grade/subject, the correlations are between 0.4 and 0.7. If you’re not used to interpreting correlation coefficients, consider that only around one-third of teachers were in the same quintile (fifth) on both tests, and another 40 or so percent were one quintile higher or lower. So, most teachers were within one quintile, about a quarter of teachers moved two or more quintiles, and a small percentage moved from top to bottom or vice-versa.
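
For intuition, here is a quick simulation of what a moderate correlation implies for quintile agreement. The bivariate normal setup and the correlation of 0.5 are illustrative, not taken from the report.

```python
# A quick simulation of what a moderate correlation implies for quintile
# agreement. The bivariate normal setup and r = 0.5 are illustrative,
# not taken from the report.

import numpy as np

rng = np.random.default_rng(42)
n, r = 100_000, 0.5
a = rng.normal(size=n)                               # "scores" on test 1
b = r * a + np.sqrt(1 - r**2) * rng.normal(size=n)   # test 2, corr ~ 0.5

def quintile(x):
    """Return each value's quintile (0-4) using empirical cutpoints."""
    return np.searchsorted(np.quantile(x, [0.2, 0.4, 0.6, 0.8]), x)

moved = np.abs(quintile(a) - quintile(b))
for k in range(5):
    print(f"moved {k} quintile(s): {np.mean(moved == k):.0%}")
# With r = 0.5, roughly a third of cases land in the same quintile and
# about 40 percent move exactly one -- close to the pattern cited above.
```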

Although, as mentioned above, these findings are in line with prior research, it is worth remembering why this “instability” occurs (and what can be done about it).


Teacher Retention In An Era Of Rapid Reform

Posted on February 7, 2014

The Center for American Progress (CAP) recently released a short report on whether teachers were leaving the profession due to reforms implemented during the Obama Administration, as some commentators predicted.

The authors use data from the Schools and Staffing Survey (SASS), a wonderful national survey of U.S. teachers, and they report that 70 percent of first-year teachers in 2007-08 were still teaching in 2011-12. They claim that this high retention of beginning teachers, along with the fact that most teachers in 2011-12 had five or more years of experience, shows that “the teacher retention concerns were unfounded.”

This report raises a couple of important points about the debate over teacher retention during this time of sweeping reform.



A Few Additional Points About The IMPACT Study

Posted on December 4, 2013

The recently released study of IMPACT, the teacher evaluation system in the District of Columbia Public Schools (DCPS), has garnered a great deal of attention over the past couple of months (see our post here).

Much of the commentary from the system’s opponents was predictably (and unfairly) dismissive, but I’d like to quickly discuss the reaction from supporters. Some took the opportunity to make grand proclamations about how “IMPACT is working,” and there was a lot of back and forth about the need to ensure that various states’ evaluations are as “rigorous” as IMPACT (as well as skepticism as to whether this is the case).

The claim that this study shows that “IMPACT is working” is somewhat misleading, and the idea that states should now rush to replicate IMPACT is misguided. Both reactions also miss the important points about the study and what we can learn from its results.


The Wrong Way To Publish Teacher Prep Value-Added Scores

Posted on November 14, 2013

As discussed in a prior post, the research on applying value-added to teacher prep programs is pretty much still in its infancy. Even just a couple more years of research would go a long way toward at least partially addressing the many open questions in this area (including, by the way, the evidence suggesting that differences between programs may not be meaningfully large).

Nevertheless, a few states have decided to plow ahead and begin publishing value-added estimates for their teacher preparation programs. Tennessee, which seems to enjoy being first — their Race to the Top program is, a little ridiculously, called “First to the Top” — was ahead of the pack. They have once again published ratings for the few dozen teacher preparation programs that operate within the state. As mentioned in my post, if states are going to do this (and, as I said, my personal opinion is that it would be best to wait), it is absolutely essential that the data be presented along with thorough explanations of how to interpret and use them.

Tennessee fails to meet this standard.


Getting Teacher Evaluation Right

Posted on October 30, 2013

Linda Darling-Hammond’s new book, Getting Teacher Evaluation Right, is a detailed, practical guide about how to improve the teaching profession. It leverages the best research and best practices, offering actionable, illustrated steps to getting teacher evaluation right, with rich examples from the U.S. and abroad.

Here I offer a summary of the book’s main arguments and conclude with a couple of broad questions prompted by the book. But, before I delve into the details, here’s my quick take on Darling-Hammond’s overall stance.

We are at a crossroads in education; two paths lie before us. The first seems shorter, easier and more straightforward. The second seems long, winding and difficult. The big problem is that the first path does not really lead to where we need to go; in fact, it takes us in the opposite direction. So, despite appearances, steadier progress will be made if we take the more difficult route. This book is a guide on how to get teacher evaluation right, not how to do it quickly or with minimal effort. So, in a way, the big message or takeaway is: There are no shortcuts.


Incentives And Behavior In DC’s Teacher Evaluation System

Posted on October 17, 2013

A new working paper, published by the National Bureau of Economic Research, is the first high-quality assessment of one of the new teacher evaluation systems sweeping across the nation. The study, by Thomas Dee and James Wyckoff, both highly respected economists, focuses on the first three years of IMPACT, the evaluation system put into place in the District of Columbia Public Schools in 2009.

Under IMPACT, each teacher receives a point total based on a combination of test-based and non-test-based measures (the formula varies between teachers who are and are not in tested grades/subjects). These point totals are then sorted into one of four categories – highly effective, effective, minimally effective and ineffective. Teachers who receive a highly effective (HE) rating are eligible for salary increases, whereas teachers rated ineffective are dismissed immediately and those receiving minimally effective (ME) for two consecutive years can also be terminated. The design of this study exploits that incentive structure by, put very simply, comparing the teachers who were directly above the ME and HE thresholds with those who were directly below them, to see whether the two groups differed in terms of retention and performance. The basic idea is that these teachers are all very similar in terms of their measured performance, so any differences in outcomes can be (cautiously) attributed to the system’s incentives.
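
Here is a schematic sketch of that near-threshold comparison (a regression discontinuity-style design) using made-up data; the point scale, threshold value, bandwidth, and retention rates are all hypothetical, chosen only to illustrate the logic.

```python
# A schematic of the near-threshold comparison (a regression
# discontinuity-style design), using made-up data. The point scale,
# threshold, bandwidth, and retention rates are all hypothetical.

import numpy as np

rng = np.random.default_rng(7)
n = 20_000
score = rng.uniform(100, 400, n)        # hypothetical IMPACT point totals
THRESHOLD, BANDWIDTH = 250.0, 10.0      # e.g., a rating cutoff and a window

# Simulate next-year retention; in this fake data, teachers below the
# line (facing the dismissal threat) leave more often.
below = score < THRESHOLD
retained = rng.random(n) < np.where(below, 0.75, 0.88)

# Compare only teachers within the bandwidth of the threshold: they are
# nearly identical in measured performance, so the retention gap can be
# (cautiously) attributed to the incentive itself.
near = np.abs(score - THRESHOLD) < BANDWIDTH
print(f"just below: {retained[near & below].mean():.1%} retained")
print(f"just above: {retained[near & ~below].mean():.1%} retained")
```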

The short answer is that there were meaningful differences.

