Multiple Measures And Singular Conclusions In A Twin City

Posted on November 12, 2014

A few weeks ago, the Minneapolis Star Tribune published teacher evaluation results for the city's public school teachers in 2013-14. This decision generated a fair amount of controversy, but it's worth noting that the Tribune, unlike the Los Angeles Times and New York City newspapers a few years ago, did not publish scores for individual teachers, only totals by school.

The data once again provide an opportunity to take a look at how results vary by student characteristics. This was indeed the focus of the Tribune’s story, which included the following headline: “Minneapolis’ worst teachers are in the poorest schools, data show.” These types of conclusions, which simply take the results of new evaluations at face value, have characterized the discussion since the first new systems came online. Though understandable, they are also frustrating and a potential impediment to the policy process. At this early point, “the city’s teachers with the lowest evaluation ratings” is not the same thing as “the city’s worst teachers.” Actually, as discussed in a previous post, the systematic variation in evaluation results by student characteristics, which the Tribune uses to draw conclusions about the distribution of the city’s “worst teachers,” could just as easily be viewed as one of the many ways to assess the properties, and even the validity, of those results.
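To make the underlying comparison concrete, here is a minimal sketch, in Python with entirely made-up school-level data, of the kind of poverty/ratings relationship on which stories like the Tribune's rest (the variable names and numbers are hypothetical, not the district's actual figures):

```python
import numpy as np

# Hypothetical school-level data: each school's free/reduced-price lunch
# rate and the share of its teachers receiving low evaluation ratings.
rng = np.random.default_rng(0)
poverty = rng.uniform(10, 95, size=60)                        # percent FRL
low_rated = 0.05 + 0.002 * poverty + rng.normal(0, 0.03, 60)  # share low-rated

# The "finding" in stories like the Tribune's is simply this correlation.
r = np.corrcoef(poverty, low_rated)[0, 1]
print(f"poverty/low-rating correlation: {r:.2f}")
```

The interpretive question is whether such a correlation reflects the true distribution of teaching quality or, instead, properties of the measures themselves.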

So, while there are no clear-cut “right” or “wrong” answers here, let’s take a quick look at the data and what they might tell us. Read More »


A New Focus On Social Capital In School Reform Efforts

Posted on October 14, 2014

** Reprinted here in the Washington Post

Our guest authors today are Carrie R. Leana, George H. Love Professor of Organizations and Management, Professor of Business Administration, Medicine, and Public and International Affairs, and Director of the Center for Health and Care Work, at the University of Pittsburgh, and Frits K. Pil, Professor of Business Administration at the Katz Graduate School of Business and research scientist at the Learning Research and Development Center, at the University of Pittsburgh. This column is part of The Social Side of Reform Shanker Blog series.

Most current models of school reform focus on teacher accountability for student performance measured via standardized tests, “improved” curricula, and what economists label “human capital” – factors such as teacher experience, subject knowledge and pedagogical skills. But our research over many years in several large school districts suggests that if students are to show real and sustained learning, schools must also foster what sociologists label “social capital” – the value embedded in relations among teachers, and between teachers and school administrators. Social capital is the glue that holds a school together. It complements teacher skill, it enhances teachers’ individual classroom efforts, and it enables collective commitment to bring about school-wide change.

We are professors at a leading business school who have conducted research in a broad array of settings, ranging from steel mills and auto plants to insurance offices, banks, and even nursing homes. We examine how formal and informal work practices enhance organizational learning and performance. What we have found over and over again is that, regardless of context, organizational success rarely stems from the latest technology or a few exemplary individuals. Read More »


The Great Teacher Evaluation Evaluation: New York Edition

Posted on September 8, 2014

A couple of weeks ago, the New York State Education Department (NYSED) released data from the first year of the state’s new teacher and principal evaluation system (called the “Annual Professional Performance Review,” or APPR). In what has become a familiar pattern, this prompted a wave of criticism from advocates, much of it focused on the proportion of teachers in the state who received the lowest ratings.

To be clear, evaluation systems that produce non-credible results should be examined and improved, and that includes those that put implausible proportions of teachers in the highest and lowest categories. Much of the commentary surrounding this and other issues has been thoughtful and measured. As usual, though, there have been some oversimplified reactions, as exemplified by this piece on the APPR results from Students First NY (SFNY).

SFNY notes what it considers to be the low proportion of teachers rated “ineffective,” and points out that there was more differentiation across rating categories for the state growth measure (worth 20 percent of teachers’ final scores), compared with the local “student learning” measure (20 percent) and the classroom observation components (60 percent). Based on this, they conclude that New York’s “state test is the only reliable measure of teacher performance” (they are actually talking about validity, not reliability, but we’ll let that go). Again, this argument is not representative of the commentary surrounding the APPR results, but let’s use it as a springboard for making a few points, most of which are not particularly original. (UPDATE: After publication of this post, SFNY changed the headline of their piece from “the only reliable measure of teacher performance” to “the most reliable measure of teacher performance.”) Read More »
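Just to make the arithmetic of those weights concrete, here is a stylized sketch in Python. The real APPR maps each component to point bands before combining them, so treat the common 0-100 scale below as my simplifying assumption, not the actual scoring rule:

```python
def appr_composite(state_growth: float, local_measure: float, observation: float) -> float:
    """Stylized APPR composite: a weighted average of the three components
    described above, each assumed (for illustration only) to be on a common
    0-100 scale. The actual system converts subscores into point bands."""
    return 0.20 * state_growth + 0.20 * local_measure + 0.60 * observation

# A teacher strong on observations but middling on the growth measures:
print(appr_composite(state_growth=55, local_measure=60, observation=85))  # 74.0
```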


Research And Policy On Paying Teachers For Advanced Degrees

Posted on September 2, 2014

There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).

The third typical factor determining teachers’ salaries is their level of education. Usually, teachers receive a permanent raise for acquiring additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts also award raises for accumulating a certain number of credits toward a master’s and/or a Ph.D., and for completing a Ph.D.). The raise for receiving a master’s degree varies, but just to give an idea, it is, on average, about 10 percent over the base salary of bachelor’s-only teachers.
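In other words, a typical salary schedule is just a lookup table with experience “steps” and education “lanes.” A minimal sketch, using hypothetical dollar figures and the roughly 10 percent master’s premium noted above:

```python
# A stylized salary schedule: base pay by experience "step," with a
# permanent percentage increase for each education "lane." All dollar
# figures here are hypothetical.
STEPS = {0: 42000, 1: 43500, 2: 45000, 3: 46600, 4: 48300}  # top step = 4
LANE_MULTIPLIERS = {"BA": 1.00, "MA": 1.10, "PhD": 1.17}

def salary(years_in_district: int, degree: str = "BA") -> float:
    step = min(years_in_district, max(STEPS))  # step raises end at the top step
    return STEPS[step] * LANE_MULTIPLIERS[degree]

print(salary(2, "BA"))  # 45000.0
print(salary(2, "MA"))  # 49500.0 -- about 10 percent over the BA lane
```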

This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere. Read More »


A Quick Look At The ASA Statement On Value-Added

Posted on August 26, 2014

Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it’s useful to have ASA add their perspective to the debate on this issue, but also because their statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.

Some of these folks claimed that the ASA supported their viewpoint – i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates, and urged caution, but I think that the statement rather explicitly reaches a more nuanced conclusion: That value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but this must be done carefully and appropriately.*

Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures, and they are not always so widely acknowledged. Read More »


No Teacher Is An Island: The Role Of Social Relations In Teacher Evaluation

Posted on August 19, 2014

Our guest authors today are Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego, and Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester. Daly and Finnigan have published numerous articles on social network analysis in education and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.

Teacher evaluation is a hotly contested topic, with vigorous debate happening around issues of testing, measurement, and what is considered ‘important’ in terms of student learning, not to mention the potential high-stakes decisions that may be made as a result of these assessments. At its best, this discussion has reinvigorated a national dialogue around teaching practice and research; at its worst, it has polarized and entrenched stakeholder groups into rigid camps. How can we avoid the calcification of opinion and continue a constructive dialogue around this important and complex issue?

One way, as we suggest here, is to continue to discuss alternatives around teacher evaluation, and to be thoughtful about the role of social interactions in student outcomes, particularly as it relates to the current conversation around value-added models. It is in this spirit that we ask: Is there a ‘social side’ to a teacher’s ability to add value to their students’ growth and, if so, what are the implications for current teacher evaluation models? Read More »


Differences In DC Teacher Evaluation Ratings By School Poverty

Posted on August 12, 2014

In a previous post, I discussed simple data from the District of Columbia Public Schools (DCPS) on teacher turnover in high- versus lower-poverty schools. That same report, issued by the D.C. Auditor and including, among other things, descriptive analyses by the excellent researchers from Mathematica, contains another very interesting table showing the evaluation ratings of DC teachers in 2010-11 by school poverty. (Indeed, DC officials deserve credit for making these kinds of data available to the public, which is not the case in many other states.)

DCPS’ well-known evaluation system (called IMPACT) varies between teachers in tested versus non-tested grades, but the final ratings are a weighted average of several components, including: the teaching and learning framework (classroom observations); commitment to the school community (attendance at meetings, mentoring, PD, etc.); schoolwide value-added; teacher-assessed student achievement data (local assessments); core professionalism (absences, etc.); and individual value-added (tested teachers only).

The table I want to discuss is on page 43 of the Auditor’s report, and it shows average IMPACT scores for each component and overall for teachers in high-poverty schools (80-100 percent free/reduced-price lunch), medium poverty schools (60-80 percent) and low-poverty schools (less than 60 percent). It is pasted below. Read More »
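The table itself appears in the full post, but the computation behind it is straightforward. As a rough sketch, here are made-up teacher-level records grouped into the same poverty bands (the actual IMPACT scale and the report’s figures differ):

```python
import pandas as pd

# Hypothetical teacher-level records mimicking the structure of the
# Auditor's table: component scores plus each school's free/reduced-price
# lunch rate. All values here are invented.
df = pd.DataFrame({
    "frl_pct":     [95, 88, 72, 65, 40, 30],
    "observation": [2.9, 3.0, 3.1, 3.2, 3.3, 3.4],
    "final_score": [280, 295, 305, 315, 330, 340],
})

def poverty_band(frl):
    if frl >= 80:
        return "high (80-100%)"
    if frl >= 60:
        return "medium (60-80%)"
    return "low (<60%)"

df["band"] = df["frl_pct"].map(poverty_band)
print(df.groupby("band")[["observation", "final_score"]].mean())
```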


Lost In Citation

Posted on July 31, 2014

The so-called Vergara trial in California, in which the state’s tenure and layoff statutes were deemed unconstitutional, already has its first “spin-off,” this time in New York, where the newly-formed Partnership for Educational Justice (PEJ) is among the groups spearheading the effort.

Upon first visiting PEJ’s new website, I was immediately (and predictably) drawn to the “Research” tab. It contains five statements (which, I guess, PEJ would characterize as “facts”). Each argument is presented in the most accessible form possible, typically accompanied by one citation (or two at most). I assume that the presentation of evidence in the actual trial will be a lot more thorough than that offered on this webpage, which seems geared toward the public rather than the more extensive evidentiary requirements of the courtroom (also see Bruce Baker’s comments on many of these same issues surrounding the New York situation).

That said, I thought it might be useful to review the basic arguments and evidence PEJ presents, not really in the context of whether they will “work” in the lawsuit (a judgment I am unqualified to make), but rather because they’re very common. It has also been my observation that advocates on both “sides” of the education debate tend to be fairly good at using data and research to describe problems and situations, yet sometimes fall a bit short when it comes to evidence-based discussions of what to do about them (including the essential task of acknowledging when the evidence is still undeveloped). PEJ’s five bullet points, discussed below, are pretty good examples of what I mean. Read More »


Do Students Learn More When Their Teachers Work Together?

Posted on July 17, 2014

** Reprinted here in the Washington Post

This is the second post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.

Debates about how to improve educational outcomes for students often involve two ‘camps’: Those who focus on the impact of “in-school factors” on student achievement; and those who focus on “out-of-school factors.” Many in-school factors are discussed, but improving the quality of individual teachers (or teachers’ human capital) is almost always touted as the main strategy for school improvement. Out-of-school factors are also numerous, but proponents of this view tend to focus on addressing broad systemic problems such as poverty and inequality.

Social capital — the idea that relationships have value, that social ties provide access to important resources like knowledge and support, and that a group’s performance can often exceed that of the sum of its members — is something that rarely makes it into the conversation. But why does social capital matter?

Research suggests that teachers’ social capital may be just as important to student learning as their human capital. In fact, some studies indicate that if school improvement policies addressed teachers’ human and social capital simultaneously, they would go a long way toward mitigating the effects of poverty on student outcomes. Sounds good, right? The problem is: Current policy does not resemble this approach. Researchers, commentators and practitioners have shown and lamented that many of the strategies leveraged to increase teachers’ human capital often do so by eroding social capital in our schools. In other words, these approaches are moving us one step forward and two steps back. Read More »


The Language Of Teacher Effectiveness

Posted on July 10, 2014

There is a tendency in education circles these days, one that I’m sure has been discussed by others, and of which I myself have been “guilty” on countless occasions. The tendency is to use terms such as “effective/ineffective teacher” or “teacher performance” interchangeably with estimates from value-added and other growth models.

Now, to be clear, I personally am not opposed to the use of value-added estimates in teacher evaluations and other policies, so long as it is done cautiously and appropriately (which, in my view, is not happening in very many places). Moreover, based on my reading of the research, I believe that these estimates can provide useful information about teachers’ performance in the classroom. In short, then, I am not disputing whether value-added scores should be considered to be one useful proxy measure for teacher performance and effectiveness (and described as such), both formally and informally.

Regardless of one’s views on value-added and its policy deployment, however, there is a point at which our failure to define terms can go too far, and perhaps cause confusion. Read More »


Teachers And Education Reform, On A Need To Know Basis

Posted on July 1, 2014

A couple of weeks ago, the website Vox.com published an article entitled, “11 facts about U.S. teachers and schools that put the education reform debate in context.” The article, in the wake of the Vergara decision, is supposed to provide readers with the “basic facts” about the current education reform environment, with a particular emphasis on teachers. Most of the 11 facts are based on descriptive statistics.

Vox advertises itself as a source of accessible, essential, summary information — what you “need to know” — for people interested in a topic but not necessarily well-versed in it. Right off the bat, let me say that this is an extraordinarily difficult task, and in constructing lists such as this one, there’s no way to please everyone (I’ve read a couple of Vox’s education articles and they were okay).

That said, someone sent me this particular list, and it’s pretty good overall, especially since it does not reflect overt advocacy for given policy positions, as so many of these types of lists do. But I was compelled to comment on it. I want to say that I did this to make some lofty point about the strengths and weaknesses of data and statistics packaged for consumption by the general public. It would, however, be more accurate to say that I started doing it and just couldn’t stop. In any case, here’s a little supplemental discussion of each of the 11 items: Read More »


Is Teacher Attrition Actually Increasing?

Posted on June 12, 2014

Over the past few years, one can find a regular flow of writing attempting to explain the increase in teacher attrition. Usually, these explanations come in the form of advocacy – that is, people who don’t like a given policy or policies assert that they are the reasons for the rise in teachers leaving. Putting aside that these arguments are usually little more than speculation, as well as the fact that they often rely on highly limited approaches to measuring attrition (e.g., teacher experience distributions), there is a prior issue that must be addressed here: Is teacher attrition really increasing?

The short answer, at least at the national level and over the longer term, is yes, but, as usual, it’s more complicated than a simple yes/no answer.

Obviously, not all attrition is “bad,” as it depends on who’s leaving, but any attempt to examine levels of or trends in teacher attrition (leaving the profession) or mobility (switching schools) requires good data. When looking at individual districts, one often must rely on administrative datasets that make it very difficult to determine whether teachers left the profession entirely or simply moved to another district (though remember that whether teachers leave the profession or simply switch schools doesn’t really matter to individual schools, since they must replace the teachers regardless). In addition, the phenomenon of teachers leaving for a temporary period and then returning (e.g., after childbirth) is more common than many people realize. Read More »
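To see why administrative data are tricky here, consider a minimal sketch with made-up two-year payroll records. A teacher who disappears from a district’s files might be a leaver, a mover to another district, or on temporary leave, and the district’s data alone cannot say which:

```python
import pandas as pd

# Made-up two-year administrative records for a single district.
year1 = pd.DataFrame({"teacher_id": [1, 2, 3, 4], "school": ["A", "A", "B", "B"]})
year2 = pd.DataFrame({"teacher_id": [1, 3], "school": ["A", "C"]})

merged = year1.merge(year2, on="teacher_id", how="left",
                     suffixes=("_y1", "_y2"))

def status(row):
    # Absent from year-2 files: could be a true leaver, a mover to
    # another district, or a temporary leave -- indistinguishable here.
    if pd.isna(row["school_y2"]):
        return "left district (leaver, mover, or on leave -- unknown)"
    if row["school_y1"] != row["school_y2"]:
        return "moved schools within district"
    return "stayed"

merged["status"] = merged.apply(status, axis=1)
print(merged[["teacher_id", "status"]])
```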


The Proportionality Principle In Teacher Evaluations

Posted on May 27, 2014

Our guest author today is Cory Koedel, Assistant Professor of Economics at the University of Missouri.

In a 2012 post on this blog, Dr. Di Carlo reviewed an article that I coauthored with colleagues Mark Ehlert, Eric Parsons and Michael Podgursky. The initial article (full version here, or for a shorter, less technical version, see here) argues for the policy value of growth models that are designed to force comparisons between schools and teachers in observationally similar circumstances.

The discussion is couched within the context of achieving three key policy objectives that we associate with the adoption of more-rigorous educational evaluation systems: (1) improving system-wide instruction by providing useful performance signals to schools and teachers; (2) eliciting optimal effort from school personnel; and (3) ensuring that current labor-market inequities between advantaged and disadvantaged schools are not exacerbated by the introduction of the new systems.

We argue that a model that forces comparisons between equally-circumstanced schools and teachers – which we describe as a “proportional” model – is best suited to achieve these policy objectives. The conceptual appeal of the proportional approach is that it fully levels the playing field between high- and low-poverty schools. In contrast, some other growth models have been shown to produce estimates that are consistently associated with the characteristics of students being served (e.g., Student Growth Percentiles). Read More »
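As a loose illustration of the idea (not the model from the article, which is a two-stage regression), one can rank each school’s growth only against schools in the same poverty stratum, so that high- and low-poverty schools are never compared directly; the data below are invented:

```python
import pandas as pd

# Invented school-level data: poverty rates and growth estimates.
data = {
    "school":  list("ABCDEFGHIJ"),
    "frl_pct": [90, 85, 70, 65, 55, 50, 35, 30, 15, 10],
    "growth":  [0.2, -0.1, 0.3, 0.0, 0.1, -0.2, 0.4, 0.1, -0.3, 0.2],
}
df = pd.DataFrame(data)
df["poverty_stratum"] = pd.qcut(df["frl_pct"], q=5, labels=False)

# Statewide rank vs. within-stratum rank: the latter levels the field.
df["rank_overall"] = df["growth"].rank(pct=True)
df["rank_within"] = df.groupby("poverty_stratum")["growth"].rank(pct=True)
print(df)
```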


Matching Up Teacher Value-Added Between Different Tests

Posted on February 11, 2014

The U.S. Department of Education has released a very short, readable report on the comparability of value-added estimates using two different tests in Indiana – one of them norm-referenced (the Measures of Academic Progress test, or MAP), and the other criterion-referenced (the Indiana Statewide Testing for Educational Progress Plus, or ISTEP+, which is also the state’s official test for NCLB purposes).

The research design here is straightforward: fourth and fifth grade students in 46 schools across 10 districts in Indiana took both tests, their teachers’ value-added scores were calculated, and the scores were compared. Since both sets of scores were based on the same students and teachers, this allows a direct comparison of how teachers’ value-added estimates line up between the two tests. The results are not surprising, and they square with similar prior studies (see here, here, here, for example): The estimates based on the two tests are moderately correlated. Depending on the grade/subject, the correlations are between 0.4 and 0.7. If you’re not used to interpreting correlation coefficients, consider that only around one-third of teachers were in the same quintile (fifth) on both tests, and another 40 or so percent were one quintile higher or lower. So, most teachers were within one quintile, about a quarter of teachers moved two or more quintiles, and a small percentage moved from top to bottom or vice-versa.
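A small simulation helps build intuition for these numbers. If two tests are noisy measures of the same underlying teacher effect, a correlation of around 0.5 mechanically produces roughly this pattern of quintile movement (everything below is simulated, not the study’s data):

```python
import numpy as np

# Two noisy estimates of the same underlying teacher effect; noise is
# calibrated so that the two estimates correlate at about 0.5.
rng = np.random.default_rng(42)
n = 10000
true_effect = rng.normal(0, 1, n)
map_est = true_effect + rng.normal(0, 1.0, n)
istep_est = true_effect + rng.normal(0, 1.0, n)

r = np.corrcoef(map_est, istep_est)[0, 1]

def quintile(x):
    # Assign each value to a quintile (0-4) via its position among cut points.
    return np.searchsorted(np.quantile(x, [0.2, 0.4, 0.6, 0.8]), x)

same = np.mean(quintile(map_est) == quintile(istep_est))
adjacent = np.mean(np.abs(quintile(map_est) - quintile(istep_est)) == 1)
print(f"correlation: {r:.2f}; same quintile: {same:.0%}; adjacent: {adjacent:.0%}")
```

Run as written, this yields roughly one-third of simulated teachers in the same quintile and around 40 percent one quintile away, which is the pattern the report describes.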

Although, as mentioned above, these findings are in line with prior research, it is worth remembering why this “instability” occurs (and what can be done about it). Read More »


Teacher Retention In An Era Of Rapid Reform

Posted on February 7, 2014

The Center for American Progress (CAP) recently released a short report on whether teachers were leaving the profession due to reforms implemented during the Obama Administration, as some commentators predicted.

The authors use data from the Schools and Staffing Survey (SASS), a wonderful national survey of U.S. teachers, and they report that 70 percent of first-year teachers in 2007-08 were still teaching in 2011-12. They claim that this high retention of beginning teachers, along with the fact that most teachers in 2011-12 had five or more years of experience, shows that “the teacher retention concerns were unfounded.”

This report raises a couple of important points about the debate over teacher retention during this time of sweeping reform.

Read More »

