The Fatal Flaw Of Education Reform

Posted on September 11, 2014

In the most simplistic portrayal of the education policy landscape, one of the “sides” is a group of people who are referred to as “reformers.” Though far from monolithic, these people tend to advocate for test-based accountability, charters/choice, overhauling teacher personnel rules, and other related policies, with a particular focus on high expectations, competition and measurement. They also frequently see themselves as in opposition to teachers’ unions.

Most of the “reformers” I have met and spoken with are not quite so easy to categorize. They are also thoughtful and open to dialogue, even when we disagree. And, at least in my experience, there is far more common ground than one might expect.

Nevertheless, I believe that this “movement” (to whatever degree you can characterize it in those terms) may be doomed to stall out in the long run, not because its ideas are all bad, and certainly not because its proponents lack the political skills and resources to get their policies enacted. Rather, they risk failure for a simple reason: They too often make promises that they cannot keep. Read More »


The Great Teacher Evaluation Evaluation: New York Edition

Posted on September 8, 2014

A couple of weeks ago, the New York State Education Department (NYSED) released data from the first year of the state’s new teacher and principal evaluation system (called the “Annual Professional Performance Review,” or APPR). In what has become a familiar pattern, this prompted a wave of criticism from advocates, much of it focused on the proportion of teachers in the state who received the lowest ratings.

To be clear, evaluation systems that produce non-credible results should be examined and improved, and that includes those that put implausible proportions of teachers in the highest and lowest categories. Much of the commentary surrounding this and other issues has been thoughtful and measured. As usual, though, there have been some oversimplified reactions, as exemplified by this piece on the APPR results from Students First NY (SFNY).

SFNY notes what it considers to be the low proportion of teachers rated “ineffective,” and points out that there was more differentiation across rating categories for the state growth measure (worth 20 percent of teachers’ final scores), compared with the local “student learning” measure (20 percent) and the classroom observation components (60 percent). Based on this, they conclude that New York’s “state test is the only reliable measure of teacher performance” (they are actually talking about validity, not reliability, but we’ll let that go). Again, this argument is not representative of the commentary surrounding the APPR results, but let’s use it as a springboard for making a few points, most of which are not particularly original. (UPDATE: After publication of this post, SFNY changed the headline of their piece from “the only reliable measure of teacher performance” to “the most reliable measure of teacher performance.”) Read More »


Why Teachers And Researchers Should Work Together For Improvement

Posted on September 4, 2014

Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the first of two posts on research-practice partnerships; both are part of The Social Side of Reform Shanker Blog series.

Policymakers are asking a lot of public school teachers these days, especially when it comes to the shifts in teaching and assessment required to implement new, ambitious standards for student learning. Teachers want and need more time and support to make these shifts. A big question is: What kinds of support and guidance can educational research and researchers provide?

Unfortunately, that question is not easy to answer. Most educational researchers spend much of their time answering questions that are of more interest to other researchers than to practitioners.  Even if researchers did focus on questions of interest to practitioners, teachers and teacher leaders need answers more quickly than researchers can provide them. And when researchers and practitioners do try to work together on problems of practice, it takes a while for them to get on the same page about what those problems are and how to solve them. It’s almost as if researchers and practitioners occupy two different cultural worlds. Read More »


No Teacher Is An Island: The Role Of Social Relations In Teacher Evaluation

Posted on August 19, 2014

Our guest authors today are Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego, and Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester. Daly and Finnigan have published numerous articles on social network analysis in education and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.

Teacher evaluation is a hotly contested topic, with vigorous debate around issues of testing, measurement, and what is considered ‘important’ in terms of student learning, not to mention the potentially high-stakes decisions that may be made as a result of these assessments. At its best, this discussion has reinvigorated a national dialogue around teaching practice and research; at its worst, it has polarized and entrenched stakeholder groups into rigid camps. How can we avoid the calcification of opinion and continue a constructive dialogue around this important and complex issue?

One way, as we suggest here, is to continue to discuss alternatives around teacher evaluation, and to be thoughtful about the role of social interactions in student outcomes, particularly as it relates to the current conversation around value-added models. It is in this spirit that we ask: Is there a ‘social side’ to a teacher’s ability to add value to their students’ growth and, if so, what are the implications for current teacher evaluation models? Read More »


Social Capital Matters As Much As Human Capital – A Message To Skeptics

Posted on August 4, 2014

This is the third post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.

In recent posts (here and here), we have been arguing that social capital — social relations and the resources that can be accessed through them (e.g., support, knowledge) — is an enormously important component of educational improvement. In fact, I have suggested that understanding and promoting social capital in schools may be as promising as focusing on personnel (or human capital) policies such as teacher evaluation, compensation and so on. 

My sense is that many teachers and principals support this argument, but I am also very interested in making the case to those who may disagree. I doubt very many people would disagree with the idea that relationships matter, but perhaps there are more than a few skeptics when it comes to how much they matter, and especially to whether or not social capital can be as powerful and practical a policy lever as human capital.

In other words, there are most likely those who view social capital as something that cannot really be leveraged cost-effectively through policy intervention to produce any significant impact, in no small part because it focuses on promoting things that already happen and/or that cannot be mandated. For example, teachers already spend time together, and they cannot or should not be required to do so more often, at least not to an extent that would make a difference for student outcomes (although this could be said of almost any policy). Read More »


The Importance Of Relationships In Educational Reform

Posted on July 7, 2014

* Reprinted here in the Washington Post

This is the first post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.

Our guest authors today are Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester, and Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego. Finnigan and Daly have published numerous articles on social network analysis in education in academic and practitioner journals, and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.

There are many reforms out there. What if these ideas are not working as well as they could because educators are simply not communicating or building meaningful relationships with each other, or because the conditions in which they do their work do not support productive interactions? These are important issues to understand, and our research, some of which we highlight in this post, underscores the importance of the relational element in reform. To further explore the social side of the change equation, we draw on social network research to highlight the importance of relationships as conduits through which valued resources flow and can bring about system-wide change.

A few years ago, Arne Duncan noted that “[NCLB] has created a thousand ways for schools to fail and very few ways to help them succeed.” We think that may have to do with an overreliance on technical fixes and prescriptive approaches, and the scant attention paid to the context — particularly the social context — in which reforms are implemented. But what would things look like if we took a more relational approach to educational improvement? Read More »


We Can’t Just Raise Expectations

Posted on April 30, 2014

* Reprinted here in the Washington Post

What exactly is “a culture of high expectations” and how is it created? In fact, what are expectations? I ask these questions because I hear this catchphrase a lot, but it doesn’t seem like the real barriers to developing such a culture are well understood. If we are serious about raising expectations for all learners, we need to think seriously about what expectations are, how they work and what it might take to create environments that equalize high expectations for what students can achieve.

In this post I explain why I think the idea of “raising expectations” — when used carelessly and as a slogan — is meaningless. Expectations are not test scores. They are related to standards but are not the same thing. Expectations are a complex and unobservable construct — succinctly, they are unconscious anticipations of performance. Changing expectations for competence is not easy, but it is possible — I get at some of that later.

Certain conditions, however, need to be in place — e.g., a broad conceptualization of ability, a cooperative environment, etc. It is unclear that these conditions are present in many of our schools. In fact, many are worried that the opposite is happening. The research and theory I examine here suggest that extreme standardization and competition are incompatible with equalizing expectations in the classroom. They suggest, rather, that current reforms might be making it more difficult to develop and sustain high expectations for all students, and to create classrooms where all students experience similar opportunities to learn. Read More »


The Middle Ground Between Opt Out And All In

Posted on April 11, 2014

A couple of weeks ago, Michelle Rhee published an op-ed in the Washington Post speaking out against the so-called “opt out movement,” which encourages parents to refuse to let their children take standardized tests.

Personally, I oppose the “opt-out” phenomenon, but I also think it would be a mistake not to pay attention to its proponents’ fundamental issue – that standardized tests are potentially being misused and/or overused. This concern is legitimate and important. My sense is that “opting out” reflects a rather extreme version of this mindset, a belief that we cannot right the ship – i.e., we have gone so far and moved so carelessly with test-based accountability that there is no real hope that it can or will be fixed. This strikes me as a severe overreaction, but I understand the sentiment.

That said, while most of Ms. Rhee’s op-ed is the standard, reasonable fare, some of it is also laced with precisely the kind of misconceptions that contribute to the apprehensions not only of anti-testing advocates, but also of those of us who occupy a middle ground – i.e., who favor some test-based accountability, but are worried about getting it right. Read More »


SIG And The High Price Of Cheap Evidence

Posted on March 11, 2014

A few months ago, the U.S. Department of Education (USED) released the latest data from schools that received grants via the School Improvement Grants (SIG) program. These data — consisting solely of changes in proficiency rates — were widely reported as an indication of “disappointing” or “mixed” results. Some even went as far as proclaiming the program a complete failure.

Once again, I have to point out that this breaks almost every rule of testing data interpretation and policy analysis. I’m not going to repeat the arguments about why changes in cross-sectional proficiency rates are not policy evidence (see our posts here, here and here, or examples from the research literature here, here and here). Suffice it to say that the changes themselves are not even particularly good indicators of whether students’ test-based performance in these schools actually improved, to say nothing of whether it was the SIG grants that were responsible for the changes. There’s more to policy analysis than subtraction.

So, in some respects, I would like to come to the defense of Secretary Arne Duncan and USED right now – not because I’m a big fan of the SIG program (I’m ambivalent at best), but rather because I believe in strong, patient policy evaluation, and these proficiency rate changes are virtually meaningless. Unfortunately, however, USED was the first to portray, albeit very cautiously, rate changes as evidence of SIG’s impact. In doing so, they provided a very effective example of why relying on bad evidence is a bad idea even if it supports your desired conclusions. Read More »


In Education Policy, Good Things Come In Small Packages

Posted on March 7, 2014

A recent report from the U.S. Department of Education presented a summary of three recent studies of the differences in the effectiveness of teaching provided to advantaged and disadvantaged students (with effectiveness defined in terms of value-added scores, and disadvantage in terms of subsidized lunch eligibility). The brief characterizes the results of these reports in an accessible manner – the difference in estimated teaching effectiveness between advantaged and disadvantaged students varies quite widely between districts, but overall amounts to about four percent of the achievement gap in reading and 2-3 percent in math.

Some observers were not impressed. They wondered why so-called reformers are alienating teachers and hurting students in order to address a mere 2-4 percent improvement in the achievement gap.

Just to be clear, the 2-4 percent figures describe the gap (and remember that it varies). Whether it can be narrowed or closed – e.g., by improving working conditions or offering incentives or some other means – is a separate issue. Nevertheless, let’s put aside all the substantive aspects surrounding these studies, and the issue of the distribution of teacher quality, and discuss this 2-4 percent thing, as it illustrates what I believe is among the most important tensions underlying education policy today: Our collective failure to have a reasonable debate about expectations and the power of education policy. Read More »


Revisiting The Widget Effect

Posted on March 4, 2014

In 2009, The New Teacher Project (TNTP) released a report called “The Widget Effect.” You would be hard-pressed to find many recent publications from an advocacy group that have had a larger influence on education policy and the debate surrounding it. To this day, the report is mentioned regularly by advocates and policymakers.

The primary argument of the report was that teacher performance “is not measured, recorded, or used to inform decision making in any meaningful way.” More specifically, the report shows that most teachers received “satisfactory” or equivalent ratings, and that evaluations were not tied to most personnel decisions (e.g., compensation, layoffs, etc.). From these findings and arguments comes the catchy title – a “widget” is a fictional product commonly used in situations (e.g., economics classes) where the particular product doesn’t matter. Thus, treating teachers like widgets means that we treat them all as if they’re the same.

Given the influence of “The Widget Effect,” as well as how different the teacher evaluation landscape is now compared to when it was released, I decided to read it closely. Having done so, I think it’s worth discussing a few points about the report. Read More »


Teacher Retention In An Era Of Rapid Reform

Posted on February 7, 2014

The Center for American Progress (CAP) recently released a short report on whether teachers were leaving the profession due to reforms implemented during the Obama Administration, as some commentators predicted.

The authors use data from the Schools and Staffing Survey (SASS), a wonderful national survey of U.S. teachers, and they report that 70 percent of first-year teachers in 2007-08 were still teaching in 2011-12. They claim that this high retention of beginning teachers, along with the fact that most teachers in 2011-12 had five or more years of experience, shows that “the teacher retention concerns were unfounded.”

This report raises a couple of important points about the debate over teacher retention during this time of sweeping reform.

Read More »


The Year In Research On Market-Based Education Reform: 2013 Edition

Posted on December 17, 2013

In the three most discussed and controversial areas of market-based education reform – performance pay, charter schools and the use of value-added estimates in teacher evaluations – 2013 saw the release of a couple of truly landmark reports, in addition to the normal flow of strong work coming from the education research community (see our reviews from 2010, 2011 and 2012).*

In one sense, this building body of evidence is critical and even comforting, given not only the rapid expansion of charter schools, but also and especially the ongoing design and implementation of new teacher evaluations (which, in many cases, include performance-based pay incentives). In another sense, however, there is good cause for anxiety. Although one must try policies before knowing how they work, the sheer speed of policy change in the U.S. right now means that policymakers are making important decisions on the fly, and there is a great deal of uncertainty as to how this will all turn out.

Moreover, while 2013 was without question an important year for research in these three areas, it also illustrated an obvious point: Proper interpretation and application of findings is perhaps just as important as the work itself. Read More »


Immediate Gratification And Education Policy

Posted on December 9, 2013

A couple of months ago, Bill Gates said something that received a lot of attention. With regard to his foundation’s education reform efforts, which focus most prominently on teacher evaluations, but encompass many other areas, he noted, “we don’t know if it will work.” In fact, according to Mr. Gates, “we won’t know for probably a decade.”

He’s absolutely correct. Most education policies, including (but not limited to) those geared toward shifting the distribution of teacher quality, take a long time to work (if they do work), and the research assessing these policies requires a great deal of patience. Yet so many of the most prominent figures in education policy routinely espouse the opposite viewpoint: Policies are expected to have an immediate, measurable impact (and their effects are assessed in the crudest manner imaginable).

A perfect example was the reaction to the recent release of results of the National Assessment of Educational Progress (NAEP). Read More »


A Research-Based Case For Florida’s Education Reforms

Posted on September 26, 2013

Advocates of the so-called “Florida Formula,” a package of market-based reforms enacted throughout the 1990s and 2000s, some of which are now spreading rapidly in other states, traveled to Michigan this week to make their case to the state’s lawmakers, with particular emphasis on Florida’s school grading system. In addition to arguments about accessibility and parental involvement, their empirical (i.e., test-based) evidence consisted largely of the standard, invalid claims that cross-sectional NAEP increases prove the reforms’ effectiveness, along with a bonus appearance of the argument that since Florida started grading schools, the grades have improved, even though this is largely (and demonstrably) a result of changes in the formula.

As mentioned in a previous post, I continue to be perplexed at advocates’ insistence on using this “evidence,” even though there is a decent amount of actual rigorous policy research available, much of it positive.

So, I thought it would be fun, though slightly strange, for me to try on my market-based reformer cap, and see what it would look like if this kind of testimony about the Florida reforms was actually research-based (at least the test-based evidence). Here’s a very rough outline of what I came up with: Read More »

