A new Mathematica report examines the test-based impact of The Equity Project (TEP), a New York City charter school serving grades 5-8. TEP opened for the 2009-10 school year, receiving national attention largely due to one unusual policy: It pays teachers $125,000 per year, regardless of experience and education, plus annual bonuses (up to $25,000) for returning teachers. TEP largely offsets these unusually high salary costs by minimizing the number of administrators and maintaining larger class sizes.
As is typical of Mathematica, the TEP analysis is thorough and well-done. The school’s students’ performance is compared to that of similar peers with a comparable probability of enrolling in TEP, as identified with propensity scores. In general, the study’s results were quite positive. Although there were statistically discernible negative impacts of attendance for TEP’s first cohort of students during their first two years, the cumulative estimated test-based impact was significant, positive and educationally meaningful after three and four years of attendance. As is often the case, the estimated effect was stronger in math than in reading (estimated effect sizes for the former were very large in magnitude). The Mathematica researchers also present analyses on student attrition, which did not appear to bias the estimates substantially, and they show that their primary results are robust when using alternative specifications (e.g., different matching techniques, score transformations, etc.).
Now we get to the tricky questions about these results: What caused them and what can be learned as a result? That’s the big issue with charter analyses in general (and with research on many other interventions): One can almost never separate the “why” from the “what” with any degree of confidence. And TEP, with its “flagship policy” of high teacher salaries, which might appeal to all “sides” in the education policy debate, provides an interesting example in this respect.
Our guest authors today are Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester, and Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego. Finnigan and Daly have published numerous articles on social network analysis, and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.
“All the world’s a stage,
And all the men and women merely players;
They have their exits and their entrances.”
Shakespeare – As You Like It
All too often in districts under intense accountability pressures, exits and entrances happen frequently and repeatedly. One might conceptualize the work of district reform as a play in which actors are beginning to learn their lines and block places on the stage but, just as the play is underway, some key actors leave and others join, causing disruption to the performance. Now, if all of those who leave or join have smaller roles, the disruption may be less extreme, but if most are lead actors or the director or even the head of costume design, you’d likely have to push back opening night.
Our guest authors today are Carrie R. Leana, George H. Love Professor of Organizations and Management, Professor of Business Administration, Medicine, and Public and International Affairs, and Director of the Center for Health and Care Work, at the University of Pittsburgh, and Frits K. Pil, Professor of Business Administration at the Katz Graduate School of Business and research scientist at the Learning Research and Development Center, at the University of Pittsburgh. This column is part of The Social Side of Reform Shanker Blog series.
Most current models of school reform focus on teacher accountability for student performance measured via standardized tests, “improved” curricula, and what economists label “human capital” – e.g., factors such as teacher experience, subject knowledge and pedagogical skills. But our research over many years in several large school districts suggests that if students are to show real and sustained learning, schools must also foster what sociologists label “social capital” – the value embedded in relations among teachers, and between teachers and school administrators. Social capital is the glue that holds a school together. It complements teacher skill, it enhances teachers’ individual classroom efforts, and it enables collective commitment to bring about school-wide change.
We are professors at a leading business school who have conducted research in a broad array of settings, ranging from steel mills and auto plants to insurance offices, banks, and even nursing homes. We examine how formal and informal work practices enhance organizational learning and performance. What we have found over and over again is that, regardless of context, organizational success rarely stems from the latest technology or a few exemplary individuals.
Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the second of two posts on research-practice partnerships – read part one here; both posts are part of The Social Side of Reform Shanker Blog series.
In my first post on research-practice partnerships, I highlighted the need for partnerships and pointed to some potential benefits of long-term collaborations between researchers and practitioners. But how do you know when an arrangement between researchers and practitioners is a research-practice partnership? Where can people go to learn about how to form and sustain research-practice partnerships? Who funds this work?
In this post I answer these questions and point to some resources researchers and practitioners can use to develop and sustain partnerships.
One of the more visible manifestations of what I have called “informal test-based accountability” — that is, how testing results play out in the media and public discourse — is the phenomenon of superintendents, particularly big city superintendents, making their reputations based on the results during their administrations.
In general, big city superintendents are expected to promise large testing increases, and their success or failure is to no small extent judged on whether those promises are fulfilled. Several superintendents almost seem to have built entire careers on a few (misinterpreted) points in proficiency rates or NAEP scale scores. This particular phenomenon, in my view, is rather curious. For one thing, any district leader will tell you that many of their core duties, such as improving administrative efficiency, communicating with parents and the community, strengthening the district’s financial situation, etc., might have little or no impact on short-term testing gains. In addition, even those policies that do have such an impact often take many years to show up in aggregate results.
In short, judging superintendents based largely on the testing results during their tenures seems misguided. A recent report issued by the Brown Center at Brookings, and written by Matt Chingos, Grover Whitehurst and Katharine Lindquist, adds a little bit of empirical insight to this viewpoint.
Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the first of two posts on research-practice partnerships; both are part of The Social Side of Reform Shanker Blog series.
Policymakers are asking a lot of public school teachers these days, especially when it comes to the shifts in teaching and assessment required to implement new, ambitious standards for student learning. Teachers want and need more time and support to make these shifts. A big question is: What kinds of support and guidance can educational research and researchers provide?
Unfortunately, that question is not easy to answer. Most educational researchers spend much of their time answering questions that are of more interest to other researchers than to practitioners. Even if researchers did focus on questions of interest to practitioners, teachers and teacher leaders need answers more quickly than researchers can provide them. And when researchers and practitioners do try to work together on problems of practice, it takes a while for them to get on the same page about what those problems are and how to solve them. It’s almost as if researchers and practitioners occupy two different cultural worlds.
There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).
The third factor that typically determines teachers’ salaries is their level of education. Usually, teachers receive a permanent raise for acquiring additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts award raises for accumulating a certain number of credits towards a master’s and/or a Ph.D., and for getting a Ph.D.). The raise for receiving a master’s degree varies, but just to give an idea, it is, on average, about 10 percent over the base salary of bachelor’s-only teachers.
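Taken together, the three factors described above amount to a simple table lookup: districts publish schedules with "steps" for experience and "lanes" for education. A minimal sketch in Python, using entirely hypothetical dollar figures (no real district's schedule is represented here):

```python
# A toy salary schedule: "steps" for years of experience, "lanes" for
# education level. All dollar figures are hypothetical, for illustration only.
BASE = 45_000          # starting bachelor's-only base salary
STEP_RAISE = 1_500     # annual step raise, until the "top step" is reached
TOP_STEP = 20
LANE_MULTIPLIER = {    # permanent raises for education beyond a bachelor's
    "BA": 1.00,
    "MA": 1.10,        # e.g., roughly 10 percent over the BA-only base
    "PhD": 1.15,
}

def salary(years: int, lane: str) -> float:
    """Base pay from experience (capped at the top step), scaled by lane."""
    step = min(years, TOP_STEP)
    return (BASE + step * STEP_RAISE) * LANE_MULTIPLIER[lane]

print(salary(0, "BA"))   # a brand-new bachelor's-only teacher
print(salary(10, "MA"))  # a mid-career teacher with a master's degree
```

Note that under this structure the master's-degree raise compounds with every step raise, which is part of why critics describe it as expensive over a career.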
This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere.
Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it’s useful to have ASA add their perspective to the debate on this issue, but also because their statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.
Some of these folks claimed that the ASA supported their viewpoint – i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates, and urged caution, but I think that the statement rather explicitly reaches a more nuanced conclusion: That value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but this must be done carefully and appropriately.*
Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures, which are not always so widely acknowledged.
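For readers unfamiliar with the basic mechanics being debated, one heavily simplified version of a value-added estimate is a teacher's average residual from a regression of students' current scores on their prior scores. The sketch below uses made-up, simulated data and omits nearly everything real models include (student covariates, shrinkage, multiple years of data); it is meant only to illustrate the core logic, not any specific model discussed in the ASA statement:

```python
import random
import statistics

random.seed(0)

# Made-up data: each student has a prior score, a current score, and a teacher.
# Teacher "effects" are baked into the simulation: A +3, B 0, C -3 points.
teachers = ["A", "B", "C"]
true_effect = {"A": 3, "B": 0, "C": -3}
students = []
for t in teachers:
    for _ in range(50):
        prior = random.gauss(50, 10)
        current = 10 + 0.8 * prior + true_effect[t] + random.gauss(0, 5)
        students.append((t, prior, current))

# Fit current = a + b * prior by ordinary least squares (simple regression).
n = len(students)
mx = sum(p for _, p, _ in students) / n
my = sum(c for _, _, c in students) / n
b = (sum((p - mx) * (c - my) for _, p, c in students)
     / sum((p - mx) ** 2 for _, p, _ in students))
a = my - b * mx

# A teacher's "value-added" estimate: the mean residual among their students,
# i.e., how far their students' actual scores sit above or below prediction.
vam = {t: statistics.mean(c - (a + b * p) for tt, p, c in students if tt == t)
       for t in teachers}
print({t: round(v, 2) for t, v in vam.items()})
```

Even in this noiseless-by-comparison toy setting, the estimates only approximate the built-in effects; with real data, the statement's cautions about year-to-year instability and bias apply with much more force.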
Our guest authors today are Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego, and Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester. Daly and Finnigan have published numerous articles on social network analysis in education and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.
Teacher evaluation is a hotly contested topic, with vigorous debate happening around issues of testing, measurement, and what is considered ‘important’ in terms of student learning, not to mention the potential high stakes decisions that may be made as a result of these assessments. At its best, this discussion has reinvigorated a national dialogue around teaching practice and research; at its worst it has polarized and entrenched stakeholder groups into rigid camps. How is it we can avoid the calcification of opinion and continue a constructive dialogue around this important and complex issue?
One way, as we suggest here, is to continue to discuss alternatives around teacher evaluation, and to be thoughtful about the role of social interactions in student outcomes, particularly as it relates to the current conversation around value-added models. It is in this spirit that we ask: Is there a ‘social side’ to a teacher’s ability to add value to their students’ growth and, if so, what are the implications for current teacher evaluation models?
Our guest author today is Travis J. Bristol, former high school English teacher in New York City public schools and teacher educator with the Boston Teacher Residency program, who is currently a research and policy fellow at the Stanford Center for Opportunity Policy in Education (SCOPE) at Stanford University.
The challenges faced by Black male teachers in schools may serve as the canary in the coalmine that begins to explain the debilitating condition faced by Black boys in schools. Black males represent 1.9% of all public school teachers yet have one of the highest rates of turnover. Attempts to increase the number of Black male teachers are based on research that suggests these new recruits can improve Black students’ schooling outcomes.
Below, I discuss my study of the school-based experiences of 27 Black male teachers in Boston Public Schools (BPS), who represent approximately 10 percent of all Black male teachers in the district. This study, which I recently discussed on Boston’s NPR news station, is one of the largest studies conducted exclusively on Black male teachers and has implications for policymakers as well as school administrators looking to recruit and retain Black male educators.
Here is a summary of the key findings.
This is the third post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.
In recent posts (here and here), we have been arguing that social capital — social relations and the resources that can be accessed through them (e.g., support, knowledge) — is an enormously important component of educational improvement. In fact, I have suggested that understanding and promoting social capital in schools may be as promising as focusing on personnel (or human capital) policies such as teacher evaluation, compensation and so on.
My sense is that many teachers and principals support this argument, but I am also very interested in making the case to those who may disagree. I doubt very many people would disagree with the idea that relationships matter, but perhaps there are more than a few skeptics when it comes to how much they matter, and especially to whether or not social capital can be as powerful and practical a policy lever as human capital.
In other words, there are, most likely, those who view social capital as something that cannot really be leveraged cost-effectively with policy intervention toward any significant impact, in no small part because it focuses on promoting things that already happen and/or that cannot be mandated. For example, teachers already spend time together and cannot/should not be required to do so more often, at least not to an extent that would make a difference for student outcomes (although this could be said of almost any policy).
The so-called Vergara trial in California, in which the state’s tenure and layoff statutes were deemed unconstitutional, already has its first “spin-off,” this time in New York, where a newly-formed organization, the Partnership for Educational Justice (PEJ), is among the organizations and entities spearheading the effort.
Upon first visiting PEJ’s new website, I was immediately (and predictably) drawn to the “Research” tab. It contains five statements (which, I guess, PEJ would characterize as “facts”). Each argument is presented in the most accessible form possible, typically accompanied by one citation (or two at most). I assume that the presentation of evidence in the actual trial will be a lot more thorough than that offered on this webpage, which seems geared toward the public rather than the more extensive evidentiary requirements of the courtroom (also see Bruce Baker’s comments on many of these same issues surrounding the New York situation).
That said, I thought it might be useful to review the basic arguments and evidence PEJ presents, not really in the context of whether they will “work” in the lawsuit (a judgment I am unqualified to make), but rather because they’re very common. It has also been my observation that advocates, on both “sides” of the education debate, tend to be fairly good at using data and research to describe problems and/or situations, yet sometimes fall a bit short when it comes to evidence-based discussions of what to do about them (including the essential task of acknowledging when the evidence is still undeveloped). PEJ’s five bullet points, discussed below, are pretty good examples of what I mean.
* Reprinted here in the Washington Post
This is the second post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.
Debates about how to improve educational outcomes for students often involve two ‘camps’: Those who focus on the impact of “in-school factors” on student achievement; and those who focus on “out-of-school factors.” There are many in-school factors discussed but improving the quality of individual teachers (or teachers’ human capital) is almost always touted as the main strategy for school improvement. Out-of-school factors are also numerous but proponents of this view tend toward addressing broad systemic problems such as poverty and inequality.
Social capital — the idea that relationships have value, that social ties provide access to important resources like knowledge and support, and that a group’s performance can often exceed that of the sum of its members — is something that rarely makes it into the conversation. But why does social capital matter?
Research suggests that teachers’ social capital may be just as important to student learning as their human capital. In fact, some studies indicate that if school improvement policies addressed teachers’ human and social capital simultaneously, they would go a long way toward mitigating the effects of poverty on student outcomes. Sounds good, right? The problem is: Current policy does not resemble this approach. Researchers, commentators and practitioners have shown and lamented that many of the strategies leveraged to increase teachers’ human capital often do so at the expense of eroding social capital in our schools. In other words, these approaches are moving us one step forward and two steps back.
* Reprinted here in the Washington Post
This is the first post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.
Our guest authors today are Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester, and Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego. Finnigan and Daly have published numerous articles on social network analysis in education in academic and practitioner journals, and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.
There are many reforms out there; what if these ideas are not working as well as they could because educators are simply not communicating or building meaningful relationships with each other, or because the conditions in which they do their work do not support productive interactions? These are important issues to understand, and our research, some of which we highlight in this post, underscores the importance of the relational element in reform. To further explore the social side of the change equation, we draw on social network research as a way to highlight the importance of relationships as conduits through which valued resources flow and can bring about system-wide change.
A few years ago Arne Duncan noted that “[NCLB] has created a thousand ways for schools to fail and very few ways to help them succeed.” We think that may have to do with the overreliance on technical fixes, prescriptive approaches and the scant attention to the context — particularly the social context — in which reforms are implemented. But what would things look like if we took a more relational approach to educational improvement?
Anyone who follows education policy debates might hear the term “standard deviation” fairly often. Most people have at least some idea of what it means, but I thought it might be useful to lay out a quick, (hopefully) clear explanation, since it’s useful for the proper interpretation of education data and research (as well as that in other fields).
Many outcomes or measures, such as height or blood pressure, approximately follow what’s called a “normal distribution.” Simply put, this means that such measures tend to cluster around the mean (or average), and taper off in both directions the further one moves away from the mean (due to its shape, this is often called a “bell curve”). In practice, and especially when samples are small, distributions are imperfect — e.g., the bell is messy or a bit skewed to one side — but in general, with many measures, there is clustering around the average.
Let’s use test scores as our example. Suppose we have a group of 1,000 students who take a test (scored 0-20). A simulated score distribution is presented in the figure below (called a “histogram”).
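The example above is easy to reproduce in code. The sketch below simulates 1,000 scores on a 0-20 scale (these are made-up numbers, not the data behind the post's figure) and checks the familiar rule of thumb that roughly 68 percent of a normal distribution falls within one standard deviation of the mean:

```python
import random
import statistics

# Simulate 1,000 test scores on a 0-20 scale, roughly normally distributed
# around a mean of 10 with a standard deviation of 3 (arbitrary choices).
random.seed(42)
scores = [min(20, max(0, round(random.gauss(10, 3)))) for _ in range(1000)]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)  # population standard deviation

# With a roughly normal distribution, about 68 percent of scores should
# fall within one standard deviation of the mean.
within_one_sd = sum(1 for s in scores if abs(s - mean) <= sd) / len(scores)

print(f"mean={mean:.1f}, sd={sd:.1f}, within 1 sd={within_one_sd:.0%}")
```

Rounding to whole scores and capping at 0 and 20 make the simulated bell slightly "messy," which mirrors the point above about imperfect real-world distributions.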