Our guest author today is Kenneth Frank, professor in Measurement and Quantitative Methods at the Department of Counseling, Educational Psychology and Special Education at Michigan State University. This column is part of The Social Side of Reform Shanker Blog series.
Maybe it’s because I grew up in Michigan, but when I think of how to improve schools, I think about the “Magic Johnson effect.” During his time at Michigan State, Earvin “Magic” Johnson scored an average of 17 points per game. Good, but many others have had higher averages. Yet, I would want Magic Johnson on my team because he made everyone around him better. Similarly, the best teachers may be those who make everyone around them better. This way of thinking is not the focus of many current educational reforms, which draw on individual competition and market metaphors.
So how can we leverage the Magic Johnson effect to make schools better? We have to think of ways that teachers can work together. This might be in terms of co-teaching, sharing materials, or taking the time to engage one another in honest professional dialogues. There is considerable evidence that teachers who can draw on the expertise of colleagues are better able to implement new practices. There is also evidence that when there is an atmosphere of trust teachers can engage in honest dialogues that can improve teaching practices and student achievement (e.g., Bryk and Schneider, 2002).
Earlier this year, a paper by Roderick I. Swaab and colleagues received considerable media attention (e.g., see here, here, and here). The research questioned the widely shared belief that bringing together the most talented individuals always produces the best result. The authors looked at data from various sports (e.g., player characteristics and behavior, team performance, etc.), and were able to demonstrate that there is such a thing as “too much talent,” and that having too many superstars can hurt overall team performance, at least when the sport requires cooperation among team members.
My immediate questions after reading the paper were: Do these findings generalize outside the world of sports and, if so, what might be the implications for education? To my surprise, I did not find much commentary or analysis addressing them. I am sure not everybody saw the paper, but I also wonder if this absence might have something to do with how teaching is generally viewed: More like baseball (i.e., a more individualistic team sport) than, say, like basketball. But in our social side of education reform series, we have been discussing a wealth of compelling research suggesting that teaching is not individualistic at all, and that schools thrive on trusting relationships and cooperation, rather than competition and individual prowess.
So, if teaching is indeed more like basketball than like baseball, what are the implications of this study for strategies and policies aimed at identifying, developing and supporting teaching quality?
Our “social side of education reform” series has emphasized that teaching is a cooperative endeavor, and as such is deeply influenced by the quality of a school’s social environment — i.e., trusting relationships, teamwork and cooperation. But what about learning? To what extent are dispositions such as motivation, persistence and engagement mediated by relationships and the social-relational context?
This is, of course, a very complex question, which can’t be addressed comprehensively here. But I would like to discuss three papers that provide some important answers. In terms of our “social side” theme, the studies I will highlight suggest that efforts to improve learning should include and leverage social-relational processes, such as how learners perceive (and relate to) — how they think they fit into — their social contexts. Finally, this research, particularly the last paper, suggests that translating this knowledge into policy may be less about top-down, prescriptive regulations and more about what Stanford psychologist Gregory M. Walton has called “wise interventions” — i.e., small but precise strategies that target recursive processes (more below).
The first paper, by Lucas P. Butler and Gregory M. Walton (2013), describes the results of two experiments testing whether the perceived collaborative nature of an activity that was done individually would cause greater enjoyment of and persistence on that activity among preschoolers.
A few weeks ago, the Minneapolis Star Tribune published teacher evaluation results for the district’s public school teachers in 2013-14. This decision generated a fair amount of controversy, but it’s worth noting that the Tribune, unlike the Los Angeles Times and New York City newspapers a few years ago, did not publish scores for individual teachers, only totals by school.
The data once again provide an opportunity to take a look at how results vary by student characteristics. This was indeed the focus of the Tribune’s story, which included the following headline: “Minneapolis’ worst teachers are in the poorest schools, data show.” These types of conclusions, which simply take the results of new evaluations at face value, have characterized the discussion since the first new systems came online. Though understandable, they are also frustrating and a potential impediment to the policy process. At this early point, “the city’s teachers with the lowest evaluation ratings” is not the same thing as “the city’s worst teachers.” Actually, as discussed in a previous post, the systematic variation in evaluation results by student characteristics, which the Tribune uses to draw conclusions about the distribution of the city’s “worst teachers,” could just as easily be viewed as one of the many ways that one might assess the properties and even the validity of those results.
So, while there are no clear-cut “right” or “wrong” answers here, let’s take a quick look at the data and what they might tell us.
Our guest authors today are Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester, and Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego. Finnigan and Daly recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system. This column is part of The Social Side of Reform Shanker Blog series.
“All the world’s a stage, and all the men and women merely players; they have their exits and their entrances.” William Shakespeare – As You Like It
All too often in districts under intense accountability pressures, exits and entrances happen frequently and repeatedly. One might conceptualize the work of district reform as a play in which actors are beginning to learn their lines and block their places on the stage but, just as the play is underway, some key actors leave and others join, causing disruption to the performance. Now, if all of those who leave or join have smaller roles, the disruption may be less extreme, but if most are lead actors or the director or even the head of costume design, you’d likely have to push back opening night.
** Reprinted here in the Washington Post
Our guest authors today are Carrie R. Leana, George H. Love Professor of Organizations and Management, Professor of Business Administration, Medicine, and Public and International Affairs, and Director of the Center for Health and Care Work, at the University of Pittsburgh, and Frits K. Pil, Professor of Business Administration at the Katz Graduate School of Business and research scientist at the Learning Research and Development Center, at the University of Pittsburgh. This column is part of The Social Side of Reform Shanker Blog series.
Most current models of school reform focus on teacher accountability for student performance measured via standardized tests, “improved” curricula, and what economists label “human capital” – e.g., factors such as teacher experience, subject knowledge and pedagogical skills. But our research over many years in several large school districts suggests that if students are to show real and sustained learning, schools must also foster what sociologists label “social capital” – the value embedded in relations among teachers, and between teachers and school administrators. Social capital is the glue that holds a school together. It complements teacher skill, it enhances teachers’ individual classroom efforts, and it enables collective commitment to bring about school-wide change.
We are professors at a leading Business School who have conducted research in a broad array of settings, ranging from steel mills and auto plants to insurance offices, banks, and even nursing homes. We examine how formal and informal work practices enhance organizational learning and performance. What we have found over and over again is that, regardless of context, organizational success rarely stems from the latest technology or a few exemplary individuals.
The Foundation for Excellence in Education, an organization that advocates for education reform in Florida, in particular the set of policies sometimes called the “Florida Formula,” recently announced a competition to redesign the “appearance, presentation and usability” of the state’s school report cards. Winners of the competition will share prize money totaling $35,000.
The contest seems like a great idea. Improving the manner in which education data are presented is, of course, a laudable goal, and an open competition could potentially attract a diverse group of talented people. As regular readers of this blog know, however, I am not opposed to sensibly-designed test-based accountability policies, but my primary concern about school rating systems is focused mostly on the quality and interpretation of the measures used therein. So, while I support the idea of a competition for improving the design of the report cards, I am hoping that the end result won’t just be a very attractive, clever instrument devoted to the misinterpretation of testing data.
In this spirit, I would like to submit four simple graphs that illustrate, as clearly as possible and using the latest data from 2014, what Florida’s school grades are actually telling us. Since the scoring and measures vary a bit between different types of schools, let’s focus on elementary schools.
Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the second of two posts on research-practice partnerships – read part one here; both posts are part of The Social Side of Reform Shanker Blog series.
In my first post on research-practice partnerships, I highlighted the need for partnerships and pointed to some potential benefits of long-term collaborations between researchers and practitioners. But how do you know when an arrangement between researchers and practitioners is a research-practice partnership? Where can people go to learn about how to form and sustain research-practice partnerships? Who funds this work?
In this post I answer these questions and point to some resources researchers and practitioners can use to develop and sustain partnerships.
In observing all the recent controversy surrounding the Common Core State Standards (CCSS), I have noticed that one of the frequent criticisms from one of the anti-CCSS camps, particularly since the first rounds of results from CCSS-aligned tests have started to be released, is that the standards are going to be used to label more schools as “failing,” and thus ramp up the test-based accountability regime in U.S. public education.
As someone who is very receptive to a sensible, well-designed dose of test-based accountability, but sees so little of it in current policy, I am more than sympathetic to concerns about the proliferation and misuse of high-stakes testing. On the other hand, anti-CCSS arguments that focus on testing or testing results are not really arguments against the standards per se. They also strike me as ironic, as they are based on the same flawed assumptions that critics of high-stakes testing should be opposing.
Standards themselves are about students. They dictate what students should know at different points in their progression through the K-12 system. Testing whether students meet those standards makes sense, but how we use those test results is not dictated by the standards. Nor do standards require us to set bars for “proficient,” “advanced,” etc., using the tests.
Uplifting Leadership, Andrew Hargreaves’ new book with coauthors Alan Boyle and Alma Harris, is based on a seven-year international study, and illustrates how leaders from diverse organizations were able to lift up their teams by harnessing and balancing qualities that we often view as opposites, such as dreaming and action, creativity and discipline, measurement and meaningfulness, and so on.
Chapter three, Collaboration With Competition, was particularly interesting to me and relevant to our series, “The Social Side of Reform.” In that series, we’ve been highlighting research that emphasizes the value of collaboration and considers extreme competition to be counterproductive. But, is that always the case? Can collaboration and competition live under the same roof and, in combination, promote systemic improvement? Could, for example, different types of schools serving (or competing for) the same students work in cooperative ways for the greater good of their communities?
Hargreaves and colleagues believe that establishing this environment is difficult but possible, and that it has already happened in some places. In fact, Al Shanker was one of the first proponents of a model that bears some similarity. In this post, I highlight some ideas and illustrations from Uplifting Leadership and tie them to Shanker’s own vision of how charter schools, conceived as idea incubators and, eventually, as innovations within the public school system, could potentially lift all students and the entire system, from the bottom up, one group of teachers at a time.
In the most simplistic portrayal of the education policy landscape, one of the “sides” is a group of people who are referred to as “reformers.” Though far from monolithic, these people tend to advocate for test-based accountability, charters/choice, overhauling teacher personnel rules, and other related policies, with a particular focus on high expectations, competition and measurement. They also frequently see themselves as in opposition to teachers’ unions.
Most of the “reformers” I have met and spoken with are not quite so easy to categorize. They are also thoughtful and open to dialogue, even when we disagree. And, at least in my experience, there is far more common ground than one might expect.
Nevertheless, I believe that this “movement” (to whatever degree you can characterize it in those terms) may be doomed to stall out in the long run, not because their ideas are all bad, and certainly not because they lack the political skills and resources to get their policies enacted. Rather, they risk failure for a simple reason: They too often make promises that they cannot keep.
Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the first of two posts on research-practice partnerships; both are part of The Social Side of Reform Shanker Blog series.
Policymakers are asking a lot of public school teachers these days, especially when it comes to the shifts in teaching and assessment required to implement new, ambitious standards for student learning. Teachers want and need more time and support to make these shifts. A big question is: What kinds of support and guidance can educational research and researchers provide?
Unfortunately, that question is not easy to answer. Most educational researchers spend much of their time answering questions that are of more interest to other researchers than to practitioners. Even if researchers did focus on questions of interest to practitioners, teachers and teacher leaders need answers more quickly than researchers can provide them. And when researchers and practitioners do try to work together on problems of practice, it takes a while for them to get on the same page about what those problems are and how to solve them. It’s almost as if researchers and practitioners occupy two different cultural worlds.
There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).
The third typical factor that determines teacher salary is their level of education. Usually, teachers receive a permanent raise for acquiring additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts award raises for accumulating a certain number of credits towards a master’s and/or a Ph.D., and for getting a Ph.D.). The raise for receiving a master’s degree varies, but just to give an idea, it is, on average, about 10 percent over the base salary of bachelor’s-only teachers.
This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere.
Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it’s useful to have ASA add their perspective to the debate on this issue, but also because their statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.
Some of these folks claimed that the ASA supported their viewpoint – i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates, and urged caution, but I think that the statement rather explicitly reaches a more nuanced conclusion: That value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but this must be done carefully and appropriately.*
Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures, which are not always so widely acknowledged.
Our guest authors today are Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego, and Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester. Daly and Finnigan have published numerous articles on social network analysis in education and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.
Teacher evaluation is a hotly contested topic, with vigorous debate happening around issues of testing, measurement, and what is considered ‘important’ in terms of student learning, not to mention the potential high stakes decisions that may be made as a result of these assessments. At its best, this discussion has reinvigorated a national dialogue around teaching practice and research; at its worst, it has polarized and entrenched stakeholder groups into rigid camps. How is it we can avoid the calcification of opinion and continue a constructive dialogue around this important and complex issue?
One way, as we suggest here, is to continue to discuss alternatives around teacher evaluation, and to be thoughtful about the role of social interactions in student outcomes, particularly as it relates to the current conversation around value-added models. It is in this spirit that we ask: Is there a ‘social side’ to a teacher’s ability to add value to their students’ growth and, if so, what are the implications for current teacher evaluation models?