The District of Columbia Public Charter School Board (PCSB) recently released the 2014 results of its “Performance Management Framework” (PMF), the rating system that the PCSB uses for its schools.
Very quick background: This system sorts schools into one of three “tiers,” with Tier 1 being the highest-performing, as measured by the system, and Tier 3 being the lowest. The ratings are based on a weighted combination of four types of factors — progress, achievement, gateway, and leading — which are described in detail in the first footnote.* As discussed in a previous post, the PCSB system, in my opinion, is better than many others out there, since growth measures play a fairly prominent role in the ratings, and, as a result, the final scores are only moderately correlated with key student characteristics such as subsidized lunch eligibility.** In addition, the PCSB is quite diligent about making the PMF results accessible to parents and other stakeholders, and, for the record, I have found the staff very open to sharing data and answering questions.
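To make the mechanics concrete, here is a minimal sketch of how a weighted composite rating of this general kind works. The factor weights and tier cutoffs below are invented for illustration; they are not the PCSB's actual PMF values.

```python
# Toy sketch of a weighted composite rating system.
# WEIGHTS and the tier cutoffs are hypothetical, NOT the PCSB's actual values.

WEIGHTS = {"progress": 0.40, "achievement": 0.25, "gateway": 0.20, "leading": 0.15}

def pmf_score(factors):
    """Combine factor scores (each on a 0-100 scale) into a weighted composite."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

def tier(score, cutoffs=(65.0, 35.0)):
    """Map a composite score to a tier (1 is highest, 3 is lowest)."""
    if score >= cutoffs[0]:
        return 1
    if score >= cutoffs[1]:
        return 2
    return 3

school = {"progress": 80, "achievement": 60, "gateway": 70, "leading": 50}
composite = pmf_score(school)  # 0.40*80 + 0.25*60 + 0.20*70 + 0.15*50 = 68.5
print(composite, tier(composite))
```

The point of the sketch is simply that the final tier is a deterministic function of the weighted factors, so the weight assigned to growth ("progress") directly shapes how strongly the ratings correlate with student characteristics.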
That said, PCSB’s big message this year was that schools’ ratings are improving over time, and that, as a result, a substantially larger proportion of DC charter students are attending top-rated schools. This was reported uncritically by several media outlets, including this story in the Washington Post. It is also based on a somewhat questionable use of the data. Let’s take a very simple look at the PMF dataset, first to examine this claim and then, more importantly, to see what we can learn about the PMF and DC charter schools in 2013 and 2014.
Our “social side of education reform” series has emphasized that teaching is a cooperative endeavor, and as such is deeply influenced by the quality of a school’s social environment — i.e., trusting relationships, teamwork and cooperation. But what about learning? To what extent are dispositions such as motivation, persistence and engagement mediated by relationships and the social-relational context?
This is, of course, a very complex question, which can’t be addressed comprehensively here. But I would like to discuss three papers that provide some important answers. In terms of our “social side” theme, the studies I will highlight suggest that efforts to improve learning should include and leverage social-relational processes, such as how learners perceive (and relate to) — how they think they fit into — their social contexts. Finally, this research, particularly the last paper, suggests that translating this knowledge into policy may be less about top-down, prescriptive regulations and more about what Stanford psychologist Gregory M. Walton has called “wise interventions” — i.e., small but precise strategies that target recursive processes (more below).
The first paper, by Lucas P. Butler and Gregory M. Walton (2013), describes the results of two experiments testing whether the perceived collaborative nature of an activity that was done individually would cause greater enjoyment of and persistence on that activity among preschoolers.
So-called achievement gaps – the differences in average test performance among student subgroups, usually defined in terms of ethnicity or income – are important measures. They demonstrate persistent inequality of educational outcomes and economic opportunities between different members of our society.
So long as these gaps remain, it means that historically lower-performing subgroups (e.g., low-income students or ethnic minorities) are less likely to gain access to higher education, good jobs, and political voice. We should monitor these gaps; try to identify all the factors that affect them, for good and for ill; and endeavor to narrow them using every appropriate policy lever – both inside and outside of the educational system.
Achievement gaps have also, however, taken on a very different role over the past 10 or so years. The sizes of gaps, and the extent of “gap closing,” are routinely used by reporters and advocates to judge the performance of schools, school districts, and states. In addition, gaps and gap trends are employed directly in formal accountability systems (e.g., states’ school grading systems), in which they are conceptualized as performance measures.
Although simple measures of the magnitude of or changes in achievement gaps are potentially very useful in several different contexts, they are poor gauges of school performance, and shouldn’t be the basis for high-stakes rewards and punishments in any accountability system.
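For concreteness, a gap is nothing more than a difference in subgroup means, which is part of why it behaves badly as a performance measure. The scores below are invented, but they show one pitfall: a gap can widen even when every student in both subgroups improves.

```python
# Minimal sketch: an achievement gap is a difference in subgroup means.
# All scores below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def achievement_gap(group_a, group_b):
    """Gap = mean score of group A minus mean score of group B."""
    return mean(group_a) - mean(group_b)

# Year 1: a 10-point gap between two subgroups
year1 = achievement_gap([70, 75, 80], [60, 65, 70])

# Year 2: every student's score rises, but group A rises faster,
# so the gap widens even though both groups improved.
year2 = achievement_gap([78, 83, 88], [63, 68, 73])

print(year1, year2)
```

A school whose students all gained ground would be penalized by a gap-based metric here, which is one illustration of why gap trends make poor high-stakes performance measures.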
A few weeks ago, the Minneapolis Star Tribune published teacher evaluation results for the district’s public school teachers in 2013-14. This decision generated a fair amount of controversy, but it’s worth noting that the Tribune, unlike the Los Angeles Times and New York City newspapers a few years ago, did not publish scores for individual teachers, only totals by school.
The data once again provide an opportunity to take a look at how results vary by student characteristics. This was indeed the focus of the Tribune’s story, which included the following headline: “Minneapolis’ worst teachers are in the poorest schools, data show.” These types of conclusions, which simply take the results of new evaluations at face value, have characterized the discussion since the first new systems came online. Though understandable, they are also frustrating and a potential impediment to the policy process. At this early point, “the city’s teachers with the lowest evaluation ratings” is not the same thing as “the city’s worst teachers.” Actually, as discussed in a previous post, the systematic variation in evaluation results by student characteristics, which the Tribune uses to draw conclusions about the distribution of the city’s “worst teachers,” could just as easily be viewed as one of the many ways that one might assess the properties and even the validity of those results.
So, while there are no clear-cut “right” or “wrong” answers here, let’s take a quick look at the data and what they might tell us.
Our guest author today is Connie Williams, a National Board Certified Teacher librarian at Petaluma High School in Petaluma, CA, past president of the California School Library Association, and co-developer of the librarian and teacher 2.0 classroom tutorials.
Down the road from where I live, on the first of the month, a group of vintage car owners gathers for a “cars and coffee” meet up. The cars that show up with their drivers cover many years and obsessions. Drivers park, open up the car hoods, take a few steps back, and begin talking with other car owners and visitors who happen by. These are people who are interested in the way cars work and in their history, and they all have stories to share.
How do they know so much about their cars? They work on them – gaining insight by hands-on practice and consultations with experts. If they’re wealthy enough, they pay someone else to do the work, yet they don’t just hand over their cars to them. They read about them, participate in on-line groups, ask for guidance, and they drive them. Most often, when they drive them, someone stops and asks questions about their cars and they teach what they know to others.
This is an example of the kind of learning we would hope for, for all our students – a passion that is ignited and turns into knowledge that is grown, developed, and shared. In this sense, it is inquiry – asking questions and taking the required steps to answer them – that is at the heart of learning.
The State of Florida is currently engaged in a policy tussle of sorts with the U.S. Department of Education (USED) over Florida’s accountability system. To make a long story short, last spring, Florida passed a law saying that the test scores of English language learners (ELLs) would only count toward schools’ accountability grades (and teacher evaluations) once the ELL students had been in the system for at least two years. This runs up against federal law, which requires that ELLs’ scores be counted after only one year, and USED has indicated that it’s not willing to budge on this requirement. In response, Florida is considering legal action.
This conflict might seem incredibly inane (unless you’re in one of the affected schools, of course). Beneath the surface, though, this is actually kind of an amazing story.
Put simply, Florida’s argument against USED’s policy of counting ELL scores after just one year is a perfect example of why most of the state’s core accountability measures (not to mention those of NCLB as a whole) are so inappropriate: they judge schools’ performance largely on where their students’ scores end up, without paying any attention to where they start out.
A new Mathematica report examines the test-based impact of The Equity Project (TEP), a New York City charter school serving grades 5-8. TEP opened up for the 2009-10 school year, receiving national attention mostly due to one unusual policy: They paid teachers $125,000 per year, regardless of experience and education, in addition to annual bonuses (up to $25,000) for returning teachers. TEP largely makes up for these unusually high salary costs by minimizing the number of administrators and maintaining larger class sizes.
As is typical of Mathematica, the TEP analysis is thorough and well-done. The school’s students’ performance is compared to that of similar peers with a comparable probability of enrolling in TEP, as identified with propensity scores. In general, the study’s results were quite positive. Although there were statistically discernible negative impacts of attendance for TEP’s first cohort of students during their first two years, the cumulative estimated test-based impact was significant, positive and educationally meaningful after three and four years of attendance. As always, the estimated effect was stronger in math than in reading (estimated effect sizes for the former were very large in magnitude). The Mathematica researchers also present analyses on student attrition, which did not appear to bias the estimates substantially, and they also show that their primary results are robust when using alternative specifications (e.g., different matching techniques, score transformations, etc.).
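The matching idea behind such a comparison can be sketched in toy form. Everything below is invented: the logistic coefficients, the covariates, and the student records are placeholders, not the study's actual specification, which estimates propensities from observed data.

```python
import math

# Toy sketch of propensity-score matching. The coefficients and student
# records are invented; a real study estimates the propensity model from
# observed covariates rather than assuming it.

def propensity(prior_score, low_income):
    """Modeled probability of enrolling in the treatment school
    (coefficients are assumed for illustration)."""
    logit = -0.5 - 0.02 * prior_score + 0.8 * low_income
    return 1.0 / (1.0 + math.exp(-logit))

def match(treated, pool):
    """Pair each treated student with the comparison student whose
    propensity score is closest (nearest-neighbor matching)."""
    pairs = []
    for t in treated:
        best = min(pool, key=lambda c: abs(propensity(*c) - propensity(*t)))
        pairs.append((t, best))
    return pairs

# Each record: (prior test score, low-income indicator)
tep_students = [(60, 1), (55, 1)]
comparison_pool = [(90, 0), (58, 1), (61, 1), (40, 0)]
for t, c in match(tep_students, comparison_pool):
    print(t, "matched to", c)
```

The subsequent impact estimate is then a comparison of outcomes between each treated student and their matched peer, which is the logic (in vastly simplified form) behind comparing TEP students to "similar peers with a comparable probability of enrolling."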
Now we get to the tricky questions about these results: What caused them and what can be learned as a result? That’s the big issue with charter analyses in general (and with research on many other interventions): One can almost never separate the “why” from the “what” with any degree of confidence. And TEP, with its “flagship policy” of high teacher salaries, which might appeal to all “sides” in the education policy debate, provides an interesting example in this respect.
Our guest authors today are Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester, and Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego. Finnigan and Daly recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system. This column is part of The Social Side of Reform Shanker Blog series.
“All the world’s a stage, and all the men and women merely players; they have their exits and their entrances.” William Shakespeare – As You Like It
All too often in districts under intense accountability pressures, exits and entrances happen frequently and repeatedly. One might conceptualize the work of district reform as a play in which actors are beginning to learn their lines and block their places on the stage but, just as the play is underway, some key actors leave and others join, causing disruption to the performance. Now, if all of those who leave or join have smaller roles, the disruption may be less extreme, but if most are lead actors or the director or even the head of costume design, you’d likely have to push back opening night.
In this “Where We Stand” column, which was printed in the New York Times on March 27, 1983, Al Shanker quotes historian Paul Gagnon to argue that we need to think long-term about the purposes of public schooling and agree on a carefully chosen set of education reform priorities. Failing this, they warn, the U.S. will forever be caught in a churn of futile, quick-fix reform initiatives.
It never fails. Whenever there’s an educational problem, there’s always an attempt to solve it with a quick fix. The current problem – the shortage of science and math teachers – is no exception. A quick fix just won’t work. Of course, there are a few things that can be done to ease the problem. The most promising short-run idea is to encourage teachers already teaching in other fields but who have a good background in math and science to switch.
But we won’t solve the problem until we know why we have one. It is not just that private industry pays more. It’s that there aren’t enough students graduating from college in these fields to satisfy the needs of business and the teaching profession. Most students stay away from math and science in college because they didn’t get enough of a background in high school. Why? Because math and science courses are more difficult than many electives, and most high school students, given a choice between tough courses and easy ones, choose the latter. And it doesn’t start there. It goes back to elementary school, and not just with respect to math and science but with the ability to read problems and think them through … willingness to discipline oneself, to work long and hard.
The College Board recently released the latest SAT results, for the first time combining this release with that of data from the PSAT and AP exams. The release of these data generated the usual stream of news coverage, much of which misinterpreted the year-to-year changes in SAT scores as a lack of improvement, even though the data are cross-sectional and the test-taking sample has been changing, and/or misinterpreted the percent of test takers who scored above the “college ready” line as a national measure of college readiness, even though the tests are not administered to a representative sample of students.
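The compositional problem can be made concrete with a toy example (all numbers invented): when the test-taking pool expands to include more students from a historically lower-scoring subgroup, the overall average can fall even though every subgroup's average rises.

```python
# Toy illustration (all numbers invented): a changing test-taking sample
# can pull the overall average down even as every subgroup improves.

def overall_mean(subgroups):
    """subgroups: list of (number of test takers, mean score) tuples."""
    total = sum(n * m for n, m in subgroups)
    count = sum(n for n, _ in subgroups)
    return total / count

# Year 1: the pool is dominated by the traditionally higher-scoring group
year1 = overall_mean([(900, 520), (100, 420)])

# Year 2: both subgroup means rise (520 -> 525, 420 -> 430), but many
# more students from the second subgroup now take the test.
year2 = overall_mean([(900, 525), (500, 430)])

print(year1, year2)  # the overall average falls despite subgroup gains
```

This is why "no progress in SAT scores" and "more, different students take SAT" are contradictory headlines: the second largely explains away the first.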
It is disheartening to watch this annual exercise, in which the most common “take home” headlines (e.g., “no progress in SAT scores” and “more, different students take SAT”) are in many important respects contradictory. In past years, much of the blame had to be placed on the College Board’s presentation of the data. This year, to their credit, the roll-out is substantially better (hopefully, this will continue).
But I don’t want to focus on this aspect of the organization’s activities (see this post for more); instead, I would like to discuss briefly the College Board’s recent change in mission.
** Reprinted here in the Washington Post
Our guest authors today are Carrie R. Leana, George H. Love Professor of Organizations and Management, Professor of Business Administration, Medicine, and Public and International Affairs, and Director of the Center for Health and Care Work, at the University of Pittsburgh, and Frits K. Pil, Professor of Business Administration at the Katz Graduate School of Business and research scientist at the Learning Research and Development Center, at the University of Pittsburgh. This column is part of The Social Side of Reform Shanker Blog series.
Most current models of school reform focus on teacher accountability for student performance measured via standardized tests, “improved” curricula, and what economists label “human capital” – e.g., factors such as teacher experience, subject knowledge and pedagogical skills. But our research over many years in several large school districts suggests that if students are to show real and sustained learning, schools must also foster what sociologists label “social capital” – the value embedded in relations among teachers, and between teachers and school administrators. Social capital is the glue that holds a school together. It complements teacher skill, it enhances teachers’ individual classroom efforts, and it enables collective commitment to bring about school-wide change.
We are professors at a leading business school who have conducted research in a broad array of settings, ranging from steel mills and auto plants to insurance offices, banks, and even nursing homes. We examine how formal and informal work practices enhance organizational learning and performance. What we have found over and over again is that, regardless of context, organizational success rarely stems from the latest technology or a few exemplary individuals.
The Foundation for Excellence in Education, an organization that advocates for education reform in Florida, in particular the set of policies sometimes called the “Florida Formula,” recently announced a competition to redesign the “appearance, presentation and usability” of the state’s school report cards. Winners of the competition will share prize money totaling $35,000.
The contest seems like a great idea. Improving the manner in which education data are presented is, of course, a laudable goal, and an open competition could potentially attract a diverse group of talented people. As regular readers of this blog know, I am not opposed to sensibly designed test-based accountability policies, but my primary concern about school rating systems is with the quality and interpretation of the measures used therein. So, while I support the idea of a competition for improving the design of the report cards, I am hoping that the end result won’t just be a very attractive, clever instrument devoted to the misinterpretation of testing data.
In this spirit, I would like to submit four simple graphs that illustrate, as clearly as possible and using the latest data from 2014, what Florida’s school grades are actually telling us. Since the scoring and measures vary a bit between different types of schools, let’s focus on elementary schools.
The following is written by Kinga Wysieńska-Di Carlo and Matthew Di Carlo. Wysieńska-Di Carlo is an Assistant Professor of Sociology in the Institute of Philosophy and Sociology at the Polish Academy of Sciences.
Economic returns to education — that is, the value of investment in education, principally in terms of better jobs, earnings, etc. — rightly receive a great deal of attention in the U.S., as well as in other nations. But it is also useful to examine what people believe about the value and importance of education, as these perceptions influence, among other outcomes, individuals’ decisions to pursue additional schooling.
When it comes to beliefs regarding whether education and other factors contribute to success, economic or otherwise, Poland is a particularly interesting nation. Poland underwent a dramatic economic transformation during and after the collapse of Communism (you can read about Al Shanker’s role here). An aggressive program of reform, sometimes described as “shock therapy,” dismantled the planned socialist economy and built a market economy in its place. Needless to say, actual conditions in a nation can influence and reflect attitudes about those conditions (see, for example, Kunovich and Słomczyński 2007 for a cross-national analysis of pro-meritocratic beliefs).
This transition in Poland fundamentally reshaped the relationships between education, employment and material success. In addition, it is likely to have influenced Poles’ perception of these dynamics. Let’s take a look at Polish survey data since the transformation, focusing first on Poles’ perceptions of the importance of education for one’s success.
Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the second of two posts on research-practice partnerships – read part one here; both posts are part of The Social Side of Reform Shanker Blog series.
In my first post on research-practice partnerships, I highlighted the need for partnerships and pointed to some potential benefits of long-term collaborations between researchers and practitioners. But how do you know when an arrangement between researchers and practitioners is a research-practice partnership? Where can people go to learn about how to form and sustain research-practice partnerships? Who funds this work?
In this post I answer these questions and point to some resources researchers and practitioners can use to develop and sustain partnerships.
There’s no reason why insisting on proper causal inference can’t be fun.
A few weeks ago, ASCD published a policy brief (thanks to Chad Aldeman for flagging it), the purpose of which is to argue that it is “grossly misleading” to make a “direct connection” between nations’ test scores and their economic strength.
On the one hand, it’s implausible to assert that better educated nations aren’t stronger economically. On the other hand, I can certainly respect the argument that test scores are an imperfect, incomplete measure, and the doomsday rhetoric can sometimes get out of control.
In any case, though, the primary piece of evidence put forth in the brief was the eye-catching graph below, which presented trends in NAEP versus those in U.S. GDP and productivity.