One can often hear opponents of value-added referring to these methods as “junk science.” The term is meant to express the argument that value-added is unreliable and/or invalid, and that its scientific “façade” is without merit.
Now, I personally am not opposed to using these estimates in evaluations and other personnel policies, but I certainly understand opponents’ skepticism. For one thing, there are some states and districts in which design and implementation have been somewhat careless, and, in these situations, I very much share the skepticism. Moreover, the common argument that evaluations, in order to be “meaningful,” must include value-added measures in a heavily weighted role (e.g., 45-50 percent) is, in my view, unsupportable.
All that said, calling value-added “junk science” completely obscures the important issues. The real questions here are less about the merits of the models per se than how they’re being used.
One of the more thoughtful voices in education, Larry Cuban, has delivered an interesting brief for the argument that there is no such thing as a “corporate reform movement.” While he acknowledges that America’s corporate elite largely share a view of how to reform America’s schools, focused on the creation of educational marketplaces and business-model schools as the engines of change, Cuban argues that it is a mistake to overstate the homogeneity of perspectives and purposes. The power players of the reform movement have “varied, not uniform motives,” are “drawn from overlapping, but distinct spheres of influence,” and “vary in their aims and strategies.” The use of a term such as “corporate education reform” suggests “far more coherence and concerted action than occurs in the real world of politics and policymaking.”
Cuban’s argument amalgamates two different senses of the term “corporate education reform” – the notion that there is a movement for education reform led by corporate elites and the idea that there is a movement for education reform that seeks to remake public education in the image and likeness of for-profit corporations in a competitive marketplace.
In commingling these two distinct senses of the term, Cuban is adopting a common usage. And it is a usage not entirely without justification: many of the strongest advocates for transforming public schools into educational corporations are found in the corporate elite. But it is vital, I will argue here, that we separate these two conceptions of “corporate education reform” if we are to adequately understand the complexity of the political terrain on which the battles over the future of public education are being fought.
Controversial proposals for new teacher evaluation systems have generated a tremendous amount of misinformation. It has come from both “sides,” ranging from minor misunderstandings to gross inaccuracies. Ostensibly to address some of these misconceptions, the advocacy group Students First (SF) recently released a “myth/fact sheet” on evaluations.
Despite the need for oversimplification inherent in “myth/fact” sheets, the genre can be useful, especially for topics, such as evaluation, about which there is much confusion. When advocacy groups produce them, however, the myths and facts sometimes take the form of “arguments we don’t like versus arguments we do like.”
This SF document falls into that trap. In fact, several of its claims are a little shocking. I would still like to discuss the sheet, not because I enjoy picking apart the work of others (I don’t), but rather because I think elements of both the “myths” and “facts” in this sheet could be recast as “dual myths” in a new sheet. That is, this document helps to illustrate how, in many of our most heated education debates, the polar opposite viewpoints that receive the most attention are often both incorrect, or at least severely overstated, and usually serve to preclude more productive, nuanced discussions.
Let’s take all four of SF’s “myth/fact” combinations in turn.
In a story for Education Week, the always-reliable Stephen Sawchuk reports on what may be a trend in states’ first results from their new teacher evaluation systems: The ratings are skewed toward the top.
For example, the article notes that, in Michigan, Florida and Georgia, a high proportion of teachers (more than 90 percent) received one of the two top ratings (out of four or five). This has led to some grumbling among advocates and others, who cite similarities between these results and those of the old systems, in which the vast majority of teachers were rated “satisfactory,” and very few were found to be “unsatisfactory.”
Differentiation is very important in teacher evaluations – it’s kind of the whole point. Thus, it’s a problem when ratings are too heavily concentrated toward one end of the distribution. However, as Aaron Pallas points out, these important conversations about evaluation results sometimes seem less focused on good measurement or even the spread of teachers across categories than on the narrower question of how many teachers end up with the lowest rating – i.e., how many teachers will be fired.
In a Slate article published last October, Daniel Engber bemoans the frequently shallow use of the classic warning that “correlation does not imply causation.” Mr. Engber argues that the correlation/causation distinction has become so overused in online comments sections and other public fora as to hinder real debate. He also posits that correlation does not mean causation, but “it sure as hell provides a hint,” and can “set us down the path toward thinking through the workings of reality.”
Correlations are extremely useful, in fact essential, for guiding all kinds of inquiry. And Engber is no doubt correct that the argument is overused in public debates, often in lieu of more substantive comments. But let’s also be clear about something – careless causal inferences likely do more damage to the quality and substance of policy debates on any given day than the misuse of the correlation/causation argument does over the course of months or even years.
We see this in education constantly. For example, mayors and superintendents often claim credit for marginal increases in testing results that coincide with their holding office. The causal leaps here are pretty stunning.
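Since the underlying statistical point often gets lost in these debates, here is a minimal simulation sketch (all variable names and numbers are invented for illustration): two outcomes that share a common cause are strongly correlated even though neither affects the other, and the correlation largely vanishes once the confounder is controlled for.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

def residuals(vs, zs):
    """Residuals from a simple one-variable regression of vs on zs."""
    beta = pearson(vs, zs) * statistics.pstdev(vs) / statistics.pstdev(zs)
    mv, mz = statistics.fmean(vs), statistics.fmean(zs)
    return [v - (mv + beta * (z - mz)) for v, z in zip(vs, zs)]

random.seed(0)
n = 5000
# Hypothetical confounder (say, district poverty) drives both variables;
# there is no causal path between x and y themselves.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # e.g., a "policy" variable
y = [zi + random.gauss(0, 1) for zi in z]  # e.g., "test scores"

r_raw = pearson(x, y)  # substantial correlation despite no causation
r_adj = pearson(residuals(x, z), residuals(y, z))  # near zero
```

Under these assumptions the raw correlation comes out around 0.5, while the confounder-adjusted correlation hovers near zero, which is exactly the trap a careless causal inference falls into.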
A recent article in Reuters, one that received a great deal of attention, sheds light on practices that some charter schools are using essentially to screen students who apply for admission. These policies include requiring long and difficult applications, family interviews, parental contracts, and even demonstrations of past academic performance.
It remains unclear how common these practices might be in the grand scheme of things, but regardless of how frequently they occur, most of these tactics are terrible, perhaps even illegal, and should be stopped. At the same time, there are two side points to keep in mind when you hear about charges such as these, as well as the accusations (and denials) of charter exclusion and segregation that tend to follow.
The first is that some degree of (self-)sorting and segregation of students by abilities, interests and other characteristics is part of the deal in a choice-based system. The second point is that screening and segregation are most certainly not unique to charter/private schools, and one primary reason is that there is, in a sense, already a lot of choice among regular public schools.
** Reprinted here in the Washington Post
In a recent post, Kevin Drum of Mother Jones discusses his growing skepticism about the research behind market-based education reform, and about the claims that supporters of these policies make. He cites a recent Los Angeles Times article, which discusses how, in 2000, the San Jose Unified School District in California instituted a so-called “high expectations” policy requiring all students to pass the courses necessary to attend state universities. The reported percentage of students passing these courses increased quickly, causing the district and many others to declare the policy a success. In 2005, Los Angeles Unified, the nation’s second largest district, adopted similar requirements.
For its part, the Times performed its own analysis, and found that the San Jose pass rate was actually no higher in 2011 than in 2000 (in fact, slightly lower for some subgroups), and that the district had overstated its early results by classifying students in a misleading manner. Mr. Drum, reviewing these results, concludes: “It turns out it was all a crock.”
In one sense, that’s true – the district seems to have reported misleading data. On the other hand, neither San Jose Unified’s original evidence (with or without the misclassification) nor the Times analysis is anywhere near sufficient for drawing conclusions – “crock”-based or otherwise – about the effects of this policy. This illustrates the deeper problem here, which is less about one “side” or the other misleading with research than about something much more difficult to address: common misconceptions that impede distinguishing good evidence from bad.
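The mechanics of how a classification choice can inflate a reported pass rate are easy to see with a toy calculation. All numbers below are invented for illustration, not San Jose Unified’s actual figures:

```python
# Illustrative only: a cohort's pass rate under two classification rules.
cohort = 1000   # students in the entering class
passed = 550    # students who completed the required course sequence

rate_all = passed / cohort  # pass rate over the whole cohort: 55%

# Now reclassify: exclude 200 students (say, rerouted to alternative
# programs or counted as transfers), few of whom passed the sequence.
excluded = 200
excluded_passers = 20

rate_reported = (passed - excluded_passers) / (cohort - excluded)
# The reported rate jumps to 66.25% with no change in actual outcomes.
```

The same students, the same outcomes, a very different headline number: this is why the denominator definition matters as much as the trend itself.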
In October of last year, the education advocacy group ConnCAN published a report called “The Roadmap to Closing the Gap” in Connecticut. This report says that the state must close its large achievement gaps by 2020 – that is, within eight years – and it uses data to argue that this goal is “both possible and achievable.”
There is value in compiling data and disaggregating them by district and school. And ConnCAN, to its credit, doesn’t use this analysis as a blatant vehicle to showcase its entire policy agenda, as advocacy organizations often do. But I am compelled to comment on this report, mostly as a springboard to a larger point about expectations.
However, first things first – a couple of very quick points about the analysis. There are 60-70 pages of district-by-district data in this report, all of it portrayed as a “roadmap” to closing Connecticut’s achievement gap. But it doesn’t measure gaps and won’t close them.
** Reprinted here in the Washington Post
In a recent Washington Post article called “Teachers leaning in favor of reforms,” veteran reporter Jay Mathews puts forth an argument that one hears rather frequently – that teachers are “changing their minds,” in a favorable direction, about the current wave of education reform. Among other things, Mr. Mathews cites two teacher surveys. One of them, which we discussed here, is a single-year survey that doesn’t actually look at trends, and therefore cannot tell us much about shifts in teachers’ attitudes over time (it was also a voluntary online survey).
His second source, on the other hand, is in fact a useful means of (cautiously) assessing such trends (though the article doesn’t actually look at them). That is the Education Sector survey of a nationally representative sample of U.S. teachers, which it conducted in 2003, 2007 and, most recently, in 2011.
This is a valuable resource. Like other teacher surveys, it shows that educators’ attitudes toward education policy are diverse. Opinions vary by teacher characteristics, context and, of course, by the policy being queried. Moreover, views among teachers can (and do) change over time, though, when looking at cross-sectional surveys, one must always keep in mind that observed changes (or lack thereof) might be due in part to shifts in the characteristics of the teacher workforce. There’s an important distinction between changing minds and changing workers (which Jay Mathews, to his great credit, discusses in this article).*
That said, when it comes to many of the more controversial reforms happening in the U.S., those about which teachers might be “changing their minds,” the results of this particular survey suggest, if anything, that teachers’ attitudes are actually quite stable.
I’m a big fan of surveys of teachers’ opinions of education policy, not only because of educators’ valuable policy-relevant knowledge, but also because their views are sometimes misrepresented or disregarded in our public discourse.
For instance, the diverse set of ideas that might be loosely characterized as “market-based reform” faces a bit of tension when it comes to teacher support. Without question, some teachers support the more controversial market-based policy ideas, such as pay and evaluations based substantially on test scores, but most do not. The relatively low levels of teacher endorsement don’t necessarily mean these ideas are “bad,” and much of the disagreement is less about the desirability of general policies (e.g., new teacher evaluations) than the specifics (e.g., the measures that comprise those evaluations). In any case, it’s a somewhat awkward juxtaposition: A focus on “respecting and elevating the teaching profession” by means of policies that most teachers do not like.
Sometimes (albeit too infrequently) this tension is discussed meaningfully, other times it is obscured – e.g., by attempts to portray teachers’ disagreement as “union opposition.” But, as mentioned above, teachers are not a monolith and their opinions can and do change (see here). This is, in my view, a situation always worth monitoring, so I thought I’d take a look at a recent report from the organization Teach Plus, which presents data from a survey that they collected themselves.
For many years, national survey and polling data have shown that Americans tend to like their own local schools, but are considerably less sanguine about the nation’s education system as a whole. This somewhat paradoxical finding – in which most people seem to think the problem is with “other people’s schools” – is difficult to interpret, especially since it seems to vary a bit when people are given basic information about schools, such as funding levels.
In any case, I couldn’t resist taking a very quick, superficial look at how people’s views of education vary by important characteristics, such as age and education. I used the General Social Survey (pooled 2006-2010), which queries respondents about their confidence in education, asking them to specify whether they have “hardly any,” “only some” or “a great deal” of confidence in the system.*
This question doesn’t differentiate explicitly between respondents’ local schools and the system as a whole, and respondents may consider different factors when assessing their confidence, but I think it’s a decent measure of their disposition toward the education system.
** Reprinted here in the Washington Post
I’ve written many times about how absolute performance levels – how highly students score – are not by themselves valid indicators of school quality, since, most basically, they don’t account for the fact that students enter the schooling system at different levels. One of the most blatant (and common) manifestations of this mistake is when people use NAEP results to determine the quality of a state’s schools.
For instance, you’ll often hear that Massachusetts has the “best” schools in the U.S. and Mississippi the “worst,” with both claims based solely on average scores on the NAEP (though, technically, Massachusetts public school students’ scores are statistically tied with at least one other state on two of the four main NAEP exams, while Mississippi’s rankings vary a bit by grade/subject, and its scores are also not statistically different from several other states’).
But we all know that these two states are very different in terms of basic characteristics such as income, parental education, etc. Any assessment of educational quality, whether at the state or local level, is necessarily complicated, and ignoring differences between students precludes any meaningful comparisons of school effectiveness. Schooling quality is important, but it cannot be assessed by sorting and ranking raw test scores in a spreadsheet.
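To make the point concrete, here is a minimal sketch in which a ranking by raw scores reverses once scores are adjusted for a single demographic covariate. The states, scores, poverty shares, and the assumed score-poverty relationship are all hypothetical, invented purely for illustration:

```python
# Hypothetical states: average test score and share of students in poverty.
states = {
    "State A": {"score": 290, "poverty": 0.10},
    "State B": {"score": 275, "poverty": 0.35},
    "State C": {"score": 268, "poverty": 0.50},
}

# Assumed linear relationship between poverty share and expected score
# (intercept and slope are illustrative, not estimated from real data).
intercept, slope = 300, -80

# Adjusted score = actual score minus the score predicted from poverty alone.
for s in states.values():
    s["adjusted"] = s["score"] - (intercept + slope * s["poverty"])

rank_raw = sorted(states, key=lambda n: -states[n]["score"])
rank_adj = sorted(states, key=lambda n: -states[n]["adjusted"])
# rank_raw puts State A first; rank_adj puts State C first, because C
# scores well above what its student population would predict.
```

Under these invented numbers the "best" state by raw scores is the "worst" after adjustment, which is precisely why sorting raw averages tells us little about school effectiveness.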
Every year, around this time, the College Board publicizes its SAT results, and hundreds of newspapers, blogs, and television stations run stories suggesting that trends in the aggregate scores are, by themselves, a meaningful indicator of U.S. school quality. They’re not.
Everyone knows that the vast majority of the students who take the SAT in a given year didn’t take the test the previous year – i.e., the data are cross-sectional. Everyone also knows that participation is voluntary (as is participation in the ACT test), and that the number of students taking the test has been increasing for many years and current test-takers have different measurable characteristics from their predecessors. That means we cannot use the raw results to draw strong conclusions about changes in the performance of the typical student, and certainly not about the effectiveness of schools, whether nationally or in a given state or district. This is common sense.
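The arithmetic behind this composition effect is worth spelling out. In the sketch below (all participation counts and scores are invented), every subgroup’s average rises between two years, yet the overall average falls, simply because a lower-scoring group grows as a share of all test-takers:

```python
# Invented numbers, for illustration only.
# Each group: (number of test-takers, average score)
year1 = {"traditional": (800, 1100), "new_takers": (200, 950)}
year2 = {"traditional": (800, 1110), "new_takers": (500, 960)}

def overall(groups):
    """Participation-weighted average score across groups."""
    n = sum(count for count, _ in groups.values())
    return sum(count * mean for count, mean in groups.values()) / n

avg1 = overall(year1)
avg2 = overall(year2)
# Both groups improved by 10 points, yet avg2 < avg1, because the
# lower-scoring group grew from 20% to ~38% of all test-takers.
```

This is why a dip (or rise) in aggregate SAT averages, taken alone, says essentially nothing about whether students or schools are doing better or worse.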
Unfortunately, the College Board plays a role in stoking the apparent confusion – or, at least, it could do much more to prevent it. Consider the headline of this year’s press release:
You don’t have to look very far to find very strong opinions about Race to the Top (RTTT), the U.S. Department of Education’s (USED) stimulus-funded state-level grant program (which has recently been joined by a district-level spinoff). There are those who think it is a smashing success, while others assert that it is a dismal failure. The truth, of course, is that these claims, particularly the extreme views on either side, are little more than speculation.*
To win the grants, states were strongly encouraged to make several different types of changes, such as adoption of new standards, the lifting/raising of charter school caps, the installation of new data systems and the implementation of brand new teacher evaluations. This means that any real evaluation of the program’s impact will take some years and will have to be multifaceted – that is, it is certain that the implementation/effects will vary not only by each of these components, but also between states.
In other words, the success or failure of RTTT is an empirical question, one that is still almost entirely open. But there is a silver lining here: USED is at least asking that question, in the form of a five-year, $19 million evaluation program, administered through the National Center for Education Evaluation and Regional Assistance, designed to assess the impact and implementation of various RTTT-fueled policy changes, as well as those of the controversial School Improvement Grants (SIGs).
One claim that gets tossed around a lot in education circles is that “the most effective teachers produce a year and a half of learning per year, while the least effective produce a half of a year of learning.”
This talking point is used all the time in advocacy materials and news articles. Its implications are pretty clear: Effective teachers can make all the difference, while ineffective teachers can do permanent damage.
As with most prepackaged talking points circulated in education debates, the “year and a half of learning” argument, when used without qualification, is both somewhat valid and somewhat misleading. So, seeing as it comes up so often, let’s very quickly identify its origins and what it means.
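For readers who want the arithmetic, the slogan’s structure can be reproduced with a back-of-the-envelope calculation. The specific values below are illustrative assumptions of mine, not estimates from any particular study; the point is only to show how an effect size in test-score standard deviations gets translated into “years of learning”:

```python
# Illustrative assumptions (not from any specific study):
annual_gain_sd = 0.40     # average one-year gain, in test SD units
teacher_effect_sd = 0.20  # spread of teacher effects on that annual gain

def years_of_learning(effect_sd):
    """Convert a teacher effect (in score SDs) into 'years of learning',
    relative to the assumed average annual gain."""
    return (annual_gain_sd + effect_sd) / annual_gain_sd

top = years_of_learning(+teacher_effect_sd)     # roughly 1.5 "years"
bottom = years_of_learning(-teacher_effect_sd)  # roughly 0.5 "years"
```

Notice how sensitive the headline numbers are to the two assumed quantities: change either the assumed annual gain or the spread of teacher effects and the “year and a half” becomes something else entirely, which is one reason the talking point needs qualification.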