Lost In Citation

Posted on July 31, 2014

The so-called Vergara trial in California, in which the state’s tenure and layoff statutes were deemed unconstitutional, already has its first “spin-off,” this time in New York, where a newly formed organization, the Partnership for Educational Justice (PEJ), is among the groups spearheading the effort.

Upon first visiting PEJ’s new website, I was immediately (and predictably) drawn to the “Research” tab. It contains five statements (which, I guess, PEJ would characterize as “facts”). Each argument is presented in the most accessible form possible, typically accompanied by one citation (or two at most). I assume that the presentation of evidence in the actual trial will be a lot more thorough than that offered on this webpage, which seems geared toward the public rather than the more extensive evidentiary requirements of the courtroom (also see Bruce Baker’s comments on many of these same issues surrounding the New York situation).

That said, I thought it might be useful to review the basic arguments and evidence PEJ presents, not in the context of whether they will “work” in the lawsuit (a judgment I am unqualified to make), but rather because they’re very common. It has also been my observation that advocates on both “sides” of the education debate tend to be fairly good at using data and research to describe problems and situations, yet sometimes fall short when it comes to evidence-based discussions of what to do about them (including the essential task of acknowledging when the evidence is still undeveloped). PEJ’s five bullet points, discussed below, are pretty good examples of what I mean. Read More »


Do Students Learn More When Their Teachers Work Together?

Posted on July 17, 2014

This is the second post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.

Debates about how to improve educational outcomes for students often involve two ‘camps’: those who focus on the impact of “in-school factors” on student achievement, and those who focus on “out-of-school factors.” Many in-school factors are discussed, but improving the quality of individual teachers (or teachers’ human capital) is almost always touted as the main strategy for school improvement. Out-of-school factors are also numerous, but proponents of this view tend toward addressing broad systemic problems such as poverty and inequality.

Social capital — the idea that relationships have value, that social ties provide access to important resources like knowledge and support, and that a group’s performance can often exceed that of the sum of its members — is something that rarely makes it into the conversation. But why does social capital matter?

Research suggests that teachers’ social capital may be just as important to student learning as their human capital. In fact, some studies indicate that if school improvement policies addressed teachers’ human and social capital simultaneously, they would go a long way toward mitigating the effects of poverty on student outcomes. Sounds good, right? The problem is that current policy does not resemble this approach. Researchers, commentators and practitioners have shown, and lamented, that many of the strategies leveraged to increase teachers’ human capital often do so at the expense of social capital in our schools. In other words, these approaches are moving us one step forward and two steps back. Read More »


The Importance Of Relationships In Educational Reform

Posted on July 7, 2014

* Reprinted here in the Washington Post

This is the first post in a series on “The Social Side Of Reform”, exploring the idea that relationships, social capital, and social networks matter in lasting, systemic educational improvement. For more on this series, click here.

Our guest authors today are Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester, and Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego. Finnigan and Daly have published numerous articles on social network analysis in education in academic and practitioner journals, and recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014), which explores the use and diffusion of different types of evidence across levels of the educational system.

There are many reforms out there; what if these ideas are not working as well as they could because educators are simply not communicating or building meaningful relationships with one another, or because the conditions in which they work do not support productive interactions?  These are important issues to understand, and our research, some of which we highlight in this post, underscores the importance of the relational element in reform.  To further explore the social side of the change equation, we draw on social network research to highlight the importance of relationships as conduits through which valued resources flow and can bring about system-wide change.

A few years ago, Arne Duncan noted that “[NCLB] has created a thousand ways for schools to fail and very few ways to help them succeed.”  We think that may have to do with the overreliance on technical fixes, prescriptive approaches, and the scant attention paid to the context — particularly the social context — in which reforms are implemented.  But what would things look like if we took a more relational approach to educational improvement? Read More »


Teachers And Education Reform, On A Need To Know Basis

Posted on July 1, 2014

A couple of weeks ago, the website Vox.com published an article entitled, “11 facts about U.S. teachers and schools that put the education reform debate in context.” The article, in the wake of the Vergara decision, is supposed to provide readers with the “basic facts” about the current education reform environment, with a particular emphasis on teachers. Most of the 11 facts are based on descriptive statistics.

Vox advertises itself as a source of accessible, essential, summary information — what you “need to know” — for people interested in a topic but not necessarily well-versed in it. Right off the bat, let me say that this is an extraordinarily difficult task, and in constructing lists such as this one, there’s no way to please everyone (I’ve read a couple of Vox’s education articles and they were okay).

That said, someone sent me this particular list, and it’s pretty good overall, especially since it does not reflect overt advocacy for given policy positions, as so many of these types of lists do. But I was compelled to comment on it. I want to say that I did this to make some lofty point about the strengths and weaknesses of data and statistics packaged for consumption by the general public. It would, however, be more accurate to say that I started doing it and just couldn’t stop. In any case, here’s a little supplemental discussion of each of the 11 items: Read More »


A Few More Points About Charter Schools And Extended Time

Posted on June 9, 2014

A few weeks ago, I wrote a post that made a fairly simple point about the practice of expressing estimated charter effects on test scores as “days of additional learning”: Among the handful of states, districts, and multi-site operators that consistently have been shown to have a positive effect on testing outcomes, might not those “days of learning” be explained, at least in part, by the fact that they actually do offer additional days of learning, in the form of much longer school days and years?

That is, there is a small group of charter models/chains that seem to get good results. There are many intangible factors that make a school effective, but to the degree we can chalk this up to concrete practices or policies, additional time may be the most compelling possibility. Although it’s true that school time must be used wisely, it’s difficult to believe that the sheer amount of extra time that the flagship chains offer would not improve testing performance substantially.

To their credit, many charter advocates do acknowledge the potentially crucial role of extended time in explaining their success stories. And the research, tentative though it still is, is rather promising. Nevertheless, there are a few important points that bear repeating when it comes to the idea of massive amounts of additional time, particularly given the fact that there is a push to get regular public schools to adopt the practice. Read More »


The Proportionality Principle In Teacher Evaluations

Posted on May 27, 2014

Our guest author today is Cory Koedel, Assistant Professor of Economics at the University of Missouri.

In a 2012 post on this blog, Dr. Di Carlo reviewed an article that I coauthored with colleagues Mark Ehlert, Eric Parsons and Michael Podgursky. The initial article (full version here, or for a shorter, less-technical version, see here) argues for the policy value of growth models that are designed to force comparisons to be made between schools and teachers in observationally similar circumstances.

The discussion is couched within the context of achieving three key policy objectives that we associate with the adoption of more-rigorous educational evaluation systems: (1) improving system-wide instruction by providing useful performance signals to schools and teachers; (2) eliciting optimal effort from school personnel; and (3) ensuring that current labor-market inequities between advantaged and disadvantaged schools are not exacerbated by the introduction of the new systems.

We argue that a model that forces comparisons to be between equally-circumstanced schools and teachers – which we describe as a “proportional” model – is best-suited to achieve these policy objectives. The conceptual appeal of the proportional approach is that it fully levels the playing field between high- and low-poverty schools. In contrast, some other growth models have been shown to produce estimates that are consistently associated with the characteristics of students being served (e.g., Student Growth Percentiles). Read More »
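The intuition behind the proportional approach can be seen in a toy simulation. This is my own sketch, not the authors’ actual model; all names and magnitudes are invented. In this toy world, school poverty depresses measured growth, so a raw growth average is correlated with poverty, while an estimate that compares each school only to schools in the same poverty decile is not:

```python
import numpy as np

rng = np.random.default_rng(42)
n_schools, n_per = 500, 20

# hypothetical setup: each school has a poverty rate that (in this toy
# world) depresses measured student growth, plus a true school effect
poverty = rng.uniform(0, 1, n_schools)
true_effect = rng.normal(0, 1, n_schools)
growth = (true_effect[:, None] - 2.0 * poverty[:, None]
          + rng.normal(0, 3, (n_schools, n_per)))

# "uncontrolled" estimate: raw mean growth per school
raw_va = growth.mean(axis=1)

# "proportional"-style estimate: compare each school only to schools in
# the same poverty decile, by subtracting that decile's mean growth
decile = np.minimum((poverty * 10).astype(int), 9)
prop_va = raw_va.copy()
for d in range(10):
    mask = decile == d
    prop_va[mask] -= raw_va[mask].mean()

print("corr(raw estimate, poverty):         ",
      round(np.corrcoef(raw_va, poverty)[0, 1], 2))
print("corr(proportional estimate, poverty):",
      round(np.corrcoef(prop_va, poverty)[0, 1], 2))
```

The raw estimate is strongly (and spuriously) correlated with poverty, while the within-decile comparison is essentially uncorrelated with it, which is the sense in which a proportional model “levels the playing field.”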


We Can’t Just Raise Expectations

Posted on April 30, 2014

* Reprinted here in the Washington Post

What exactly is “a culture of high expectations” and how is it created? In fact, what are expectations? I ask these questions because I hear this catchphrase a lot, but it doesn’t seem like the real barriers to developing such a culture are well understood. If we are serious about raising expectations for all learners, we need to think seriously about what expectations are, how they work and what it might take to create environments that equalize high expectations for what students can achieve.

In this post I explain why I think the idea of “raising expectations” — when used carelessly and as a slogan — is meaningless. Expectations are not test-scores. They are related to standards but are not the same thing. Expectations are a complex and unobservable construct — succinctly, they are unconscious anticipations of performance. Changing expectations for competence is not easy, but it is possible — I get at some of that later.

Certain conditions, however, need to be in place — e.g., a broad conceptualization of ability, a cooperative environment etc. It is unclear that these conditions are present in many of our schools. In fact, many are worried that the opposite is happening. The research and theory I examine here suggest that extreme standardization and competition are incompatible with equalizing expectations in the classroom. They suggest, rather, that current reforms might be making it more difficult to develop and sustain high expectations for all students, and to create classrooms where all students experience similar opportunities to learn. Read More »


“Show Me What Democracy Looks Like”

Posted on April 29, 2014

Our guest author today is John McCrann, a Math teacher and experiential educator at Harvest Collegiate High School in New York City. John is a member of the America Achieves Fellowship, Youth Opportunities Program, and Teacher Leader Study Group. He tweets at @JohnTroutMcCran.

New York City’s third through eighth graders are in the middle of state tests, and many of our city’s citizens have taken strong positions on the value (or lack thereof) of these assessments.  The protests, arguments and activism surrounding these tests remind me of a day when I was a substitute civics teacher during summer school.  “I need help,” Charlotte said as she approached my desk, “what is democracy?”

On that day, my mind flashed to a scene I witnessed outside the White House in the spring of 2003.  On one side of the fence, protestors shouted: “Show me what democracy looks like! This is what democracy looks like!”  On the other side worked an administration who had invaded another country in an effort to “expand democracy.” Passionate, bright people on both sides of that fence believed in the idea that Charlotte was asking about, but came to very different conclusions about how to enact the concept.  Read More »


The Middle Ground Between Opt Out And All In

Posted on April 11, 2014

A couple of weeks ago, Michelle Rhee published an op-ed in the Washington Post speaking out against the so-called “opt out movement,” which encourages parents to refuse to let their children take standardized tests.

Personally, I oppose the “opt-out” phenomenon, but I also think it would be a mistake not to pay attention to its proponents’ fundamental issue – that standardized tests are potentially being misused and/or overused. This concern is legitimate and important. My sense is that “opting out” reflects a rather extreme version of this mindset, a belief that we cannot right the ship – i.e., we have gone so far and moved so carelessly with test-based accountability that there is no real hope that it can or will be fixed. This strikes me as a severe overreaction, but I understand the sentiment.

That said, while most of Ms. Rhee’s op-ed is the standard, reasonable fare, some of it is also laced with precisely the kind of misconceptions that contribute to the apprehensions not only of anti-testing advocates, but also among those of us who occupy a middle ground – i.e., favor some test-based accountability, but are worried about getting it right. Read More »


SIG And The High Price Of Cheap Evidence

Posted on March 11, 2014

A few months ago, the U.S. Department of Education (USED) released the latest data from schools that received grants via the School Improvement Grants (SIG) program. These data — consisting solely of changes in proficiency rates — were widely reported as an indication of “disappointing” or “mixed” results. Some even went as far as proclaiming the program a complete failure.

Once again, I have to point out that this breaks almost every rule of testing data interpretation and policy analysis. I’m not going to repeat the arguments about why changes in cross-sectional proficiency rates are not policy evidence (see our posts here, here and here, or examples from the research literature here, here and here). Suffice it to say that the changes themselves are not even particularly good indicators of whether students’ test-based performance in these schools actually improved, to say nothing of whether it was the SIG grants that were responsible for the changes. There’s more to policy analysis than subtraction.
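A stylized example (with invented numbers) shows why a change in a cross-sectional proficiency rate can point in the opposite direction from actual improvement: the two years’ fourth graders are different students, so a lower-achieving incoming cohort can drag the rate down even if instruction got better for everyone:

```python
# hypothetical scores for one school's 4th grade; proficiency cutoff = 60
cohort_2012 = [62, 65, 70, 58, 59]   # 3 of 5 proficient
cohort_2013 = [55, 57, 61, 52, 54]   # a lower-achieving incoming class

# suppose instruction improved: every 2013 student scores 4 points higher
# than they would have a year earlier -- yet the cross-sectional rate falls
improved_2013 = [s + 4 for s in cohort_2013]

def rate(scores):
    """Share of students at or above the proficiency cutoff."""
    return sum(s >= 60 for s in scores) / len(scores)

print(rate(cohort_2012))    # 0.6
print(rate(improved_2013))  # 0.4
```

The rate drops from 60 to 40 percent even though every student in the second cohort did better than they otherwise would have, which is exactly why subtracting one year’s rate from another is not policy evidence.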

So, in some respects, I would like to come to the defense of Secretary Arne Duncan and USED right now – not because I’m a big fan of the SIG program (I’m ambivalent at best), but rather because I believe in strong, patient policy evaluation, and these proficiency rate changes are virtually meaningless. Unfortunately, however, USED was the first to portray, albeit very cautiously, rate changes as evidence of SIG’s impact. In doing so, they provided a very effective example of why relying on bad evidence is a bad idea even if it supports your desired conclusions. Read More »


In Education Policy, Good Things Come In Small Packages

Posted on March 7, 2014

A recent report from the U.S. Department of Education presented a summary of three recent studies of the differences in the effectiveness of teaching provided to advantaged and disadvantaged students (with effectiveness defined in terms of value-added scores, and disadvantage in terms of subsidized lunch eligibility). The brief characterizes the results of these reports in an accessible manner: the difference in estimated teaching effectiveness between advantaged and disadvantaged students varies quite widely between districts, but overall is about four percent of the achievement gap in reading and 2-3 percent in math.

Some observers were not impressed. They wondered why so-called reformers are alienating teachers and hurting students in order to address a mere 2-4 percent improvement in the achievement gap.

Just to be clear, the 2-4 percent figures describe the gap (and remember that it varies). Whether it can be narrowed or closed – e.g., by improving working conditions or offering incentives or some other means – is a separate issue. Nevertheless, let’s put aside all the substantive aspects surrounding these studies, and the issue of the distribution of teacher quality, and discuss this 2-4 percent thing, as it illustrates what I believe is among the most important tensions underlying education policy today: our collective failure to have a reasonable debate about expectations and the power of education policy. Read More »


Revisiting The Widget Effect

Posted on March 4, 2014

In 2009, The New Teacher Project (TNTP) released a report called “The Widget Effect.” You would be hard-pressed to find many recent publications from an advocacy group that have had a larger influence on education policy and the debate surrounding it. To this day, the report is mentioned regularly by advocates and policy makers.

The primary argument of the report was that teacher performance “is not measured, recorded, or used to inform decision making in any meaningful way.” More specifically, the report shows that most teachers received “satisfactory” or equivalent ratings, and that evaluations were not tied to most personnel decisions (e.g., compensation, layoffs, etc.). From these findings and arguments comes the catchy title – a “widget” is a fictional product commonly used in situations (e.g., economics classes) where the product doesn’t matter. Thus, treating teachers like widgets means that we treat them all as if they’re the same.

Given the influence of “The Widget Effect,” as well as how different the teacher evaluation landscape is now compared to when it was released, I decided to read it closely. Having done so, I think it’s worth discussing a few points about the report. Read More »


Matching Up Teacher Value-Added Between Different Tests

Posted on February 11, 2014

The U.S. Department of Education has released a very short, readable report on the comparability of value-added estimates using two different tests in Indiana – one of them norm-referenced (the Measures of Academic Progress test, or MAP), and the other criterion-referenced (the Indiana Statewide Testing for Educational Progress Plus, or ISTEP+, which is also the state’s official test for NCLB purposes).

The research design here is straightforward: fourth and fifth grade students in 46 schools across 10 districts in Indiana took both tests, their teachers’ value-added scores were calculated, and the scores were compared. Since both sets of scores were based on the same students and teachers, this allows a direct comparison of how teachers’ value-added estimates compare between the two tests. The results are not surprising, and they square with similar prior studies (see here, here, here, for example): The estimates based on the two tests are moderately correlated. Depending on the grade/subject, the correlations are between 0.4 and 0.7. If you’re not used to interpreting correlation coefficients, consider that only around one-third of teachers were in the same quintile (fifth) on both tests, and another 40 or so percent were one quintile higher or lower. So, most teachers were within one quintile, about a quarter of teachers moved two or more quintiles, and a small percentage moved from top to bottom or vice-versa.
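To get a feel for what correlations in that range imply, here is a quick simulation. This is my own sketch, using an assumed correlation of 0.55 (the middle of the reported 0.4-0.7 range) and simulated scores rather than the study’s actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
r = 0.55  # assumed correlation, middle of the reported 0.4-0.7 range

# draw correlated pairs of "value-added" scores from a bivariate normal
cov = [[1.0, r], [r, 1.0]]
a, b = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

def quintile(x):
    # rank the scores and split the ranks into five equal groups (0..4)
    ranks = np.argsort(np.argsort(x))
    return np.floor(ranks / (len(x) / 5)).astype(int)

diff = np.abs(quintile(a) - quintile(b))
print("same quintile on both: ", round(np.mean(diff == 0), 2))
print("one quintile off:      ", round(np.mean(diff == 1), 2))
print("moved two or more:     ", round(np.mean(diff >= 2), 2))
```

With an underlying correlation of 0.55, roughly a third of the simulated “teachers” land in the same quintile both times and about a quarter move two or more quintiles, which is close to the pattern the report describes.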

Although, as mentioned above, these findings are in line with prior research, it is worth remembering why this “instability” occurs (and what can be done about it). Read More »


Is Selective Admission A School Improvement Plan?

Posted on January 23, 2014

The Washington Post reports that parents and alumni of D.C.’s Dunbar High School have quietly been putting together a proposal to revitalize what the article calls “one of the District’s worst performing schools.”

Those behind the proposal are not ready to speak about it publicly, and details are still very thin, but the Post article reports that it calls for greater flexibility in hiring, spending and other core policies. Moreover, the core of the plan – or at least its most drastic element – is to make Dunbar a selective high school, to which students must apply and be accepted, presumably based on testing results and other performance indicators (the story characterizes the proposal as a whole with the term “autonomy”). I will offer no opinion as to whether this conversion, if it is indeed submitted to the District for consideration, is a good idea. That will be up to administrators, teachers, parents, and other stakeholders.

I am, however, a bit struck by two interrelated aspects of this story. The first is the unquestioned characterization of Dunbar as a “low performing” or “struggling” school. This fateful label appears to be based mostly on the school’s proficiency rates, which are indeed dismally low – 20 percent in math and 29 percent in reading. Read More »


Extended School Time Proposals And Charter Schools

Posted on January 22, 2014

One of the (many) education reform proposals that has received national attention over the past few years is “extended learning time” – that is, expanding the day and/or year to give students more time in school.

How schools use the time they have with students is, of course, no less important than how much time they have. Still, the proposal to expand the school day/year may have merit, particularly for schools and districts serving larger proportions of students who need to catch up. I have noticed that one of the motivations for the extended time push is the (correct) observation that the charter school models that have proven effective (at least by the standard of test score gains) utilize extended time.

On the one hand, this is a good example of what many (including myself) have long advocated – that the handful of successful charter school models can potentially provide a great deal of guidance for all schools, regardless of their governance structure. On the other hand, it is also important to bear in mind that many of the high-profile charter chains that receive national attention don’t just expand their school years by a few days or even a few weeks, as has been proposed in several states. In many cases, they extend it by months. Read More »


Disclaimer

This web site and the information contained herein are provided as a service to those who are interested in the work of the Albert Shanker Institute (ASI). ASI makes no warranties, either express or implied, concerning the information contained on or linked from shankerblog.org. The visitor uses the information provided herein at his/her own risk. ASI, its officers, board members, agents, and employees specifically disclaim any and all liability from damages which may result from the utilization of the information provided herein. The content on shankerblog.org does not necessarily reflect the views or official policy positions of ASI or any related entity or organization.

Banner image adapted from 1975 photograph by Jennie Shanker, daughter of Albert Shanker.