Research And Policy On Paying Teachers For Advanced Degrees

There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).

The third typical factor determining teachers’ salaries is their level of education. Usually, teachers receive a permanent raise for acquiring additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts also award raises for accumulating a certain number of credits toward a master’s or doctorate, and for earning a Ph.D.). The size of the master’s raise varies, but, to give a rough idea, it averages about 10 percent over the base salary of bachelor’s-only teachers.
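To make the arithmetic concrete, here is a minimal sketch of how these three factors might combine; the base salary, step raise, and top step below are hypothetical, with the master's lane set at the roughly 10 percent average just mentioned.

```python
# Hypothetical illustration of a salary schedule: district base pay, experience
# "steps" that end at a top step, and a degree "lane" premium (~10% for an MA).
def scheduled_salary(base: float, years: int, top_step: int, degree: str) -> float:
    step_raise = 0.02                        # assumed 2% raise per step (hypothetical)
    lane_premium = {"BA": 0.00, "MA": 0.10}  # ~10% average MA premium cited above
    steps = min(years, top_step)             # step raises stop at the "top step"
    return base * (1 + step_raise * steps) * (1 + lane_premium[degree])

# A 10th-year teacher with a master's degree in a district with a $40,000 base:
print(scheduled_salary(40_000, years=10, top_step=20, degree="MA"))  # 52800.0
```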

This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere.

As we'll see, the argument that we should not be granting teachers raises for advanced degrees is not without merit. It is also, however, based on a generalization that glosses over key variation by degree type and context, as well as the potential for harnessing these findings in policy.

Before discussing that, however, it bears reiterating that virtually all of the arguments to eliminate raises for advanced educational degrees are based on their association (or lack thereof) with test-based productivity measures such as value-added. Certainly, there is a strong case for assuming that value-added estimates do provide some signal of “true” teacher effectiveness, and for acknowledging that they should play a significant role in this discussion. But they are not the only measure that is available or relevant, and paying teachers for advanced degrees can also have other benefits, such as improving recruitment and retention. In addition, just from the perspective of common sense, it seems ill-advised simply to eliminate incentives for teachers to invest in their education.

Perhaps there is some empirically backed middle ground here, even if we adopt a purely test-based approach to assessing the value of awarding raises for advanced degrees. In general, it is true, as advocates of eliminating master’s raises argue, that this literature overwhelmingly finds little or no discernible relationship between teachers’ test-based effectiveness and whether or not they have a master’s degree (see the review of the recent literature in Harris and Sass 2007). A few studies even find small negative associations.

There is, however, some evidence that this overall characterization of the literature misses important underlying variation by the type of degree and the subject/grade being taught. Specifically, several studies find modest positive associations between advanced degrees and math value-added among elementary school teachers (see Betts et al. 2003; Dee 2004; Nye et al. 2004) and high school teachers (Clotfelter et al. 2010), as well as among special education teachers (Feng and Sass 2010). To be clear, such relationships are not found in all analyses, but there is arguably enough evidence to rate this as a potentially important qualification (one which many skeptics of giving raises for master's degrees acknowledge).

(There is also some work that lends itself to similar conclusions about math and/or science effectiveness when looking at certification [e.g., Goldhaber and Brewer 2010], undergraduate coursework [Monk 1996; Boyd et al. 2009; Goldhaber and Brewer 1996], and professional development [Harris and Sass 2007], as well as certification in special education [Feng and Sass 2010].)

So, perhaps, simply completing graduate coursework isn’t associated with performance (by this test-based definition), but the fact that there is some evidence of improvement in math (and perhaps science) specifically may be telling us something about the importance of learning the content one is teaching. This is not to say that one must choose between pedagogy and content (a false choice), but it is certainly wise to consider the importance of the latter when compensating teachers. It's therefore worth asking whether we find evidence of a difference between a “generalist” graduate degree (i.e., in education) and a degree in a specific field, such as mathematics (or, perhaps, a degree that emphasizes content).

Checking this is difficult, given the limited availability of data on the specific fields in which degrees are earned (not to mention the scarcity of tests beyond math and reading), but a couple of studies suggest that students of secondary school math teachers who earn a master’s in math (or math education) achieve better testing results than students of colleagues whose master's degrees are in non-math-related subjects (Goldhaber and Brewer 1996; 2000; also see Rowan et al. 1997).

Granted, the research on field-specific advanced degrees is neither voluminous nor consistent (for example, Rowan, Correnti and Miller [2002] find no relationship between field-specific bachelor's/master's degrees and effectiveness). It does, however, once again suggest that some kinds of degrees, specifically those in or related to the fields teachers are teaching, may in fact be related to performance (at least in math and science, and at least to the degree that test-based productivity measures capture such performance). This squares with common sense: Put simply, content knowledge matters.

From this perspective, and given the scarcity of consistent evidence linking value-added scores with other tangible “paper credentials” beyond past scores (not to mention the potential benefits of these raises for other outcomes, such as retention, and the attractiveness of encouraging teachers to invest in their skills and knowledge), eliminating raises for advanced degrees seems like it may be throwing out the baby with the bathwater.

One can easily imagine, for example, experimenting with structuring salary schedules in a manner that incentivizes teachers’ getting advanced degrees related to or in their fields. Most simply, raises for secondary school teachers might depend on their pursuing advanced degrees in the specific fields they teach (e.g., math or math education, science or science education, etc.), while elementary school teachers might be encouraged to get degrees in or related to one of a small set of specific fields, such as mathematics or beginning reading instruction.
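As a rough illustration of how such a rule might be encoded (the fields and figures here are hypothetical, not drawn from any actual salary schedule), the degree lane could pay its premium only when the advanced degree is in, or related to, the subject a teacher actually teaches:

```python
# Hypothetical "field-specific" degree lane: the raise applies only when the
# advanced degree is in (or related to) the teacher's own subject, or, for
# elementary teachers, in one of a small set of priority fields.
RELATED_FIELDS = {
    "math": {"math", "math education"},
    "science": {"science", "science education"},
    "elementary": {"mathematics", "beginning reading instruction"},
}

def lane_premium(subject_taught: str, degree_field: str) -> float:
    # Full (hypothetical) 10% premium for an in-field or related advanced degree.
    if degree_field in RELATED_FIELDS.get(subject_taught, set()):
        return 0.10
    return 0.0  # no premium otherwise (a smaller partial premium is another option)

print(lane_premium("math", "math education"))  # 0.1
print(lane_premium("math", "history"))         # 0.0
```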

Many education schools already offer content-focused programs, and compensation reform could spur additional change. It might encourage education schools to develop more, and better, content-focused programs, perhaps in partnership with other departments, and it might also lead to changes in the coursework required for teachers' initial placements.

Obviously, any attempt to reconceptualize the “degree lane” component of salary schedules in this manner would have to contend with a number of issues (e.g., how to deal with teachers of subjects such as art and physical education), and, needless to say, districts would have to honor their commitments to teachers who have already started or completed their degrees. In addition, there would have to be close monitoring of how the policy affected other outcomes (including retention, and whether or not it served to discourage teachers’ investment in higher education).

But such an attempt would seem worthwhile at this point, at least on a trial basis. One important first step, though, would be to track this information about degree fields more aggressively, and conduct more research. Another would be to ease back on the rhetorical throttle when it comes to advocating for the wholesale elimination of incentives for teachers to pursue higher education just because the degrees are not correlated consistently with test-based productivity measures.

- Matt Di Carlo


The important component here is how we are measuring effectiveness, something you discussed quite well in your recent VAM posts. If we say that advanced degrees are not worthwhile just because they don't increase test scores, that is one small data point.

If this incentive for increasing knowledge and skill is going to be removed, then others need to be implemented. There need to be explicit career paths for teachers to increase their salaries that are not necessarily administrative and that don't always require them to leave the classroom entirely.


Nice post, Matthew.

While researchers measure experience as a continuous variable (and often find stronger results), they usually measure education as a dummy variable -- bachelor's vs. master's -- which decreases the likelihood of finding a significant result.
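A minimal sketch of that measurement point, using synthetic data and made-up variable names (nothing here comes from an actual dataset): when there is a gradual payoff to graduate coursework, a coarse BA/MA dummy can understate what a finer, continuous measure picks up.

```python
# Synthetic illustration: education measured as a BA/MA dummy vs. as a
# continuous count of graduate credits (all data and names are made up).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "experience": rng.integers(0, 30, n),
    "grad_credits": rng.integers(0, 61, n),   # credits beyond the bachelor's
})
df["has_masters"] = (df["grad_credits"] >= 36).astype(int)  # collapsed dummy
# Simulated "value-added" with a small, gradual payoff to graduate coursework.
df["value_added"] = (0.01 * df["experience"]
                     + 0.003 * df["grad_credits"]
                     + rng.normal(0, 1, n))

# Specification 1: education as a BA/MA dummy (the usual approach).
dummy_fit = smf.ols("value_added ~ experience + has_masters", data=df).fit()
# Specification 2: education as a continuous measure of graduate coursework.
credit_fit = smf.ols("value_added ~ experience + grad_credits", data=df).fit()

print(dummy_fit.params["has_masters"], dummy_fit.pvalues["has_masters"])
print(credit_fit.params["grad_credits"], credit_fit.pvalues["grad_credits"])
```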

Reformers rarely note that people with master's degrees in other fields are paid a premium for their graduate degrees (and those degrees are rarely questioned or subjected to "rigorous" statistical analysis). On average across the economy, those with master's degrees earn about $12k more than those with bachelor's degrees. Certainly, school districts and unions are not alone in paying additional money for additional knowledge. (http://www.usnews.com/education/best-graduate-schools/articles/2012/06/…)

Surely there is something akin to value-added in the business world, so we can learn whether MBAs outperform non-MBAs?