PISA And TIMSS: A Distinction Without A Difference?

Our guest author today is William Schmidt, a University Distinguished Professor and co-director of the Education Policy Center at Michigan State University. He is also a member of the Shanker Institute board of directors.

Every year or two, the mass media is full of stories on the latest iterations of one of the two major international large-scale assessments, the Trends in International Mathematics and Science Study (TIMSS) and the Program for International Student Assessment (PISA). What perplexes many is that the results of these two tests -- both well-established and run by respected, experienced organizations -- suggest different conclusions about the state of U.S. mathematics education. Generally speaking, U.S. students perform relatively well on the TIMSS and relatively poorly on the PISA, compared with their peers in other nations. Policy advocates can simply cite whichever test result better suits their argument, leaving the general public without clear guidance.

Now, in one sense, the differences between the tests are more apparent than real. One reason the U.S. ranks better on the TIMSS than on the PISA is that the two tests sample students from different sets of countries. The PISA includes many more wealthy countries, whose students tend to do better; hence the U.S.'s lower ranking. It turns out that, when we look only at the countries that participated in both the TIMSS and the PISA, the country rankings are similar. There are also some differences in statistical sampling (for example, the TIMSS tests students in specific grades, while the PISA tests 15-year-olds regardless of grade level), but these are fairly minor.

There is, however, a major distinction in what the two tests purport to measure: the TIMSS focuses on formal mathematical knowledge, whereas the PISA emphasizes the application of mathematics to real-world situations, what its designers term “mathematics literacy.” As a consequence, it would not be surprising to find major differences in how students perform, given that teachers in some countries might concentrate on formal mathematics and those in others on applied mathematics.

But the real surprise is that these differences may not matter as much as we might suspect. For the first time, the most recent PISA included questions asking students what sorts of mathematics they had been exposed to: formal mathematics, applied mathematics, or word problems. After analyzing the new PISA data, we discovered that the strongest predictor of how well a student did on the PISA was exposure to formal mathematics. This is a notable finding, to be sure, since the PISA is designed to assess skill in applied rather than formal math. Exposure to applied mathematics has a weaker relationship to mathematics literacy, and one with diminishing marginal returns: past a certain point, additional work in applying math is actually associated with lower levels of mathematics literacy.
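To make the shape of that relationship concrete, one illustrative way to express it (not the specification used in the analysis itself) is a regression in which applied exposure enters with a quadratic term:

\[ \text{literacy} = \beta_0 + \beta_1\,\text{formal} + \beta_2\,\text{applied} + \beta_3\,\text{applied}^2 + \varepsilon \]

In a specification like this, a large positive \(\beta_1\) corresponds to the strong formal-mathematics finding, while a positive \(\beta_2\) paired with a negative \(\beta_3\) produces returns to applied exposure that shrink and eventually turn negative, as described above.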

Why these unexpected results? One reason might be that students need to be very comfortable with a mathematical concept before they can apply it in any meaningful way. One cannot calculate what percentage of one's income goes to housing (say, $1,200 in rent out of $4,000 in monthly income, or 30 percent) without a clear understanding of how proportions work. It appears that a thorough grounding in formal mathematical concepts is a prerequisite both to understanding and to using mathematics.

- William Schmidt