Data-Driven Instruction Can’t Work If Instructors Don’t Use The Data

Posted by Matt Di Carlo on July 11, 2013

In education today, data, particularly testing data, are everywhere. One of many potentially valuable uses of these data is helping teachers improve instruction – e.g., identifying students’ strengths and weaknesses. Of course, this positive impact depends on the quality of the data and how they are presented to educators, among other factors. But there’s an even more basic requirement – teachers actually have to use the data.

In an article published in the latest issue of the journal Education Finance and Policy, economist John Tyler takes a thorough look at teachers’ use of an online data system in a mid-sized urban district between 2008 and 2010. A few years prior, this district invested heavily in benchmark formative assessments (four per year) for students in grades 3-8, and an online “dashboard” system to go along with them. The assessments’ results are fed into the system in a timely manner. The basic idea is to give these teachers a continual stream of information, past and present, about their students’ performance.

Tyler uses web-usage logs from the district’s system, as well as focus groups with teachers, to examine the extent and nature of teachers’ data usage (along with a few other things, such as the relationship between usage and value-added). What he finds is not particularly heartening. In short, teachers didn’t really use the data.

For example, in 2009 (three years after the system launched), the typical teacher logged on to the system about once a week (twice if you exclude the teachers who didn’t log on at all), or 33 times over the course of the year. The average weekly time spent using the dashboard was roughly 10 minutes (or about a half hour for teachers who actually used it). About half this time was spent viewing pages that didn’t contain any student data, most notably lesson plans and login/navigational screens. The median time teachers spent viewing the actual student data was a little over an hour across the entire year. One in three teachers logged on to the system only once.

Tyler’s focus groups with the teachers also yielded interesting results. He explains:

Teachers in these meetings were quite candid in expressing their opinions about and experiences with Dashboard. One factor that arose with relative frequency was an expressed concern that the Benchmark tests lacked some validity because they often tested material the teachers had yet to cover in class. A second factor that was supported across the focus group discussions was a perceived lack of instructional time to act on information a teacher might gain from Dashboard data. In particular, teachers expressed frustration with the lack of time to re-teach topics and concepts to students that had been identified on Dashboard as in need of re-teaching. A third concern was a lack of training in how to use Dashboard effectively and efficiently. A fourth common barrier to Dashboard use cited by teachers was a lack of time for Dashboard-related data analysis.

This reveals the complicated web of factors that must be in place if teachers are to get anything out of data systems like this one (also see this report on a similar pilot program in Pennsylvania). Teachers need time and training, not only to use the system itself, but also to act on its recommendations (e.g., “re-teach”). The assessments must be in sync with the curriculum. And, of course, teachers need to believe that the information will be useful to their practice, or they are unlikely to use it.

Many districts are investing in these types of systems, and it’s a sure bet that more will do so going forward. “Data-driven instruction” is a wonderful-sounding, catchy term, but without extensive preparation and thoughtful, patient implementation, it might not work out as anticipated.

- Matt Di Carlo

2 Comments posted so far

  • Assistance from an instructional coach or mentor can be invaluable in helping teachers to sift through such student data, to use the data to identify educators’ strengths and weaknesses, and to strengthen instructional practice. Such data needn’t come from purely “high stakes” assessments, and such opportunities for educators to reflect and self-assess should not come solely through high-stakes evaluation systems. There should be regular opportunities for teachers to reflect on their instructional practice and to review student work (including test scores) over the course of the school year. For beginning (and struggling) educators, these opportunities must come much more frequently than the design of new evaluation systems typically provides for.

    Comment by Liam Goldrick
    July 12, 2013 at 1:33 PM
  • The study found no relationship between use of the dashboard data and student gains in achievement. Perhaps the teachers were simply making a rational choice not to use an ineffective program.

    Comment by Ray
    July 12, 2013 at 3:19 PM

