Terri Campbell, a fellow Memphis City Schools teacher, writes in her post "A Tennessee Teacher's Evaluation Success Story" that she no longer "fears multiple levels of student data." Unfortunately, many of the first-year teachers I've spoken with feel the opposite.
I’m a first-year math teacher in Memphis, TN. One of my students—we’ll call her “Regina”—scored consistently in the 30 percent range on assessments in the early part of this year. She was failing. When Regina’s scores began inching up into the 50s and 60s, I was thrilled. Clearly, between Regina’s efforts as a student and mine as a teacher, she was making progress.
That’s what a value-added metric should help teachers do: capture student growth. By design, a value-added metric is a statistical analysis of achievement data that measures a student’s progress over time, with the goal of isolating growth attributable to a single teacher. As a new teacher just learning my craft, I want to be able to use the information, along with other evaluation measures, to improve my classroom practice—to learn what I’m doing well, and what I need to work on.
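To make the idea concrete, here is a minimal sketch of the logic behind a value-added estimate. The numbers and the simple averaging are hypothetical and greatly simplified; real systems such as Tennessee's use far more sophisticated statistical models. The core idea is the same, though: compare each student's actual score with a prediction based on prior performance, and attribute the average difference to the teacher.

```python
# Toy illustration of a value-added calculation.
# All names and numbers below are hypothetical, and the model is
# deliberately simplified: real value-added systems use multi-year
# statistical models, not a plain average of differences.

students = {
    "Regina":    {"predicted": 42.0, "actual": 55.0},
    "Student B": {"predicted": 68.0, "actual": 71.0},
    "Student C": {"predicted": 80.0, "actual": 78.0},
}

# Growth = how far each student's actual score landed above (or below)
# the score predicted from their prior performance.
growth = {name: s["actual"] - s["predicted"] for name, s in students.items()}

# The average growth across the class is the portion a value-added
# model would attribute to the teacher.
value_added = sum(growth.values()) / len(growth)

print(growth)       # per-student growth relative to prediction
print(value_added)  # average growth attributed to the teacher
```

In this sketch, Regina's 13-point jump above her predicted score is exactly the kind of progress a teacher would want the metric to surface, even while another student slips slightly below prediction.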
It’s why I firmly believe that schools are right to use value-added metrics (information on student growth) as one measure to evaluate teachers’ effectiveness.
In Memphis City Schools’ evaluation system, 35 percent of my teacher evaluation this year will come from value-added data. But precisely because this data is being used to measure how well we’re doing our jobs, teachers, especially those new to the profession, also need to understand how it works and how we can use it to improve our craft.
Unfortunately, new teachers are often frustrated by their limited understanding of this valuable source of student data and of how to use it to improve their instructional practices. As the third quarter of our first year draws to a close, not one of the eight first-year teachers across Memphis with whom I’ve spoken has seen their students’ value-added data.
Furthermore, many of my colleagues teach non-tested subjects and do not have their own data.
Currently in Tennessee, only 45 percent of teachers have individual value-added data, and this reflects nationwide trends. This means that teachers in subjects such as Spanish, Health, and the Arts are evaluated on school-level data—data that measures other teachers’ effectiveness, but not their own. Given the push to make student data a large part of teacher evaluations, how can policymakers use the value-added metric to enhance teacher performance and satisfaction?
As a first-year teacher, here’s what I think needs to happen to enable new teachers like me to improve our teaching:
First, district leaders and administrators in Memphis and elsewhere can start by ensuring that new teachers learn, as soon as they enter the profession, how our value-added scores are calculated and how we can use this information in the classroom.
Having taken a college course on value-added teacher assessment, I feel more comfortable than many of my colleagues with this measure being used to evaluate my work. Not only will proper education help new teachers become more accustomed to value-added data and hopefully ease some of our concerns about the measure, but it will also help us better understand how to use this information to improve our classroom instruction.
Second, we need to identify means of assessing student growth in non-core subjects too.
A group of arts educators in Memphis has developed an alternative portfolio-based system for measuring student growth in their subjects. We need more initiatives like this before value-added data can be used to measure teacher performance across all subjects.
At present, value-added data is one of the most effective measures we have to evaluate growth attributable to the teacher in the classroom. I want that information, because I want to be able to celebrate Regina’s successes, learn how to maximize her growth, address barriers when she’s not progressing, and use her data to improve my own teaching.
As Terri Campbell concludes, “It feels much better to say, ‘I set out to excel, and I did.’ ” Terri started the year with apprehension, but ultimately felt that the evaluation tool supported, rather than hindered, her efforts to improve her practice.
To help those who are new to teaching feel supported, we should take these steps: build comfort with value-added data as a component of the evaluation system, and allow us to use this valuable tool to become better teachers.