Value-Added Data is Blowing Up in California, Thanks to the LA Times

August 16, 2010

NashvilleJeff is back from vacation, and apparently just in time — over the weekend, the LA Times published an explosive story about teachers and teacher effectiveness, publicly naming “good” and “bad” Los Angeles teachers based on a value-added analysis the Times put together itself (by hiring an outside economist/education researcher to run the actual numbers):

Seeking to shed light on the problem, The Times obtained seven years of math and English test scores from the Los Angeles Unified School District and used the information to estimate the effectiveness of L.A. teachers — something the district could do but has not.

The Times used a statistical approach known as value-added analysis, which rates teachers based on their students’ progress on standardized tests from year to year. Each student’s performance is compared with his or her own in past years, which largely controls for outside influences often blamed for academic failure: poverty, prior learning and other factors.

Though controversial among teachers and others, the method has been increasingly embraced by education leaders and policymakers across the country, including the Obama administration.

In coming months, The Times will publish a series of articles and a database analyzing individual teachers’ effectiveness in the nation’s second-largest school district — the first time, experts say, such information has been made public anywhere in the country.

This article examines the performance of more than 6,000 third- through fifth-grade teachers for whom reliable data were available.
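To make the mechanics a little more concrete, here is a bare-bones sketch of the basic idea (my own toy version, with made-up numbers; it is not the model the Times’ researcher actually used, which is considerably more sophisticated): predict each student’s current-year score from his or her own prior-year score, then credit or debit the teacher with the average gap between actual and expected scores among his or her students.

```python
# Toy illustration of a value-added-style estimate.  NOT the Times' model;
# just the core idea: compare each student's actual score to the score you'd
# expect given that student's own prior year, then average per teacher.

from collections import defaultdict

# Hypothetical records: (teacher, prior_year_score, current_year_score)
records = [
    ("Teacher A", 52.0, 61.0),
    ("Teacher A", 40.0, 47.0),
    ("Teacher B", 55.0, 50.0),
    ("Teacher B", 63.0, 58.0),
]

# Step 1: a simple least-squares fit of current score on prior score,
# pooled across all students, gives the "expected" trajectory.
n = len(records)
mean_prior = sum(r[1] for r in records) / n
mean_curr = sum(r[2] for r in records) / n
slope = (sum((r[1] - mean_prior) * (r[2] - mean_curr) for r in records)
         / sum((r[1] - mean_prior) ** 2 for r in records))
intercept = mean_curr - slope * mean_prior

# Step 2: a teacher's "value added" is the average (actual - expected)
# across that teacher's students.
gains = defaultdict(list)
for teacher, prior, curr in records:
    expected = intercept + slope * prior
    gains[teacher].append(curr - expected)

for teacher, diffs in sorted(gains.items()):
    print(teacher, round(sum(diffs) / len(diffs), 1))
```

Real value-added models layer on controls (multiple prior years, classroom composition, measurement error), and that is exactly where most of the methodological arguments start.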

On the merits of using value-added data as an evaluation measure for teachers, the Times has this to say:

Many teachers and union leaders are skeptical of the value-added approach, saying standardized tests are flawed and do not capture the more intangible benefits of good instruction. Some also fear teachers will be fired based on the arcane calculations of statisticians who have never worked in a classroom.

The respected National Academy of Sciences weighed in last October, saying the approach was promising but should not be used in “high stakes” decisions — firing teachers, for instance — without more study.

No one suggests using value-added analysis as the sole measure of a teacher. Many experts recommend that it count for half or less of a teacher’s overall evaluation.

What is particularly interesting is that the Times sent reporters into some classrooms and reported in the story on what they saw (though in remarkably abbreviated fashion).  One such pair of teachers is set up in a best/worst dichotomy, with the lowest-scoring teacher confronted with the results of the data analysis afterward:

In an interview days later, Smith acknowledged that he had struggled at times to control his class.

“Not every teacher works with every kid,” said Smith, 63, who started teaching in 1996. “Sometimes there are personality conflicts.”

On average, Smith’s students slide under his instruction, losing 14 percentile points in math during the school year relative to their peers districtwide, The Times found. Overall, he ranked among the least effective of the district’s elementary school teachers.

Told of The Times’ findings, Smith expressed mild surprise.

“Obviously what I need to do is to look at what I’m doing and take some steps to make sure something changes,” he said.

Most intriguing for me was the analysis of an “elite” teacher whom everyone adores (from principal to parents to students) and who is clearly committed not only to being a good teacher but to improving herself.  For example, the teacher, Ms. Caruso, was one of the first teachers in the district to gain National Board (NBPTS) certification.  However, the Times’ value-added analysis finds a history of poor student performance.  Most crucial, for me, is her reaction:

“For better or worse,” she said, “testing and teacher effectiveness are going to be linked.… If my student test scores show I’m an ineffective teacher, I’d like to know what contributes to it. What do I need to do to bring my average up?”

Two things here: 1) She’s right that, for better or worse, evaluation based on student outcomes is the future (as, I predict, is some sort of pay connected to student achievement), and 2) she’s identified precisely the right avenues for the use of the data: diagnosis and evaluation.  As a side note, I had a minor “Ha!” moment when I read that a teacher with NBPTS certification isn’t doing all that well on a student achievement evaluation.  You may recall my close look at National Board certification.

***************

The bottom line is this: We can either agree that value-added data has significant problems but treat it as a valuable tool that is being continuously improved, OR we can point to all the problems with using data like this (inconsistent controls for student/home/parent variables, inconsistent sample sizes, compounded effects of previous teachers, teaching to the test, wildly fluctuating scores from year to year) and simply throw up our hands and give up.  Here’s the problem with the latter: I truly believe that we need some kind of objective measure of whether teachers are having a positive effect on the learning of their students.

What we do with that measure is another issue entirely.

Michelle Rhee, over in D.C., has already used D.C.’s value-added data system to fire some teachers.  Others, like Ms. Caruso above, want to use the data as a diagnostic tool, to find out what’s going on with their students and what can be improved in their teaching.  That’s the approach I like best: data should never be an enemy.  If we set up a system in which teachers, administrators, and others fear openness and transparency, then we’re doing something wrong.  Rather, we should be looking at how best to provide teachers with meaningful data that they can use to get better.  On this score, we’ve been failing massively in Tennessee.  Our TVAAS teacher reports are a joke.  Take a look at the sample reports for high school [pdf] and elementary/middle [doc] teachers (found here).  There is no differentiation by student, no indication of the areas that need improvement, and no break-out of high-pass or high-fail subject areas (which would show that much of a class needs help in one particular area but not in another).  Nothing helpful.  These are worthless pieces of paper.  There’s been a promise that, under Race to the Top, things will get better, but that remains to be seen.

I hesitate, lest I be labeled repetitious, but Rule #1 is: Data should not be used in a punitive manner.

Some of the more market-oriented educational researchers out there would disagree (I think).  These are the folks clamoring for more teacher firings, relaxation of the regulations and red tape required to become a teacher, and other market-driven reforms that would ease both entry into and exit from the profession.  Many believe (and I don’t think I’m creating a straw man here) that we can hire and fire our way to success.  That is, with enough supply out there, we can spend our time weeding out the bad teachers and hiring enough new ones so that, eventually, we keep the good teachers and sort out the chaff.  This assumes, however, that the teacher pipeline is effectively infinite and that there is little room for teachers to improve with experience.  Both of those assumptions fly in the face of what I know about the profession: 1) Even if it is easier to become a teacher (and I do support alternative licensure and the reform of traditional teacher preparation), there aren’t enough folks out there, at current salary/benefit levels, to fill the void left by all the mediocre-to-bad teachers we would have to fire, and 2) experience matters — teachers can get better if you give them the support and opportunities that they need.

So… That was quite a digression, and I feel myself wandering a bit, so I’ll call a halt to it here.  However, let me make one final point: While data shouldn’t be used punitively, there is a place for data in tenure/firing decisions — it just can’t be the be-all, end-all crux of the matter.  A teacher who has been given meaningful chances (plural) to improve, diagnostics about what needs work, and the support and resources to make changes, and still hasn’t improved (or won’t), needs to try a different job.  But that kind of strategy is a far cry from simply firing a teacher based on a year or two of bad test scores.

This is a developing story across the nation, so look for the topic to reappear from time to time.  To read more reactions on the subject (h/t Alexander Russo and Mike Klonsky), check out the United Teachers of Los Angeles, Larry Ferlazzo, Ed Week, and Bruce Baker (SchoolFinance101).

P.S. On the journalistic ethics of outing teachers identified as “poor” based on a single academic’s analysis of a limited set of data, well, that’s another story… (hint: I’m not terribly impressed).
