Wednesday, November 30, 2011 at 2:00 PM
When the new Ohio Teacher Evaluation System, OTES, was presented to the Ohio State Board of Education a few weeks ago, board member Mary Rose Oakar became concerned. Half of the new evaluations must be based on student performance: things like standardized test scores and a measure of how much students learn, known as “value-added.” Oakar worried about the accuracy of the system. After all, she said, most people, including principals and the teachers being evaluated, don’t understand what value-added means.
Education folks borrowed the term "value-added" from the business world. Basically, it means the amount of value added to a good or service at each stage of production. For example, if a company makes bicycles, at one stage it adds the wheels. The “value added” in that stage would be the (very important) wheels on the bicycle.
Similarly, the Ohio Department of Education tries to measure just how much value is added to student learning in a given year.
In practice, it’s a tool education officials use to determine whether students learned something, even if they did poorly on a given exam.
In Ohio, school districts’ and charter schools’ value-added scores are used as one of the metrics on the state report cards. The value-added scores will also play a big role in new teacher evaluations.
But measuring how much value is added to student growth is tricky.
Basically, value-added measures compare the mean of student scores for a year to the mean of those same students’ scores in the same subject the previous year. For example, researchers collect as many fourth grade math test scores as possible and compare them to the same students’ math tests taken in the third grade.
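The core arithmetic behind that comparison can be sketched in a few lines. This is a deliberately simplified illustration, not the department's actual statistical model; the function name and sample scores below are hypothetical.

```python
# Simplified sketch of a value-added-style comparison: the mean gain
# for a cohort of matched students, this year's score vs. last year's.
# Illustrative only -- the real calculation is far more sophisticated.

def simple_gain(last_year_scores, this_year_scores):
    """Average score gain for a cohort of matched students.

    Both lists must be ordered so that index i refers to the
    same student in both years.
    """
    if len(last_year_scores) != len(this_year_scores):
        raise ValueError("need matched scores for the same students")
    gains = [this_yr - last_yr
             for last_yr, this_yr in zip(last_year_scores, this_year_scores)]
    return sum(gains) / len(gains)

# Hypothetical third-grade math scores and the same four students'
# fourth-grade math scores a year later.
third_grade = [410, 395, 430, 405]
fourth_grade = [425, 400, 445, 420]
print(simple_gain(third_grade, fourth_grade))  # average gain of 12.5 points
```

In practice the matching itself is a hard problem: students move, miss tests, or lack prior-year scores, which is part of why, as Cohen notes below, the raw data is full of gaps.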
Matt Cohen, the chief research officer at the Ohio Department of Education, says that in order to get an accurate idea of how much a student learned, the department needs a lot of consistent data from many tests over a long period of time. But in reality, the department has “a limited amount of info on students,” since students aren’t tested every day in every subject. The more tests that are available for a given school or grade, the more accurate the value-added score for that group. And you need just as much data from the previous year in order to compare the two years’ results.
On top of that, Cohen says “people tend to ascribe a whole lot more precision to these assessments” than is really possible. He says all scores have a margin of error; “maybe (students) had a bad day, maybe they guessed well on a couple of things,” which means the scores used to calculate value-added are from the beginning imperfect.
This is the real tough part of creating a value-added measure: trying to account for all the gaps and mistakes in the raw data.
That’s why value-added scores are translated onto a scale that runs from minus two to plus two.
Cohen says a zero on the scale means, “you’re progressing just fine, thank you, and that’s good.” Basically, a zero means that school’s students learned one year’s worth of material in a given year.
Anything above a zero means that a school “significantly exceeded” expectations by teaching its students more than a year’s worth of material. Anything below a zero means students at that school did not gain a year’s worth of knowledge.
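One way to picture that scale is as a growth estimate, measured in years of learning, centered on zero and clamped at the ends. The function below is a hypothetical sketch of that idea, not the state's actual formula, which also has to account for the margin of error Cohen describes.

```python
# Hypothetical mapping of an estimated growth effect onto a -2..+2
# scale, where 0 means exactly one year's worth of growth. This is an
# illustration of the scale's shape, not Ohio's actual calculation.

def scale_growth(estimated_growth, expected_growth=1.0):
    """Map growth (in years of learning) onto a -2..+2 scale."""
    index = estimated_growth - expected_growth
    # Clamp to the ends of the scale.
    return max(-2.0, min(2.0, index))

print(scale_growth(1.0))   # exactly a year's growth -> 0.0
print(scale_growth(1.4))   # more than a year's growth -> 0.4
print(scale_growth(-2.0))  # far below expectations, clamped -> -2.0
```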
Cohen says the Department of Education isn’t concerned about the schools that land in the middle of the range. It’s the outliers, the really high and very low performing schools, that catch the department’s attention. Getting a high value-added score can help bump public school districts up in their report card ratings, while scoring low may earn them a low grade.
Although Cohen says getting a zero is “just fine,” he adds that for many districts, it’s really not enough. If students are behind grade level on standardized test scores for that year, having added only a year’s worth of knowledge may not be cutting it. For example, a third grader who scores like a second grader on a standardized math test may have learned a year’s worth of math, but should really be learning more than that to catch up to where he or she is supposed to be according to state expectations.
Cohen says people often ask him if their students gained more than a year’s worth of knowledge. His answer: “Based on two tests? You’ve got to be kidding. So we make the most of the limited amount of the data we have and based on that it’s a very precise system and it tells us what it tells us.”
Now Cohen and his team are working out how value-added measures will be incorporated into the new teacher evaluation system. That’s a daunting task, since value-added is likely to play a big role in the future of hiring, firing and promoting Ohio’s teachers.