I’m catching up on my Marginal Revolution backlog. Tyler Cowen highlights a study on the effectiveness of public schools compared to private schools in Korea. The abstract:
We show that private high school students outperform public high school students in Seoul, South Korea, where secondary school students are randomly assigned into schools within school districts.
That’s a bold claim. I mean, there are lots of confounding factors that co-
Both private and public schools in Seoul must admit students randomly assigned to them, charge the same fees, and use the same curricula under the so-called ‘equalization policy’.
All right. You reeled me in with a three-hit combo. Bring it on home, study co-authors Youjin Hahn, Liang Choon Wang, and Hee-Seung Yang. Why do private schools outperform their virtually identical public school counterparts?
[…] Private schools enjoy greater autonomy in hiring and other staffing decisions and their principals and teachers face stronger incentives to deliver good students’ performance. Our findings suggest that providing schools greater autonomy in their personnel and resource allocation decisions while keeping school principals accountable can be effective in improving students’ outcomes.
Caveat
Now, these performance results are based on standardized tests, which I touched on a bit last week. I’m not sure to what extent “resource allocation” is code for “can afford to buy textbooks from the standardized test companies” or if that’s just not a concept they really have in Korea.
But still. This is fascinating, if there’s really a statistically meaningful difference in student performance when principals have autonomy in personnel matters. I can’t help but draw parallels with how difficult it is to fire teachers in America, and how there’s basically no reliable way to tell whether a teacher is bad and needs firing.
The double secret caveat: those performance improvements were measured entirely by standardized tests, and from where I’m sitting (read: the peanut gallery), evaluations based on standardized test scores alone don’t seem super useful. At least not on this side of the Other Pond.
Vaguely Meaningless
For example, this 2012 report from the Annenberg Institute studied teacher evaluation data in New York. The results were inconclusive, to say the least:
[F]or all teachers of math, and using all years of available data, which provides the most precise measures possible, the average confidence interval width is about 34 points (i.e., from the 46th to 80th percentile). When looking at only one year of math results, the average width increases to 61 percentile points.
That is to say, the average teacher had a range of value-added estimates that might extend from, for example, the 30th to the 91st percentile. The average level of uncertainty is higher still in ELA. For all teachers and years, the average confidence interval width is 44 points. With one year of data, this rises to 66 points.
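To build intuition for why a single year of data yields such wide intervals, here’s a minimal simulation sketch in Python. The noise model and every parameter in it are my own illustrative assumptions, not the report’s actual value-added specification:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative parameters only -- not from the Annenberg report.
n_teachers = 10_000
sd_true = 1.0    # spread of "true" teacher effects
sd_noise = 0.5   # measurement noise in a single year's estimate

def avg_ci_width(n_years):
    """Average width, in percentile points, of a 95% confidence
    interval on a teacher's value-added percentile rank."""
    true_effect = rng.normal(0.0, sd_true, n_teachers)
    se = sd_noise / np.sqrt(n_years)  # noise shrinks as years accumulate
    estimate = true_effect + rng.normal(0.0, se, n_teachers)
    lo, hi = estimate - 1.96 * se, estimate + 1.96 * se
    # Map the interval endpoints to percentile ranks via the CDF of
    # the teacher-effect distribution (a simplification).
    widths = (norm.cdf(hi / sd_true) - norm.cdf(lo / sd_true)) * 100
    return widths.mean()

for years in (1, 3):
    print(f"{years} year(s) of data: CI ~{avg_ci_width(years):.0f} percentile points wide")
```

With these made-up numbers the output lands in the same neighborhood as the report’s figures, but the point is the qualitative one: averaging more years shrinks the noise, and a single noisy year can leave a teacher’s percentile rank almost anywhere.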
For reference, a 66-percentile-point spread on the SAT math section is roughly the difference between scoring a 700 and a 430. If the SAT folks couldn’t pin down your performance any more precisely than that, I don’t think many colleges would use it for admissions.
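And a quick back-of-the-envelope check on that SAT comparison, treating SAT math scores as roughly normal with mean 500 and standard deviation 110 (my assumption; the College Board’s published percentile tables are the real reference):

```python
from scipy.stats import norm

def sat_math_percentile(score, mean=500.0, sd=110.0):
    # Assumes an approximately normal score distribution -- a
    # convenient fiction, not the College Board's actual tables.
    return norm.cdf((score - mean) / sd) * 100

spread = sat_math_percentile(700) - sat_math_percentile(430)
print(f"700 vs. 430: ~{spread:.0f} percentile points apart")  # ~70
```

The normal approximation gives about 70 percentile points, close enough to the 66-point figure to make the point.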
Hopefully, someone devises (devised?) a better way to evaluate teachers than that.