DeAngelis: 5 Reasons Why 5-Part Takedown of AEI Report on School Choice, Tests, and Long-Term Outcomes Misses the Mark

They say you shouldn't throw stones if you live in a glass house. Fordham Institute's president, Michael J. Petrilli, recently threw five of them. He released a five-part series that supposedly showed why an American Enterprise Institute report was wrong to conclude that programs' effects on test scores are weak predictors of their effects on long-term outcomes. But Petrilli's critique of the AEI analysis failed to invalidate its conclusions or to establish that test scores are trustworthy proxies for long-term success. Here's what he got wrong.

The most important weaknesses of the five-part series had to do with methodology. First, all of Petrilli's re-analyses were based on a highly truncated sample. In fact, he dropped over a third of the original review's studies linking test scores to high school graduation. Petrilli believed this was justified because the dropped studies were not, in his view, genuine school choice programs. For example, he argued that career and technical education school evaluations should be excluded, even though those schools deliberately spend less time shaping the skills that standardized tests capture.

But that is precisely the point. A disconnect between programs' effects on standardized test scores and on long-term outcomes suggests that test scores are not good proxies for long-term success. And if we regulate teachers and schools based on them, educators may have a perverse incentive to focus less on the character skills that are necessary for true lifelong success.

And, in every single case, the dropped studies included schools that students chose to attend. But it shouldn't even matter whether the studies were of schools of choice. Finding a divergence between short- and long-term outcomes, from any type of educational evaluation, should cause us to question the validity of test scores. Put simply, Petrilli should not have dropped over a third of the original report's observations.

But assume that dropping observations was a good call. The much more astonishing methodological error was in counting null results as positive or negative. Petrilli argued that it would be "reasonable" to look for matches by treating null effects as if they had a sign. No serious social scientist would call that approach "reasonable," because null results are statistically indistinguishable from zero.
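The statistical point can be illustrated with a quick simulation. This is a hypothetical sketch of my own, not an analysis from either report: when a program's true effect is zero, the sign of any single estimate is essentially a coin flip, so coding null results as "positive" or "negative" reads pure noise as signal.

```python
import random
import statistics

random.seed(0)

def simulate_study(true_effect=0.0, n=100, noise_sd=1.0):
    """One hypothetical study estimating a program's effect from a noisy sample."""
    draws = [true_effect + random.gauss(0, noise_sd) for _ in range(n)]
    est = statistics.mean(draws)
    se = statistics.stdev(draws) / n ** 0.5
    # 95% confidence interval; if it contains zero, the result is "null"
    return est, (est - 1.96 * se, est + 1.96 * se)

# Simulate many studies of a program whose true effect is exactly zero
results = [simulate_study() for _ in range(1000)]
null_results = [(est, ci) for est, ci in results if ci[0] <= 0 <= ci[1]]
positive_signs = sum(1 for est, ci in null_results if est > 0)

# Among null results, the point estimate's sign is roughly a coin flip,
# so classifying a null as "positive" or "negative" is arbitrary.
print(f"{len(null_results)} of 1000 studies are null")
print(f"{positive_signs / len(null_results):.0%} of nulls have a positive point estimate")
```

Roughly half of the null estimates land above zero and half below, purely by chance, which is why assigning them a sign tells us nothing about the program.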

But even when treating zeros as positive or negative, Petrilli still found disconnects between test scores and high school graduation 35 percent of the time for math and 27 percent of the time for reading. However, the original report found that 61 percent of the effects on math test scores, and 50 percent of the effects on reading test scores, did not predict effects on high school graduation. In either case, effects on test scores are unreliable predictors of effects on attainment.

But that's not all. The literature finding divergences isn't limited to high school graduation and college enrollment. I have documented disconnects in the most rigorous private school choice literature. I've already found 11 disconnects between private schools' effects on test scores and their effects on other, arguably more important, educational outcomes, such as treatment of others, effort, behavior in school, and adult outcomes. For example, an experimental evaluation of a private school voucher program in Ohio found that winning the lottery to attend a private school had no effect on test scores but did shape students' behavior in a lab setting.

And methods aren't the only problem. There are some important logical errors to note as well.

Petrilli correctly points out that higher graduation rates could simply mean that individual schools have lowered their standards. In other words, high school graduation rates can be gamed, and we have recent evidence of exactly that. But Petrilli fails to point out that the same gaming problem applies to standardized tests, for which we also have plenty of evidence. In fact, the corruption involved in using top-down metrics of any kind for accountability is so widespread that social scientists have given the principle its own name: Campbell's Law. This is just another reason we should not regulate schools based on top-down metrics like test scores or even graduation rates.

But assume that no disconnects existed in the literature. Let's also assume that test scores were indeed valuable predictors of all long-run outcomes we actually cared about. And let's further assume that it was impossible to game the metric.

Regulators would still have a severe knowledge problem. How would they know which schools were the best at shaping test scores? Average test score levels would tell us nothing about how well the schools improved them. However, we could look at test score growth instead. And if the regulators were highly informed, they could use one of the most rigorous econometric methods social scientists currently have to determine schools' effects on test scores: value-added methodology. The problem is that value-added methodology assumes students are randomly assigned to their schools. By definition, schools of choice fail that assumption. Because, you know, kids don't choose their schools at random.
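A toy simulation makes the selection problem concrete. Everything here is a hypothetical illustration of my own; the sorting mechanism and effect sizes are assumptions, not estimates from any real study. Two schools have identical true effects of zero, but because motivated students disproportionately choose one of them, a simple value-added regression manufactures a large "school effect" out of nothing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: both schools have a TRUE effect of zero on growth.
n = 5000
motivation = rng.normal(0, 1, n)      # unobserved by the regulator
prior_score = rng.normal(0, 1, n)

# Non-random choice: more motivated students tend to pick school 1
school = (motivation + rng.normal(0, 1, n) > 0).astype(float)

# Test-score growth depends on motivation, NOT on the school attended
score = prior_score + 0.5 * motivation + rng.normal(0, 1, n)

# Simple value-added regression: current score on prior score + school dummy
X = np.column_stack([np.ones(n), prior_score, school])
beta = np.linalg.lstsq(X, score, rcond=None)[0]

# The school coefficient is far from its true value of zero: the model
# credits the school with the growth its self-selected students brought in.
print(f"estimated 'school effect': {beta[2]:.2f} (true effect: 0.00)")
```

Because motivation is unobserved and correlated with school choice, the regression attributes the motivated students' growth to the school itself, which is precisely why value-added estimates can mislead regulators about schools of choice.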

In other words, even if we all believed test scores were valuable, and even if regulators used the best methodology available, they could still close down schools that were doing good things for their students.

But what happens when regulators close schools that are actually low-quality? Obviously, this causes disadvantaged children to switch schools, which itself has been found to harm student achievement. But that isn't the only problem. Closing an objectively low-quality school could mean that children are displaced into an even worse institution. And there is absolutely no guarantee that a better institution will magically pop up.

The fact is, several studies show that test scores are weak proxies for the outcomes we actually care about. The weak predictive power of test scores suggests that policies incentivizing teachers and schools to improve these crude metrics could actually harm students in the long run.

But families already know this. When given the chance to choose their children's schools, families consistently prioritize other factors over standardized test scores. Maybe families know a little something about their own kids that the experts don't know. And maybe the experts should learn to leave them alone.

Corey A. DeAngelis is an education policy analyst at the Cato Institute鈥檚 Center for Educational Freedom.
