
New Policy Brief Released: "What Gets Measured Gets Done: Improving Performance Measures in the Next Generation of Accountability under ESSA"

May 14, 2018

A new study from the Education Research Alliance for New Orleans at Tulane University examines how school ratings would change if states evaluated school performance differently.

As the national debate continues over how states should measure school performance under the new federal ESSA law, the Education Research Alliance for New Orleans released a new study that explores how school ratings would change if states evaluated school performance differently. The study examines data from Louisiana from 2006 to 2014 and finds that 28.6% of Louisiana high schools would receive different performance ratings (e.g., moving from a letter grade of F to D) if school performance measures evaluated not only test scores and graduation rates but also college entry levels.

Lead author Douglas N. Harris said, "If the goal is to hold schools accountable for the factors most associated with students' long-term life success, then policymakers are probably still too focused on test scores."

Harris and co-author Lihan Liu also analyze how school ratings would change if states evaluated schools on student growth, or value-added, measures in addition to outcome levels. The study finds that if school performance measures were based on a 50-50 mix of achievement levels and achievement value-added, instead of levels alone, then 24.2% of elementary schools and 32.8% of high schools in Louisiana would receive different performance ratings.

"Growth measures are better indicators of what schools actually contribute to student learning because they take into account where students start when they first walk into class," says Liu. "By focusing so much on outcome levels, policymakers end up punishing schools just because their students started further behind."

The study estimates the practical impact of using both value-added and levels to measure school performance by simulating a state policy of closing schools that are low-performing for four consecutive years, similar to the policy used in New Orleans from 2009 to 2014.
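The 50-50 blend of achievement levels and value-added that the study considers can be sketched in a few lines. This is a minimal illustration of the weighting idea only: the 0-100 score scale, the letter-grade cutoffs, and the example numbers below are hypothetical assumptions, not Louisiana's actual accountability formula.

```python
# Illustrative sketch of a composite school rating that mixes
# achievement levels with value-added (growth). The 50/50 weights,
# score scale, and grade cutoffs are hypothetical, not Louisiana's formula.

def composite_score(level, value_added, w_growth=0.5):
    """Blend an achievement-level score and a value-added score
    (both assumed to be on a 0-100 scale) using the growth weight."""
    return (1 - w_growth) * level + w_growth * value_added

def letter_grade(score):
    """Map a 0-100 composite score to a letter grade (illustrative cutoffs)."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# A hypothetical school whose students start behind:
# low achievement levels (62) but strong growth (85).
levels_only = letter_grade(composite_score(62, 85, w_growth=0.0))  # "D"
blended = letter_grade(composite_score(62, 85, w_growth=0.5))      # "C"
```

Under a levels-only rating the hypothetical school above gets a D; once growth carries half the weight, its composite rises to 73.5 and its grade to a C, which is the kind of rating change the study quantifies.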
The authors find that including both levels and value-added measures for test scores and graduation rates when determining school performance ratings would increase annual student achievement levels for the bottom fifth of schools statewide by about 0.4 percentiles and increase the statewide high school graduation rate by 0.4 percentage points. Their analysis also suggests that switching from levels-only to a mix of levels and value-added for performance ratings based on test scores, high school graduation, and college entry rates would increase the statewide college entry rate by 0.4 percentage points for high schools.

"It appears that accountability policies are unnecessarily sacrificing student outcomes," says Harris.

The authors also note that their estimates capture only part of the problem. For example, parents increasingly can choose the schools their children attend, and they also pay attention to the performance ratings. The authors conclude that if state policymakers seek to hold schools accountable for what they can control, and for the outcomes that are most predictive of students' long-term success, then most states' ESSA plans still place too little emphasis on value-added measures and on measures like college entry.