What Gets Measured Gets Done: Improving Performance Measures in the Next Generation of Accountability under ESSA
A policy brief by Douglas N. Harris and Lihan Liu examines how school ratings would change if states evaluated school performance differently.
Over the past three decades, state governments have increasingly held schools accountable for their performance, especially student achievement levels. As a result of the new federal ESSA law, many states are also revising the ways they measure and use school performance to assign school performance ratings, such as A-F letter grades. Most of this brief focuses on the question: How would school ratings change if states measured school performance differently? In particular, how much would school performance ratings change if we added measures, like college entry, that are strong predictors of students' long-term life success? And how much would they change if we focused not on outcome levels but on schools' contributions to those outcomes, sometimes called "value-added"? We address these questions using data from Louisiana and find:

- If policymakers measured high school performance not only with test score and graduation levels but also with college entry levels, then our analysis suggests that 28.6% of Louisiana high schools would receive different performance ratings (e.g., moving from a letter grade of F to D).
- If school performance measures were based on a 50-50 mix of achievement levels and achievement value-added, instead of levels alone, then 24.2% of elementary schools and 32.9% of high schools in Louisiana would change performance categories.
- Value-added can also be used to evaluate school performance on outcomes other than achievement. If high school performance were measured by a mix of graduation levels and graduation value-added, instead of graduation levels alone, 22.1% of Louisiana high schools would change performance categories.
- If high school performance were measured by a mix of college entry levels and college entry value-added, instead of college entry levels alone, 30.7% of Louisiana high schools would change categories.
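To make the weighting concrete, the sketch below shows one way a composite rating could blend an outcome level with a value-added estimate, as in the 50-50 mix described above. The function names, weights, and letter-grade cutoffs are illustrative assumptions for exposition, not Louisiana's actual rating formula.

```python
# Illustrative sketch (not Louisiana's actual formula): blending a school's
# outcome level with its value-added estimate, both expressed on a 0-100
# percentile scale, and mapping the result to an A-F grade.

def composite_score(level_pct, value_added_pct, level_weight=0.5):
    """Weighted blend of outcome level and value-added (0-100 scales)."""
    return level_weight * level_pct + (1 - level_weight) * value_added_pct

def letter_grade(score):
    """Map a 0-100 composite score to an A-F grade (hypothetical cutoffs)."""
    for cutoff, grade in [(90, "A"), (75, "B"), (60, "C"), (45, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# A school with low achievement levels (40th percentile) but high
# value-added (85th percentile): its rating depends on the weighting.
levels_only = composite_score(40, 85, level_weight=1.0)  # 40.0
mixed = composite_score(40, 85, level_weight=0.5)        # 62.5
print(letter_grade(levels_only), letter_grade(mixed))    # F C
```

Under a levels-only measure this hypothetical school is rated F; under the 50-50 mix it moves to a C, which is the kind of category change the percentages above are counting.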
For the above analyses, the results for New Orleans' elementary schools mostly mirror the results across Louisiana. However, New Orleans' high schools are more likely than Louisiana high schools overall to change performance categories when value-added to high school graduation and college entry is used.

We estimate the practical impact of shifting toward value-added by simulating a state policy of closing low-performing schools over four consecutive years, similar to the policy used in New Orleans during 2009-2014. We find:

- Switching from test score and graduation rate levels alone to equal weight on levels and value-added when choosing which schools to close would increase annual student achievement for the bottom fifth of all schools statewide by about 0.4 percentiles and increase the statewide high school graduation rate by 0.4 percentage points.
- When we include college entry alongside test scores and high school graduation, switching from levels alone to a mix of levels and value-added would increase the statewide college entry rate by 0.4 percentage points.

Since the choice of performance measures is important, we also ask an additional question: As part of their ESSA plans, how many states are planning to add college outcomes and value-added measures to their performance metrics? According to state ESSA plans, only 24 states plan to use value-added or a similar measure, and only 8 of these plan to give value-added a weight of 40% or higher in their overall performance measures. While 18 states plan to include college "readiness," only 3 plan to use actual post-secondary outcomes as school performance measures.

If states seek to hold schools accountable for what they can control, and for the outcomes most predictive of students' long-term success, then most states' ESSA plans still place too little emphasis on value-added measures and outcomes like college entry. What gets measured gets done.
This is evident in our simulations of school closure and takeover, but other research also shows that measures matter in more subtle and indirect ways, such as when parents gather information to choose schools. If we can improve school performance measures, our analysis shows, we can improve actual student outcomes.