How do we rate schools? Part III: The API

This post is part of a series on online school rating systems. Click to read the first or second post.

Remember Gray Davis? Yeah, he was the California governor who got recalled in 2003. While people might remember him for his “excessive” fundraising, I remember him as the person who radically changed education in California, for better or worse.

In 1999, he signed the Public Schools Accountability Act, which created the Academic Performance Index (API). API gave all schools a score from 200 to 1000 based solely on their test scores.

For the first time, it was extremely clear that many schools were completely failing (at least in terms of test performance). Many schools had scores below 600, when the state’s goal for a proficient school was 800 or higher.

I would argue that it was the advent of the API score that gave the original strength to the charter movement in California. Many charter schools were able to argue that they had better API scores than their local schools. This convinced many parents to sign up, which created waitlists, and donors rushed in to fund more charter schools.

I definitely think that part of the reason that charter schools succeeded on the API score was good instruction. But I also think that another reason they succeeded was that they took the time to look at the formula of how API was calculated.

And the formula was extremely complicated. 

But if you take a close look at the formula, you begin to realize that there were a few ways to play the testing game so that you could boost your API score.

  • First, you had to realize that the API score was weighted toward English. In every grade, English made up between 48% and 60% of the API’s impact, while math made up only 32% to 40%. The science and history tests, which were administered only in 5th grade, 8th grade, and high school, made up as little as 5% of the API. In other words, it paid to stress English scores ahead of math scores, and science and history barely factored in at all.
  • Another key attribute was that the score itself didn’t matter; it was the student’s “proficiency level” that counted. A 305 and a 349 were counted exactly the same, because both fell in the “Basic” band. As a result, if you really wanted to move the needle, you needed to focus on the kids “on the cusp” of the next level.
  • And finally, the API effectively incentivized working with your most struggling students. Moving a student from “Far Below Basic” to “Below Basic” was worth almost three times as much as moving a student from “Proficient” to “Advanced”. (The sketch after this list makes the arithmetic concrete.)
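To make those three levers concrete, here is a minimal Python sketch of an API-style calculation. The performance-band point values (200/500/700/875/1000) are the ones I remember from the CDE’s technical documentation, but the scaled-score cut points and subject weights below are simplified placeholders I chose for illustration, not the actual CST values.

```python
# A toy API-style calculation. The band point values match the published
# API formula as I recall it; the cut scores and weights are placeholders.

# Points the API awarded for each performance level.
BAND_POINTS = {
    "Far Below Basic": 200,
    "Below Basic": 500,
    "Basic": 700,
    "Proficient": 875,
    "Advanced": 1000,
}

# Hypothetical scaled-score cut points (the real cuts varied by test).
CUTS = [
    (268, "Far Below Basic"),  # below 268
    (300, "Below Basic"),      # 268-299
    (350, "Basic"),            # 300-349
    (393, "Proficient"),       # 350-392
]

def band(scaled_score):
    """Map a scaled test score to its performance band."""
    for upper, name in CUTS:
        if scaled_score < upper:
            return name
    return "Advanced"

# Illustrative subject weights (the real weights varied by grade span).
WEIGHTS = {"english": 0.55, "math": 0.35, "science": 0.05, "history": 0.05}

def api(scores_by_subject):
    """Weighted average of band points across subjects, on a 200-1000 scale."""
    total = 0.0
    for subject, scores in scores_by_subject.items():
        mean_points = sum(BAND_POINTS[band(s)] for s in scores) / len(scores)
        total += WEIGHTS[subject] * mean_points
    return round(total)

# The "cusp" effect: a 305 and a 349 earn identical points,
# while a 349 -> 351 move jumps a full band (700 -> 875 points).
assert BAND_POINTS[band(305)] == BAND_POINTS[band(349)] == 700

# The bottom-weighted incentive: moving Far Below Basic -> Below Basic
# gains 300 points per student; Proficient -> Advanced gains only 125.
print(BAND_POINTS["Below Basic"] - BAND_POINTS["Far Below Basic"])  # 300
print(BAND_POINTS["Advanced"] - BAND_POINTS["Proficient"])          # 125
```

Run it and the two prints spell out the incentive at the bottom of the scale: 300 points per student for Far Below Basic to Below Basic, versus 125 for Proficient to Advanced.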

Now, I am not saying that this is necessarily a bad thing. It is a good thing to focus on your struggling students. It is a good thing to focus on literacy across the curriculum. It is a good thing to push students to the zone of proximal development.

But what the API also created was an information gap. Schools that knew how to “play the game” with the API could find ways to push their score higher (again, in ways that definitely helped students). But schools with no knowledge of the priorities baked into the calculation often struggled.

And so I personally like to think of the API mostly as a measure of a school’s leadership. If a school had a leader who knew the ins and outs of the API system, they could harness the school’s resources toward API success. Schools without such leaders often allocated resources in ways that did not produce API success.

There are a lot of criticisms of the API system. The score neglected other measures of school success such as college readiness, English learner reclassification, and suspension rates. It also created an unhealthy incentive for schools to ditch history and science and focus on English and math. And of course, it tried to pin a school down to a single number, which is generally frowned upon in education circles today.

API also received a lot of criticism for not being a growth-based system. Instead, it simply measured proficiency. As a result, a school with low scores that made massive improvements could still look terrible next to a school that made zero improvement but started out high.
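A toy example of that status-versus-growth problem (the schools and numbers here are made up): a pure proficiency snapshot ranks two schools one way, and a growth measure ranks them the opposite way.

```python
# Made-up schools: (name, last year's API, this year's API).
schools = [
    ("School A", 550, 680),  # low-scoring, but improving fast
    ("School B", 810, 810),  # high-scoring, but flat
]

# Status ranking (what the API reported): School B looks better.
by_status = sorted(schools, key=lambda s: s[2], reverse=True)

# Growth ranking: School A looks better.
by_growth = sorted(schools, key=lambda s: s[2] - s[1], reverse=True)

print([s[0] for s in by_status])  # ['School B', 'School A']
print([s[0] for s in by_growth])  # ['School A', 'School B']
```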

As a result, California’s new system, which will debut later this month, takes both proficiency and growth into account. But will it just be another system where people need to “play the game”? And is that even a bad thing? More on that new system next time….

 

One Reply to “How do we rate schools? Part III: The API”

  1. Valerie Braimah says:

    As always, an incisive and clear summary of the API. I would also add that for high schools, I believe the CAHSEE (an 8th grade level test) was also very heavily weighted, and a number of schools gamed their scores by doing lots of CAHSEE prep, thus ensuring their 10th and 11th graders could pass 8th grade material with proficiency. Also not a very high bar. Is that your understanding too?
