Is the Core Index correlated to the API score?

One of the critiques of the API score, used for over 10 years, was that it ranked schools based solely on test scores. In response, the new Core Index score is only 60% based on test scores, while 40% is based on other factors like English Learner progress, absenteeism, and suspensions. Yet, as I have noted in previous posts, the richer, whiter schools still tend to rise to the top of the Core Index. Even though the Core Index attempts to level the playing field with measures that account for the least represented groups on each campus, the metric doesn't seem to be doing so.
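As a toy illustration of that 60/40 weighting (the function name and the 0-100 scales are assumptions for this sketch, not the state's actual formula):

```python
# Hypothetical sketch of a 60/40 composite index. The real Core Index
# computation is more involved; this only shows the weighting idea.
def composite_index(test_score, other_score):
    """Combine a 0-100 test score and a 0-100 'other factors' score
    using the 60/40 split described above."""
    return 0.6 * test_score + 0.4 * other_score

# A school strong on tests but weak on the other measures:
print(composite_index(90, 50))  # 0.6*90 + 0.4*50 = 74.0
```

Because tests still carry the majority weight, a school's test performance can dominate the composite even when the other measures lag.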

Well, if that is the case, then how close is the Core Index score to the old API score? Below is a scatterplot with the API scores on the X-axis and the Core Index scores on the Y-axis. You can see a clear positive association between the two sets of scores, meaning that schools that did well on one tended to do well on the other:

Yet graphics can be tricky, so I ran the data through a regression, factoring in ethnic percentages, socio-economic status percentages, English Learner percentages, and special needs percentages.

[Image: regression output]

Out of all the categories, three are highly predictive of the Core Index score: the Students with Special Needs population, the English Language Learner population, and the school's previous API score.
For those three categories, the p-values were low enough to reject the null hypothesis, meaning the observed relationships would be extremely unlikely if there were no real association. Many disagree on how to interpret this kind of data, but a p-value below .05 is generally accepted as significant, and for the English Learner population and the API score the p-values are extremely low: .000000001 and .0000003, respectively.
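The decision rule here is simple enough to spell out in a few lines. A sketch, where the two tiny p-values come from the post but the other two are invented placeholders (the post only says special needs is significant and doesn't report a value for socio-economic status):

```python
# Apply the p < .05 significance threshold to each predictor.
# Only the first two p-values are from the post; the last two are
# assumed for illustration.
p_values = {
    "English Learner %": 0.000000001,  # reported in the post
    "Prior API score":   0.0000003,    # reported in the post
    "Special Needs %":   0.01,         # assumed (post: significant)
    "Socio-economic %":  0.30,         # assumed (post: not listed)
}

ALPHA = 0.05
significant = [name for name, p in p_values.items() if p < ALPHA]
print(significant)
```

With this threshold, the three predictors the regression flagged survive, and the placeholder non-significant one drops out.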

Ok, enough statistics mumbo-jumbo. Essentially, this all says that the Core Index is correlated with the old API score. This presents two possibilities. The API may have been a good measure of school success, and so is the Core Index, and their correlation supports their reliability. On the other hand, it could be that neither score is really valid, and both are just reflecting other things, like English Learner and Special Needs populations.

Either way, I find it concerning that we are spending so much time and money switching systems around only to get a very similar result. I'd like to see scores that are more relative. I believe that schools are given inputs (a student population shaped by the racial and socio-economic inequalities of the system) and produce an output (the education level of the student after attending the school). But when schools are given vastly different inputs, it shouldn't be a surprise that their outputs reflect those inequalities. I would like to see a score based on both the inputs and outputs of schools. See my FO'REAL score for my attempt to do just that…

***Update: Since I posted this, it occurred to me that I haven't controlled for the different levels of schools (elementary, middle, high), even though they all get the same numerical score. I am working on an update for that, but let's just say, controlling for school level makes the model significantly more predictive.
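The standard way to control for a category like school level in a regression is to add dummy (indicator) variables. A sketch of the encoding only; the actual re-run model is the update in progress, not shown here:

```python
# One-hot encode school level, dropping 'elementary' as the baseline
# so the dummies aren't collinear with the intercept.
def level_dummies(level):
    return {
        "is_middle": 1 if level == "middle" else 0,
        "is_high":   1 if level == "high" else 0,
    }

print(level_dummies("middle"))
print(level_dummies("elementary"))  # baseline: both dummies are 0
```

Each dummy lets the model fit a separate intercept per school level, so elementary, middle, and high schools are no longer forced onto one shared baseline.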