Later this month, California will unveil its new school rating system, the California School Dashboard (which I’m going to call the CSD). It is a very different approach to rating schools than the old system, the API, which reigned supreme in our state for over a decade.
Here are the basics of how it works:
There will be 7 “state indicators” and 4 “local indicators” for each school. The state indicators get a color-based rating, while the local indicators get a “met” or “not met.” Essentially, when you look at the dashboard, it will look something like this:
That’s right – the CSD will not have a single number describing school success. Your school will no longer be able to say “we grew 100 API points in a year,” and it will be very hard to say “we have the highest performing school in our area.”
Instead, schools are going to be clamoring to “get to green” (or blue) in all the indicators. So, what do those colors mean?
Ok, that thing is a bit hard to read. And the color scheme is just so…Windows 98.
The new system attempts to balance proficiency and growth. A school’s rating depends on how well its students perform, but also on how its performance has changed year over year.
So here is an example of how to read it. If your school is “low” on an indicator, but has “increased” over last year, you would get a yellow, like this:
This has some interesting outcomes: For example, schools that are “Medium” in performance but “Increasing Significantly” can now get a high rating. The theory here is that schools should be given accolades for showing improvement, not just for being a strong performer.
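The combination logic above can be sketched as a simple lookup table. To be clear, the state’s actual grid is a full five-by-five matrix that varies by indicator; the handful of entries below are hypothetical, chosen only to match the examples in this post (low status plus an increase lands at yellow, and a medium school that increases significantly earns a high rating).

```python
# Illustrative sketch of how the CSD combines "status" and "change" into a
# color. The real lookup table is the state's; these entries are made up to
# mirror the examples discussed in the post.
COLOR_GRID = {
    ("Low", "Increased"): "Yellow",                   # example from the post
    ("Medium", "Increased Significantly"): "Green",   # improvement rewarded
    ("Low", "Declined Significantly"): "Red",
    ("High", "Maintained"): "Green",
    ("Very High", "Increased"): "Blue",
}

def rating(status: str, change: str) -> str:
    """Look up the color for a (status, change) pair."""
    return COLOR_GRID.get((status, change), "unknown")

print(rating("Low", "Increased"))                   # Yellow
print(rating("Medium", "Increased Significantly"))  # Green
```

The point of the two-key lookup is that neither status nor change alone determines the color – a school can be mediocre today and still rate well if it is clearly improving.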
Furthermore, only 3 of the 11 indicators are based on state tests. Locally influenced measures, like suspension rates and chronic absenteeism, now have a place at the table in the rating system.
And as if the CSD needed to put more space between itself and the API, it also dispenses with the API’s old emphasis on performance level bands. It no longer matters whether a student has “met” or “exceeded” standards. What matters is how far the student’s numerical score is from the level three cutoff. They call this new measure the DF3 (Distance from 3) because it is simply the distance a student’s score is from a level three score. A score can be positive or negative.
To see how this plays out, compared to the old system, let me give you an example:
The only difference between the two classes is Isaac’s score. In the old system, his score change would not matter because he was still in the “basic” level. But in the new system, which doesn’t rely on proficiency levels, his score suddenly matters.
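An example like Isaac’s can be sketched numerically. The level-3 cutoff varies by grade and subject, so the 2500 below is a made-up stand-in, and all the student scores are hypothetical; the only thing that matters is that one student improves while staying below the cutoff.

```python
# A hedged sketch of the DF3 ("Distance from 3") idea. The cutoff and the
# scores are hypothetical; the point is that growth below the level-3 line
# still moves the class average under DF3.
LEVEL_3_CUTOFF = 2500  # made-up stand-in for the grade-level cutoff

def average_df3(scores):
    """Average signed distance of each scale score from the level-3 cutoff."""
    return sum(s - LEVEL_3_CUTOFF for s in scores) / len(scores)

class_before = [2450, 2480, 2530]  # "Isaac" scores 2450 (below level 3)
class_after  = [2470, 2480, 2530]  # he improves to 2470, still below level 3

print(average_df3(class_before))  # about -13.3
print(average_df3(class_after))   # about -6.7
```

Under the old proficiency-band logic, both classes look identical (two students below level 3, one above); under DF3, the second class scores visibly better.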
This is one of the many priority shifts created by the new rating system. In my opinion, there are five essential priorities that the CSD rating system conveys to schools:
- All student growth matters equally. While the old system put extra value on pushing the lowest performing students higher, the new system values each student’s growth equally. This could be good, but it could also be problematic. For example, a school could (in theory) boost its score by focusing only on its highest students instead of its lowest ones.
- Suspensions are generally bad. The new system effectively penalizes schools for suspending a lot of students. Schools that want to score high on the suspension indicator are going to have to find alternative methods of discipline.
- Get your kids to come to school. Absenteeism has long been public data, but it was never this prominent; now it is the first thing you see (because A comes first alphabetically). This effectively encourages schools to be much more aggressive with the SARB process.
- Ratings are fleeting. Scores could swing 4 or 5 levels in just one year. Imagine a school that has a red rating one year because it has low status and a significant decline. The next year, it could improve significantly and rise to medium – that would put it at green. The year after, it could decline significantly, land back at low performance, and bounce right back to red.
- The state no longer wants to rank schools, just rate them.
This last one is probably the biggest change. As I noted in the last post, I would argue that the API was a major factor in the rise of charter schools in California. Schools could clearly argue that they performed stronger than others. But the CSD system is much more murky. Schools are not ranked, they are simply rated.
Which brings me back to why I wrote this whole series in the first place. I look at this new system, and I wonder – will parents use it? Are parents willing to sift through 11 very complex data indicators and balance their choices across all of them? Or will they revert to Niche.com and greatschools.org, which give a numerical rating?
This is not to disparage parents at all – I am a parent, and there is just way too much information out there. I get analysis paralysis just trying to pick stuff for my kid on Amazon – and I end up buying whichever product has the highest customer rating.
You’re so right that API wasn’t all that useful to parents, though it was often all we had to go on.
The real question parents always want to know the answer to is “how will MY child do at this school?” If you’ve got a struggling learner in a Chinese immersion program where teachers mock kids with low scores (I’ve seen it happen), then it’s not a good school for that kid, while a kid with a knack for memorization and languages could thrive there. A school that really focuses on literacy is great if you’ve got a kid who comes in not reading, but would be bad for a kid who arrives in Kindergarten already reading at a 2nd grade level and spends Kindergarten bored while they work on Alpha Buddies.
We ask our schools to meet the needs of every single student in every class, which is a lot to ask. Parents are trying to figure out what differences make one school a better fit for their kid than another. I can’t quite tell how the new scoring will help them do that any better than the old API did. Any thoughts on how parents can tease that out?
I’m curious about the notion that accelerating strong learners might come at the expense of weaker students. This feels like a straw man but I suppose stranger things have happened. Are there reports about this kind of circumstance in the academic education literature?
I think a rebalance between growth and proficiency is long overdue for public schools. ALL CHILDREN deserve an appropriate, holistic, challenging and nurturing education, regardless of their test results in a given year or over time.
Thanks for this! Good distinction between rank and rating. My first question when I was a common core coach at PUC was: how will API work now? No one could answer me then, but I guess this is it. Seems like a change for the better, but I agree, it will make it more difficult to make sense out of for parents/others looking for a quick answer.