Saturday 12 March 2011

More MySchool - just to annoy Cav

The theme for this evening is - chuck around a few hypotheses and see what sticks. I was always useless at defining a null hypothesis way back in my school days, so I'm not even going to try to do that now. I always buggered it up. I've spent the afternoon pulling down more stats and combining them to see what falls out.

Big caveat up front - this covers a limited number of primary schools in NSW. This is interesting stuff, but I'd need access to lots more data before staking my life on the results.


First idea - is there a relationship between the ratio of teachers to students and the amount of funding per student?

Answer seems to be "yes". I would expect that to be true, since teachers' salaries are the biggest cost for schools. The R-squared is 0.688, which makes it a reasonably good fit. Part of the remaining variation could be explained by differences in the mix of teacher salaries across schools - that is, one school might have 10 newbie teachers on $50k, and another might have 10 old farts on $80k. If they have the same number of students, the teacher/student ratio will be the same, but the second school will need 60% more funding.
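
If you want to replay this at home, here's a rough sketch of the sort of fit I'm doing. The file and column names (funding_per_student, student_teacher_ratio) are made up for illustration - the real MySchool extract is nowhere near this tidy:

```python
# Rough sketch - assumes a hypothetical schools.csv with columns
# 'funding_per_student' and 'student_teacher_ratio' (names are invented).
import pandas as pd
from scipy import stats

df = pd.read_csv("schools.csv")

# Ordinary least-squares fit of the ratio against funding per student
fit = stats.linregress(df["funding_per_student"], df["student_teacher_ratio"])

# R-squared is just the square of the correlation coefficient
print("slope:", fit.slope)
print("R-squared:", round(fit.rvalue ** 2, 3))
```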

Just remember this - with state schools, over 80% of the bills are paid centrally. The schools never see much money - head office pays the staff, pays the maintenance bills, buys the computers etc etc. The numbers that have been released by ACARA are the result of the head office bureaucrats at the DET taking about $9 billion worth of expenditure and then trying to work out how to divvy it up between a few thousand schools.


I then split the stats for Catholic and state schools. Both show a very strong relationship between funding per head and student/teacher ratios.
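
Same trick, just run per sector - a sketch assuming the same hypothetical spreadsheet also carries a sector column:

```python
# Sketch: repeat the fit per sector, assuming the hypothetical schools.csv
# also has a 'sector' column ('Catholic' or 'Government').
import pandas as pd
from scipy import stats

df = pd.read_csv("schools.csv")
for sector, grp in df.groupby("sector"):
    fit = stats.linregress(grp["funding_per_student"], grp["student_teacher_ratio"])
    print(sector, "R-squared:", round(fit.rvalue ** 2, 3))
```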


However, we have to be careful when interpreting this - which is driving what? Remember, funding for schools is producer driven, not consumer driven. It's a chicken and egg question - does a school have a lower student/teacher ratio because it gets more funding, or does it get more funding because historically it had a low student/teacher ratio? Budget setting in the public sector is a strange beast - the amount of money you get can be based on the most illogical reasoning at times.


Moving on - is there a relationship between the size of the school (number of students) and the NAPLAN scores? Is a small school better than a big school?

The graph above suggests that it makes no difference whatsoever. Big or small - doesn't matter. It might make a difference to the range of subjects you can study, but for NAPLAN results - zip.

However, I have coloured the data points to differentiate rich areas from poor areas. Blue are wealthy, brown are poor. There's a pretty clear split - the wealth and educational background of the parents count for an awful lot more.
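
For the curious, here's roughly how that colour split gets done. Again, the column names and the ICSEA cutoff of 1000 (which is about the national average) are my assumptions for illustration:

```python
# Sketch: school size vs NAPLAN total, coloured by ICSEA.
# Column names and the cutoff of 1000 are assumptions, not the MySchool format.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("schools.csv")
rich = df["icsea"] >= 1000   # ICSEA is scaled so roughly 1000 is the national average

plt.scatter(df.loc[rich, "enrolment"], df.loc[rich, "naplan_total"],
            c="blue", label="wealthier areas")
plt.scatter(df.loc[~rich, "enrolment"], df.loc[~rich, "naplan_total"],
            c="brown", label="poorer areas")
plt.xlabel("Number of students")
plt.ylabel("Total NAPLAN score")
plt.legend()
plt.show()
```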

Why do bureaucrats like big schools? Because it's easier to deal with a small number of big entities than a large number of small ones. They'll claim of course that bigness brings the benefits of scale, but the only things that I have ever seen scale well in a bureaucracy are stupidity, ineptness and bloodymindedness.


I then took the NAPLAN scores for a bunch of schools and looked at the performance of their grade 3 cohorts in 2008 and those same cohorts in grade 5 in 2010. I wanted to see how their scores improved over that period across a range of schools. Chatswood is a wealthy area - the average NAPLAN total went up 425 points. Why did some schools jump further than others? Has money anything to do with it?


Just to show you what I'm doing, here are 5 schools from Chatswood. The rhomboid at the bottom is their grade 3, 2008 NAPLAN score. The square up above is the NAPLAN score for that same class when it was tested in grade 5 in 2010. Along the bottom axis is funding per student. Did having more money produce a bigger jump in grades? It did for one school, but not the rest.


Here's the same schools, but with the data presented a different way. I've shown the student/teacher ratio on the bottom axis this time. The further a school is to the left, the better the student/teacher ratio (ie, supposedly smaller classes). The school with the biggest jump (in purple) had a low ratio (small classes) - however, the 2nd biggest jump was the blue school, which had a higher ratio. The rest were all the same - didn't matter if they had smaller classes or bigger classes, the improvement was the same.

The green school and the purple school both started at low NAPLAN scores, and both had small class sizes. The purple school leapfrogged ahead of two of the others with larger class sizes, but the green school didn't. Small class sizes might be beneficial - or they might not. Anyway - you need to crunch lots more than five schools to get an answer.


Which is what I did. I looked at how much the NAPLAN score went up for 70 schools, and then compared that to the student/teacher ratio. According to the R-squared, there is no relationship whatsoever. It's a paltry 0.03.
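
For the record, the mechanics look something like this - two extracts keyed by school, stitched together, with the jump regressed against the student/teacher ratio. All file and column names are invented for the sketch:

```python
# Sketch: jump from grade 3 (2008) to grade 5 (2010) per school, regressed
# against the student/teacher ratio. File and column names are all assumed.
import pandas as pd
from scipy import stats

g3 = pd.read_csv("naplan_2008_grade3.csv")    # columns: school, naplan_total
g5 = pd.read_csv("naplan_2010_grade5.csv")    # columns: school, naplan_total
ratios = pd.read_csv("schools.csv")           # columns: school, student_teacher_ratio

merged = g3.merge(g5, on="school", suffixes=("_g3_2008", "_g5_2010"))
merged = merged.merge(ratios[["school", "student_teacher_ratio"]], on="school")
merged["jump"] = merged["naplan_total_g5_2010"] - merged["naplan_total_g3_2008"]

fit = stats.linregress(merged["student_teacher_ratio"], merged["jump"])
print("R-squared:", round(fit.rvalue ** 2, 3))   # came out at a paltry 0.03 for my 70 schools
```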


If you look at the percentage increase in NAPLAN scores vs the student/teacher ratio, you get a slightly better R-squared of 0.15, but this still means there is next to no relationship there at all.


I then split out the Catholics and the state schools, just to see if there was any difference. With the Micks, it's next to nothing.


State schools get a slightly improved relationship, but an R-squared of 0.23 still means that there is two thirds of bugger all correlation.


Here's where things could get interesting. The blue dots are the grade 3 NAPLAN scores from 2008 for schools in a rich area and a poor area. Along the bottom axis is the ICSEA score - ACARA's measure of socio-educational advantage (essentially the wealth and education of the families). There's a pretty clear pattern - schools in poor areas get much lower NAPLAN scores than schools in wealthy areas. The difference between the worst performing school and the best is 682 points - 1783 vs 2465. The schools in wealthy areas start out with an advantage of up to 38%. The difference between the averages across the two groups is about 14%.

The brown data points are the grade 5 NAPLAN scores two years later for those same schools. The same pattern appears - the rich schools stay ahead, but the average gap shrinks to 9%. Shockingly though, the lead of the best school over the worst stays at around 36%.
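
Those gap figures are nothing fancier than percentage differences - here's the arithmetic behind the 38% figure:

```python
# Sketch: the gap percentages quoted above, straight from the chart figures.
def pct_gap(high, low):
    return 100 * (high - low) / low

print(pct_gap(2465, 1783))   # best vs worst grade 3 score in 2008 -> roughly 38%
```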


I've circled the lowest performing school from 2008.

Have a look at where its students get to by grade 5. Compare it to the rich-area schools circled in green on the right. That green circle is the rich schools' scores from grade 3. The kids in the worst school in the poor area, by the time they hit grade 5, are performing at the level of the worst performing rich-area schools - from grade 3! Not only are they two years behind, but they're even at the back of that pack!

PS - remember, these are all state schools and low fee Catholic schools. What you're mainly seeing here is the difference in performance between a set of state schools in a rich area and a set of state schools in a poor area. In case you are wondering about funding, the poor state schools got an average of $9,498 per student and the rich state schools got an average of $8,480 per student - about a thousand bucks less (or the poor schools got 12% more). The best funded poor school got $10,894 per student - the lowest funded rich school got $7,057 (so the best funded poor school got about 54% more than the worst funded rich school).

That rich school with the least money - only two schools out of this entire group got higher NAPLAN scores. The school with the highest NAPLAN score (in a rich area naturally) actually got the second lowest funding allocation out of all of them.

Anyway, that's just a taste of the sort of number crunching that can be done with these stats. ACARA, being a bunch of bastards, have made it as difficult as humanly possible to extract the numbers to do this sort of analysis, so it's both time consuming and shallow.

More posts on MySchool:







Damn, stuffed up the numbering.

3 comments:

1735099 said...

BOAB
Love your work. Having said that, it reinforces some beliefs I've held for years -
1. There is a relationship between SEL and school performance.
2. No matter how much money you throw at the problem - it makes little difference.
3. The quality of the school community is significant. Shared values provide the glue that holds these communities together.
The other major factors (not measurable using NAPLAN data) are school leadership and teacher effectiveness.
When you've worked out a way of analysing these last two factors mathematically, you'll be able to patent the process and sell it, and retire in style.......

daddy dave said...

The first two charts don't tell us much... just that smaller classes mean that education is more expensive per student. The rest are pretty interesting.

Particularly interesting is the fact that there's no correlation between school size and performance; and that there are huge performance differences across rich versus poor schools.

Boy on a bike said...

I put in the first two graphs because I wanted to test the idea that smaller classes mean better results - because small classes cost lots of money. Small classes mean we can't increase pay for teachers - we can either have more teachers, or higher paid teachers, but not both.

We've gone the "more teachers" route - the question is this; is that route worth it? Can the numbers demonstrate whether it is worth it or not?