Boromike, you are right as far as it goes, I think. In a usual year, marks are normalised against averages across the country: if 10% got an A* last year nationally, then the top 10% of this year's marks define where A* ends and A begins, and so on.
The problem with the algorithm is that it was applied on a school-by-school basis, not as an average across the country. As you say, this unfairly disadvantages a school that performed poorly in recent years, irrespective of how good the class of 2020 is.
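To make the normalisation idea concrete, here is a minimal sketch of percentile-based grade boundaries. All grade shares and mark distributions are illustrative assumptions, not Ofqual's actual figures or method.

```python
# Hypothetical sketch: if 10% of candidates nationally got an A* last
# year, the top 10% of this year's marks define the A* cut-off, and so
# on down the grades. Shares and marks here are invented for illustration.
import random

random.seed(0)

# Illustrative national grade shares from a previous year (best grade first).
GRADE_SHARES = [("A*", 0.10), ("A", 0.20), ("B", 0.25), ("C", 0.25), ("U", 0.20)]

def grade_boundaries(marks):
    """Return the lowest mark earning each grade, so each grade keeps its share."""
    ranked = sorted(marks, reverse=True)
    boundaries, taken = {}, 0
    for grade, share in GRADE_SHARES:
        taken = min(taken + round(share * len(ranked)), len(ranked))
        boundaries[grade] = ranked[taken - 1]
    return boundaries

# A large "national" cohort of simulated marks.
national_marks = [random.gauss(60, 15) for _ in range(100_000)]
cutoffs = grade_boundaries(national_marks)
print(cutoffs)
```

The point of the school-by-school criticism is that the same procedure applied to a class of 20 pins this year's class to last year's shape for that one school, however different the students are.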
I think the disparity could be larger for smaller class sizes, simply because they are a small sample; that could also overestimate pupils' achievements. The smaller the sample size, the bigger the margin of error.
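The margin-of-error point can be sketched numerically: the error in a class average shrinks roughly with the square root of class size, so a class of 20 is about five times noisier than a pooled cohort of 500. The mean and spread below are illustrative assumptions only.

```python
# Minimal sketch: how far a class average strays from the true mean,
# as a function of class size. Figures are invented for illustration.
import random
import statistics

random.seed(1)

def average_error(class_size, trials=2000, mean=60, sd=15):
    """Average absolute gap between a simulated class mean and the true mean."""
    gaps = []
    for _ in range(trials):
        marks = [random.gauss(mean, sd) for _ in range(class_size)]
        gaps.append(abs(statistics.mean(marks) - mean))
    return statistics.mean(gaps)

e20 = average_error(20)    # small class: average error of a few marks
e500 = average_error(500)  # pooled cohort: error shrinks by about sqrt(25) = 5x
print(round(e20, 2), round(e500, 2))
```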
My belief is the algorithm was deliberately constructed to help private schools, and by default disadvantaged the government-funded school system. The decision makers at OfQual may well have kids in private education and were biased, probably intentionally, towards those institutions.
The main thing missed in all this is that what the government have done is illegal; there is no foundation that allows it to happen, other than an 80-seat majority, of course. GDPR specifically forbids this, and does so clearly and unequivocally. The government do not have a legal leg to stand on.
This will result in many legal challenges, and the government will, I think, do a U-turn when they realise the scale of the issue they are facing.
I don't think it was intentional; it comes from a history of only looking at the big picture instead of the detail contained within it. My guess is the conversation only went as far as: make the grades similar to last year's, and assume schools perform the same as they did last year so we don't get inflation/deflation. None of them looked at the detail until it was too late. It's more incompetence than anything. We see the same thing in every government department: GDP is up so it must be good for everyone, when there are hundreds of situations with winners and losers, and similarly it is the usual winners always winning and the usual losers always losing.
There was a semi-simple fix to the current process that would have solved a big chunk of the problems: instead of working on a school-by-school basis, they could have grouped similar-performing schools together. That takes sample sizes from 20 per school to 200, 500 or whatever. Anyone who has ever worked with data knows that applying models to small sample sizes causes all sorts of issues if there is any variability. Grouping schools would have kept the same number of grades while increasing the chance of people landing where they should on the curve. A cohort of 200+ is unlikely to be exceptional across the board, whereas a couple of exceptional students in a class of 20 can make the model useless.
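The grouping argument can be sketched as follows: cap this year's A* count at last year's share, computed either per school of 20 or per pooled group of 10 similar schools. A couple of exceptional students blow past the per-school cap, but a pooled cap can absorb them. The share and sizes below are illustrative assumptions, not the real model.

```python
# Hedged sketch of per-school vs pooled grade caps. Invented numbers.
import random

random.seed(2)

SCHOOLS = 10
CLASS_SIZE = 20
ASTAR_SHARE = 0.10  # last year's national A* share: 2 per class of 20

def deserving_astars():
    """Simulate how many students in one class genuinely merit an A*."""
    return sum(random.random() < ASTAR_SHARE for _ in range(CLASS_SIZE))

classes = [deserving_astars() for _ in range(SCHOOLS)]

# Per-school cap: each class may award at most 10% of 20 = 2 A*s.
per_school_cap = round(ASTAR_SHARE * CLASS_SIZE)
denied_per_school = sum(max(0, c - per_school_cap) for c in classes)

# Pooled cap: 10 similar schools share one cap of 10% of 200 = 20 A*s.
pooled_cap = round(ASTAR_SHARE * CLASS_SIZE * SCHOOLS)
denied_pooled = max(0, sum(classes) - pooled_cap)

print(denied_per_school, denied_pooled)
```

Mathematically the pooled cap can never deny more deserving students than the sum of per-school caps, which is the whole point of grouping.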
I'm not qualified to say whether there are GDPR issues, but my guess is that they didn't use students' names; they used pseudonymous data with certain metrics that produce an outcome. The school performance data comes from the school, not the student. The marking was still done by the exam boards, wasn't it? So it's not really different from them handing out grades the way they do every other year.
The GDPR issue is not to do with personal identifiable information, it is to do with machine assessment of data, from GDPR :
Article 22 of the GDPR has additional rules to protect individuals if you are carrying out solely automated decision-making that has legal or similarly significant effects on them.
It is completely unambiguous and the government have breached the regulation.
I'm guessing the usual fanboys are defending the government from a position of ignorance again; GDPR covers more than just the sharing of info. The government are guilty. As per normal, this government takes immoral/illegal/risky actions and then plays dumb about it later. Stop being fanboys: they clearly broke the law and either didn't know, which (ironically) means they are in jobs beyond their mental capability, or they knew it was illegal and didn't care, which makes them morally reprehensible.
Until Gove's reforms as SoS for Education, when he abolished any form of ongoing continuous assessment or coursework in preference for a single terminal examination, students would have had 60% of their final grade already known, so finding a solution to this unimaginable situation would have been a little easier. Gove insisted on heading back to a single test being the only way to measure five years of education in a secondary school and two years in a sixth form, so other than centre-assessed grades (teacher assessment) we have nothing formal to rely upon. The initial teacher assessments for A Levels did show a 12% increase on 2019, which the government baulked at, but instead of looking at it sensibly they lurched to the right and punished the most disadvantaged. Simply shocking.
They would surely argue that it isn't solely automated: it is based on the school's internal ranking of the students. If they had been given a list of students with no other metric and just randomly handed out grades, that would be a breach, but they have the ranking, which comes from the school. It's different numbers, but the process isn't much different: receive data, apply model, distribute data.
The algorithm used the rankings only as a starting point; all the analysis was done by the algorithm. Interestingly, the legislation was passed to protect people applying for jobs whose applications could be discarded programmatically by scanning, for non-English-sounding names, for example. It applies equally in this scenario, as the grades were based only on the algorithm, with no one doing a check after the algorithm ran. Had the grades been scrutinised by a human reviewer after the algorithm, they would not have been in breach of GDPR. They weren't, so they are.