In the whole of this sorry mess which is the exam debacle in England nobody, to my knowledge, has mentioned the technical aspects of assessment, namely norm-referenced and criterion-referenced examination systems. From what I have read and heard, the June exams have basically been norm referenced, i.e. a fixed percentage of the total cohort of examinees is allocated to each grade, so the grade pass mark will go up and down depending on the difficulty of the exam and the severity of the marking. But in every year, for example, the top 10% of the marks awarded will be allocated to A grade, and so on.
However, it must also be apparent that in previous years the exams have been criterion referenced. In other words, if somebody sitting the exam gets a certain set of the questions correct, an A grade is awarded, and similarly for each grade. No account is taken of the numbers achieving any grade. This must have been the case if more and more pupils have been getting higher grades over the last 20 or 30 years, as has been alleged; norm referencing would not allow this to happen. For criterion referencing to be fair from year to year, a great deal of trouble and effort must go into setting very comparable exams year after year in terms of the criteria to be used, and the marking must be very carefully moderated. In the case of the exam boards in England this has not been happening, probably for commercial reasons.
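The distinction between the two schemes can be sketched in a few lines of code. The quota percentages and mark thresholds below are invented for the illustration, not the boards' actual figures:

```python
def norm_referenced(scores, quotas=(("A", 0.10), ("B", 0.25), ("C", 0.65))):
    """Norm referencing: grade by rank within the cohort.
    The top 10% of scripts get an A, the next 25% a B, and so on,
    regardless of the raw marks achieved. Returns grades in the
    same order as the input scores; anyone outside the quotas is U."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    grades = ["U"] * len(scores)
    start = 0
    for grade, fraction in quotas:
        end = start + round(fraction * len(scores))
        for i in order[start:end]:
            grades[i] = grade
        start = end
    return grades


def criterion_referenced(scores, thresholds=(("A", 80), ("B", 65), ("C", 50))):
    """Criterion referencing: grade by fixed mark boundaries,
    taking no account of how many candidates reach each grade."""
    def grade(score):
        for g, cutoff in thresholds:
            if score >= cutoff:
                return g
        return "U"
    return [grade(s) for s in scores]
```

With the same cohort of marks, the criterion-referenced scheme can hand out as many A grades as clear the boundary, which is exactly how grades can drift upwards year on year; the norm-referenced scheme caps each grade at its quota no matter how well the cohort performs.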
To me it is obvious that the goalposts have been changed this year, but the norm referencing has only been applied to the June results. This is effectively what was said this morning by the Ofqual regulator speaking on Radio 4, though she did not use the technical term.
The case for unfairness is not between the January and June results but between the June results and previous years, when little if any norm referencing was applied. The goalposts have been changed between 2011 and 2012, not within 2012.