Why the Rare Contextual Recruitment System (CRS) does not adjust grades
We have recently received a number of queries about adjusting grades as part of a contextual recruitment process. We do not adjust grades within the Contextual Recruitment System (CRS); instead, we present all the relevant information to recruiters so that they can make their own decisions. This is for a number of reasons that we explored with our Contextual Recruitment Working Group when developing the system close to a decade ago, and that we have continued to review since. This note outlines those reasons in some detail.
Before outlining our reasons for not adjusting grades, it is helpful to reflect on the aims of the Contextual Recruitment System. The CRS aims to help employers put candidates' achievements in context by a) identifying candidates who have experienced significant academic, socioeconomic and personal disadvantage, and b) identifying candidates who have outperformed their peers in their year group at their school. Together, these two measures enable users of the CRS to identify disadvantaged students who have outperformed their peers. Independent research reports have shown that this group of disadvantaged outperformers goes on to perform well in employment, and we are keen to help our users identify them as efficiently as possible.
The first reason we do not adjust grades is that top grades cannot be adjusted up: a candidate who has already achieved the highest possible grades cannot have them raised, however disadvantaged they are. For example, imagine a candidate eligible for Free School Meals who has achieved A*A*A* at a bottom-10% school where the average grades were DDD. It is not possible to adjust their grades up without inventing a new type of grade or awarding a qualification they did not in fact take, both of which would be hard to justify. For employers using the CRS, this candidate is easily identified as a disadvantaged outperformer: they would have three flags (one socioeconomic flag and two academic flags) and a PI of 67%. A system that adjusted the grades of disadvantaged candidates would in fact obscure this A*A*A* candidate's brilliance. By adjusting up the grades of other students who have experienced disadvantage, such a system would fail to highlight the exceptional outperformance of students who were similarly disadvantaged but had already achieved the best grades possible. Our logs of over one million applications to employers using our systems show that 10% of applicants from England have achieved A*A*A*, so this scenario is highly relevant to employers keen to identify top-performing candidates from all backgrounds.
Outperformance as measured by the CRS's Performance Index (PI) is a piece of information that can be easily explained and justified. It is calculated transparently: we take the candidate's grades, convert them into points using the government-specified points systems, and compare the result with the average points per A Level in the candidate's school or college in the year in which they studied. Other than ensuring consistency across years, we do not make any judgement calls to produce this metric; we leave those judgements to the Department for Education (DfE). Producing the metric does, however, involve substantial work, as we maintain an annually updated database of schools across all four nations of the UK to ensure that each candidate's performance is compared with that of their peers. We remain committed to this detail-oriented approach because it gives users of the CRS confidence that they are identifying outperformers on the basis of accurate, publicly available information that can be easily explained if questioned.
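To make the mechanics concrete, here is a minimal sketch of such a calculation. It assumes the DfE 16 to 18 performance points for A Levels (A* = 60 down to E = 10) and defines the PI as the candidate's lead over the school average, expressed as a percentage of the candidate's own average points per A Level. That definition is our illustrative assumption rather than the exact CRS formula, though it does reproduce the 67% figure from the worked example above.

```python
# A minimal sketch of the points conversion and Performance Index (PI)
# calculation described above. Assumptions (not the exact CRS formula):
#  - DfE 16-18 performance points for A Levels: A* = 60 down to E = 10;
#  - PI = the candidate's lead over the school average, expressed as a
#    percentage of the candidate's own average points per A Level.

DFE_POINTS = {"A*": 60, "A": 50, "B": 40, "C": 30, "D": 20, "E": 10}

def performance_index(grades: list[str], school_avg_points: float) -> float:
    """Return outperformance as a percentage of the candidate's average."""
    candidate_avg = sum(DFE_POINTS[g] for g in grades) / len(grades)
    return 100 * (candidate_avg - school_avg_points) / candidate_avg

# The A*A*A* candidate at a school averaging DDD (20 points per A Level):
print(round(performance_index(["A*", "A*", "A*"], 20.0)))  # -> 67
```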
The CRS was designed to help its users increase the speed and efficiency with which they assess applicants in context, whilst enabling them to make nuanced decisions that reflect their priorities. 49% of candidates applying to employers through our systems do so with AAA or better. The CRS allows users, for instance, to sort all of their AAA+ candidates by outperformance and/or disadvantage. This allows for both nuance and speed, as users retain the freedom to make decisions based on the information in front of them. For users for whom a specific level of attainment at A Level is important, such as those who use grade cut-offs, it also allows them to assess applications on the level of attainment a student actually achieved. A system that adjusts grades risks losing this nuance and freedom of decision.
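As a hypothetical illustration of that workflow, the sketch below filters a list of applicants to those with AAA or better and ranks them by disadvantage flags and PI. The record fields and the AAA+ grade set are invented for this example; they are not the CRS schema.

```python
# A hypothetical illustration of the sorting workflow described above:
# filter to AAA+ applicants, then rank by disadvantage flags and PI.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    grades: str   # e.g. "AAA", "A*A*A*"
    flags: int    # socioeconomic + academic disadvantage flags
    pi: float     # Performance Index, as a percentage

AAA_PLUS = {"AAA", "A*AA", "A*A*A", "A*A*A*"}

applicants = [
    Applicant("Candidate 1", "AAA", flags=0, pi=5.0),
    Applicant("Candidate 2", "A*AA", flags=2, pi=40.0),
    Applicant("Candidate 3", "A*A*A*", flags=3, pi=67.0),
]

# Most-flagged, highest-PI outperformers first.
shortlist = sorted(
    (a for a in applicants if a.grades in AAA_PLUS),
    key=lambda a: (a.flags, a.pi),
    reverse=True,
)
for a in shortlist:
    print(f"{a.name}: {a.flags} flags, PI {a.pi:.0f}%")
```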
Finally, justifying adjustments is notoriously hard. Reclassifying published grades, which rest on mark schemes and complex grade-boundary-setting and norming processes, would require a very high level of confidence that the longitudinal data informing the decision justifies the adjustments the algorithm does and does not make. Because this data would inform job-offer decisions, it would need to be possible to explain why, for example, one candidate eligible for Free School Meals had their grades adjusted upwards whilst another did not. It would also need to be possible to explain to an A*A*A* candidate on Free School Meals why peers who experienced similar circumstances were awarded a grade adjustment whilst they were not (because, as noted above, no such adjustment is possible for them).
This strikes us as very difficult to do, especially on an annual basis. Covid, for example, produced temporary grade inflation through teacher-assessed grades. Grades are returning to 2019 levels this year, which has required an extensive process and communications strategy from the DfE and its associated awarding bodies. Accounting for this fairly within a system that adjusts grades would be very challenging to explain and justify to candidates, especially given that the government itself struggled to design a fair way to assign grades during the pandemic: it had to change its methodology twice in response to public uproar.
Some of our clients have grade requirements; some are not interested in absolute grades at all. But they all want to know who the extraordinary outperformers, the high-PI candidates, are. Our system tells them that. Simply adjusting grades (or being unable to adjust them because a candidate has already achieved the very highest grades) does not.
About the Author

Naomi Kellman is Rare's Head of Research. She studied Philosophy, Politics and Economics at the University of Oxford. On graduating in 2011, she joined diversity specialist Rare, where she founded Target Oxbridge, a programme that has helped over 400 Black African and Caribbean students secure Oxbridge offers and currently supports hundreds of students a year. She also designed Target Bath, a programme that supports 50 students of Black heritage in applying to the University of Bath each year. Naomi spent 2012 to 2015 working on education policy at the Department for Education and the Treasury, and served three years as a secondary school governor. She co-founded the BAME Fast Stream Network and the Oxford Black Alumni Network, and has appeared on BBC News, Channel 5 News, ITV News and Sky News to discuss diversity in education and recruitment. She is a Trustee of The PRIME Commitment, a member of the University of Oxford History Faculty's External Advisory Panel, and was named Manager of the Year at the 2021 Inspiring Women in Business Awards.