My Loss is Your Learning Gain

Liz Morrish discusses some new ways the Conservative government will seek to assess and rank universities. ‘Learning gain’ is about to be ‘a thing’.


It is just over two weeks after the General Election, and our thoughts turn to the prospect of more cuts in public spending, a new leader for the Labour Party, some uncertainty over Brexit and the referendum on EU membership, and, post-UKIP, a somewhat muted dialogue over immigration. But what lies in the future for higher education? Have you been paying selective attention over the months leading up to the election? A tuition fee cut may have lodged in your memory, but that was Labour Party policy, and we can forget that now. What does a Conservative government have planned for universities? We know that abolition of the cap on student numbers was already in the offing, as was a national postgraduate loan system for taught master's and PhD courses. That is not the interesting bit. The game-changer is encoded in this section:

“We will ensure that universities deliver the best possible value for money to students: we will introduce a framework to recognise universities offering the highest teaching quality… and require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”.

Value for money: As students pay higher fees, they have been demanding more classroom time, better feedback and higher-grade campus facilities. These have largely been delivered. The focus is now shifting to what the government recognises as value for money – are students learning anything worthwhile? For the last twenty years, as universities have come under greater pressure to justify (especially non-vocational) courses, we have been speaking the language of generic or ‘transferable’ skills. At my university, we make the claim that graduates will be able to demonstrate attributes such as information, organisational and communication skills, and also ‘intellectual agility’, defined as “aptitude for independent, critical thought and rational inquiry, alongside the capacity for analysis and problem-solving in multiple contexts” [NTU Strategic Plan 2010-2015 p. 10]. Now, it looks as if we will have to bolster that claim with actual hard evidence. All in the name of accountability – a call which never fails to extend its dominion.

A framework to recognise universities offering the highest teaching quality: This is what has been flagged as the Teaching REF. The good news is, at last we have found a use for learning outcomes. It was probably inevitable that, as soon as we were obliged to position a degree course as merely an accumulation of learning outcomes, and higher education as a set of generic skills, one day this would be called to account. Even better news, demonstrating “the highest teaching quality” is likely to shine a spotlight on students, not on academics, who usually occupy the panopticon. The generation of students who have become used to earning reputational credit for their primary and secondary schools via SATs may now be called forth in the service of their universities. And so, step forward the concept of ‘learning gain’. Simply put, can our graduates demonstrate the nominated transferable skills to a greater extent than those without the benefit of higher education?

There has already been progress made towards measurement of these generic skills, and two parties engaged in this discussion are the OECD (Organisation for Economic Co-operation and Development) and HEFCE (Higher Education Funding Council for England).

The OECD’s AHELO (Assessment of Higher Education Learning Outcomes) has been piloted. The long-term intention is to assess subject learning outcomes, but so far, this has only been attempted for engineering and economics. Generic skills are better candidates for assessment, not least because there already exists an ‘instrument’ to do this: the Collegiate Learning Assessment (CLA). This test aims “to evaluate the critical-thinking and written-communication skills of college students. It measures analysis and problem-solving, scientific and quantitative reasoning, critical reading and evaluation, and critiquing argument, in addition to writing mechanics and effectiveness.” It uses scenario-based problems and requires students to marshal evidence, and then evaluate the risks or merits of particular solutions or options. It is assumed that this will reflect on “instructional effectiveness” and enable a better understanding of “the teaching-learning interplay”.


Whether or not we will adopt this particular testing mechanism in the UK, as a result of the Tories’ demand for accountability, is not yet clear. The CLA is one of the options being considered, but HEFCE is also currently consulting on the concept of ‘learning gain’.

“We wish to build better ways of capturing excellent educational outcomes, including new approaches that measure students’ learning gain, and of refining existing indicators of students’ learning experiences and progression to employment or further study”.

The executive summary of the OECD report details some caveats. It mentions a risk that AHELO data could be used as yet another ranking tool, and emphasises that this was never the intention. I think, though, we can write the script on this one if (when) ‘learning gain’ assessment goes ahead. It will, of course, be employed in a culture inflected by neoliberal politics which requires markets, competition and hierarchies. It can only end one way. And so we can anticipate the second fear listed in the report – that the assessment will become the basis for (re)allocating resources, either from research to teaching, or from apparently unsuccessful universities to those with more ‘intellectually agile’ students. The only silver lining to this cloud is that I suspect many of the most intellectually agile, creative students may be found outside the more favoured universities.




No sooner has the largesse of REF2014 been promised than invitations to the next UK academic Olympiad are being issued. The prospect of REF2020 has unleashed a new university obsession with rankings and evaluations of research. In many departments, researchers are undergoing a process of continual audit. Every grant application is vetted; every publication is read by internal, and perhaps external, assessors. In the current system, work is graded from 1 to 4, but everyone knows that the stakes have been raised and only grades 3 and 4 count. Let’s take a step back and ponder why this might not be such a good idea. Any such strategy of continuous assessment of research is bound to be inaccurate. Here are some reasons why:

  1. We do not know what rules are going to be applied in 2020, but they are sure to be different from those of 2014. Who foresaw the insertion of the ‘impact’ monkey-wrench at the last minute? Similarly, we cannot assume that the current weighting given to publications will survive into the next REF formula.
  2. Published work has already been assessed by the most appropriate scholars in the field during the process of peer review. In its ideal form, this is what we describe to students as ‘formative assessment’. What added value could a summative assessment from a generalist offer? It is ironic that, at a time when we have attempted to curb the effect of high-stakes assessments on our students, in the realisation that such exercises are of limited value in ‘the real world’, we now impose those same stressors on ourselves.
  3. Any internal ranking is likely to be distorted either by relations of power within a department, or by personal animosities. Can a Reader offer a fearless appraisal of the work of a Dean? Can a colleague overlooked for promotion dispassionately review the work of someone whose success they envy?
  4. I think we all now know that there were some very strange GPAs and rankings emerging from REF2014. Biological sciences appear to have won the laurels, with over half the units submitted achieving a GPA of 3 or above. The sociology panel, meanwhile, delivered a harsher and more partisan verdict. Those units which reflected government priorities in their impact studies scored well; the more ‘critical’ areas of the subject were slammed. Internationally recognised departments and research groups can now legitimately be targeted for closure by their university’s management. And so a vindictive government has made vice-chancellors its willing proxies, simply by persuading them to swap evaluation of research quality for evaluation by £££££.
  5. Because of 4), the ranking of units, and therefore of publications, is likely to follow one of two courses: scores will either reflect a privileging of government/managerial bias against critical scholarship, or every single subject will look at the celebrity status of biological sciences and conspire to inflate its scores in order to attain parity of esteem.

Ranking of publications is also undesirable because of the distorting effect it will have on academic work and relationships:

  • It is hard enough for some scholars to submit their work to peer review and public scrutiny in the first place. If we have to contemplate the scorn of a head of department, or senior colleague – why bother? I imagine a lot of people will allow other agendas (teaching enhancement, student experience, employability) to occupy them instead.
  • If senior researchers – likely to be the most ‘productive’ scholars – are the ones doing all the assessing and ranking of pieces, how will they have time to pursue their own work? Inevitably, this will eat up much of their research time, and also a great deal of their ‘leisure’ time. The research capacity of many departments will begin to ebb away under the strain of constant audit.
  • People will become secretive about publication plans, or leave publication until the last possible moment before the next REF. They may perseverate on the minimum 4 publications, and allow other ideas to dissipate unfulfilled and unexplored. This is unhealthy for the future of university research.
  • We risk creating departments of people who despise and distrust each other. How long will Dr A hold a grudge against Professor B for a Grade 2 assessment? Will payback come when Dr A is asked to observe Professor B’s teaching?

So, it is very clear that assessing work either before or during the REF produces unreliable judgements of its value. It is retained simply for its disciplinary effect, and because management think they can summon up research quality like a genie from a lamp. The analogy is pertinent, because they seem to imagine that a ranking score equals quality-made-visible, and that this is some pathway to quality enhancement. It will be the reverse, of course, especially when we consider the amount of research time and funding bound to be sacrificed in the process. And on top of this loss, we will create unhappy, dysfunctional departments where every colleague nurtures resentment. UK academics have seen their job satisfaction and self-esteem fall low enough. Let’s not compound the issue with another ill-founded exercise. This is hysteria masquerading as rationality. Just make it stop.
