It’s Metricide – Don’t Do It

Demand is growing for universities to release more and more data about their students. Thus far, the focus has been on the academic quality of teaching and research. A new departure is to measure universities by the salaries of their graduates. Liz Morrish gives the background.

If you thought that feeding the audit vultures with data for “better performance metrics in higher education” would placate them, you would be wrong. The previous post (23rd May) discussed a small section of the Conservative manifesto. Let’s return to it, because there is still an unexamined proposition – to “require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”. Once again, the focus is on graduate outcomes, but this time they are entirely financial. This truly hammers the last nail into the coffin of higher education as intellectual self-fulfilment. The manifesto anticipates a little-publicized outcome of the recently passed Small Business, Enterprise and Employment Act 2015 (SBEE).

Here’s a quick summary of the purpose and intent of this new legislation. First, to put it in context, let’s take a step back to the 2010 Browne Review, which recommended a tripling of university fees in England. Lord Browne, the Labour government which commissioned the review, and the coalition government which acted on it, never imagined that ALL universities would charge the £9000 maximum fee. They assumed they had tweaked the right drivers and incentives to ensure that fees would mirror university reputation and thus perceived ‘value for money’. Instead of forming an orderly hierarchy, universities all moved to charge the maximum, or near to it, for fear of signalling an inferior ‘product’. This has landed the government with a far bigger outlay on student loans than they ever intended or budgeted for. There is now a very large hole in the balance sheet of the Department for Business, Innovation and Skills (BIS), and to make matters worse, the RAB charge (Resource Accounting and Budgeting) seems to grow with each new estimate (the RAB charge is the portion of student borrowing that will not be paid back). This may be in excess of 45% of the sum borrowed, and is a debt which BIS will need to service.

And so, having vastly inflated the actual public spend on higher education, albeit through the agency of student borrowers, BIS and the government need to find a way to make this improvident model sustainable. Taking an idea from President Obama, one way to reclaim the money is by ensuring that graduate salaries exceed the threshold for repayment. This is no easy task in the current economic climate, and so the government’s ‘nudge’ unit must have employed all its imagination in arriving at the solution – the FEER. It stands for the Future Earnings and Employment Record. In an era of big data, it has become possible to link the following records to individuals: university attended (and possibly even subject studied), amount owed in tuition and maintenance loans, and, via HMRC tax records, the amount that a graduate currently earns. This intrusive leakage is now permissible since the passing of the SBEE. BIS can simply ask for these records in order to compile them into a league table of graduate loan repayments, by university. What better way to weaponize that data than to try to influence student choices, cast as ‘aspirations’ in the legislative text, or even punish universities for having the temerity to confer degrees on deadbeats who cannot repay their loans?

You know you’re in trouble when the discourse turns to ‘journeys’ and ‘destinations’, but it gets worse. Although the government’s Education Evaluation fact sheet constitutes a total failure of logic, it displays a discursive masterstroke, chaining together ‘learning outcomes’, ‘performance data’, ‘accountability’ and ‘interventions’, and then serving the whole salad up as a solution to ‘social mobility’. And the final section re-designates universities as mere factories for the production of labour inputs: “This data, presented in context, will distinguish universities that are delivering durable labour market outcomes and a strong enterprise ethos for their students”.

So, that is the future. Applicants for university courses will be invited to consider Key Information Sets including projected earnings, and make their choices accordingly. More worryingly, will universities fear the FEER to the extent that they will discriminate against women – seriously at risk of defaulting on loans with all those inconveniently timed maternity leaves? Will universities continue to offer courses that lead to lower-paying graduate jobs: nursing, teaching, fine arts? Who knows what knowledge and skills may be in demand in ten years’ time? Having sacked off excellent chemistry, physics, zoology and sociology departments, universities are full of forensic science, criminology and equestrian studies courses. These are all popular ‘vocational’ subjects, but lead to mixed outcomes in terms of employability and earnings. Meanwhile, graphic artists and English graduates seem to be climbing the salary scales with recent developments in computer games.

We can only hope that students are not as mercenary as their political masters. Students, I imagine, will continue to make choices based on love for their chosen subject, desire to remain in their home city, or to move to a new one, to attend a university to be with their current partner, or the one that gives them the opportunity to study overseas – or any of the countless other factors that motivate student choices. I may be institutionalized in an arts and humanities faculty, but I have met very few students in any university I have visited who had graduate salary as top of their aspirations. But, then, the government thinks I live in an ivory tower, insulated from economic realities. Nevertheless, I’d bet my perception of students is more accurate than either their initial estimate of the RAB charge or their strategy for retrieving it.


My Loss is Your Learning Gain

Liz Morrish discusses some new ways the Conservative government will seek to assess and rank universities. ‘Learning gain’ is about to be ‘a thing’.


It is just over two weeks after the General Election, and our thoughts turn to the prospect of more cuts in public spending, a new leader for the Labour Party, some uncertainty over the promised referendum on EU membership, and, post-UKIP, a somewhat muted dialogue over immigration. But what lies in the future for higher education? Have you been paying selective attention over the months leading up to the election? A tuition fee cut may have lodged in your memory, but that was Labour Party policy, and we can forget that now. What does a Conservative government have planned for universities? We know that abolition of the cap on student numbers was already in the offing, as was a national postgraduate loan system for taught masters and PhD courses. That is not the interesting bit. The game-changer is encoded in this section:

“We will ensure that universities deliver the best possible value for money to students: we will introduce a framework to recognise universities offering the highest teaching quality… and require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”.

Value for money: As students pay higher fees, they have been demanding more classroom time, better feedback and higher-grade campus facilities. These have largely been delivered. The focus is now shifting to what the government recognises as value for money – are students learning anything worthwhile? For the last twenty years, as universities have come under greater pressure to justify (especially non-vocational) courses, we have been speaking the language of generic or ‘transferable’ skills. At my university, we make the claim that graduates will be able to demonstrate attributes such as information, organisational and communication skills, and also ‘intellectual agility’ defined as, “aptitude for independent, critical thought and rational inquiry, alongside the capacity for analysis and problem-solving in multiple contexts” [NTU Strategic Plan 2010-2015 p. 10]. Now, it looks as if we will have to bolster that claim with actual hard evidence. All in the name of accountability – a call which never fails to extend its dominion.

A framework to recognise universities offering the highest teaching quality: This is what has been flagged as the Teaching REF. The good news is, at last we have found a use for learning outcomes. It was probably inevitable that, as soon as we were obliged to position a degree course as merely an accumulation of learning outcomes, and higher education as a set of generic skills, one day this would be called to account. Even better news: demonstrating “the highest teaching quality” is likely to shine a spotlight on students, not on academics, who usually occupy the panopticon. The generation of students who have become used to earning reputational credit for their primary and secondary schools via SATS tests may now be called forth in the service of their universities. And so, step forward the concept of ‘learning gain’. Simply put, can our graduates demonstrate the nominated transferable skills to a greater extent than those without the benefit of higher education?

There has already been progress made towards measurement of these generic skills, and two parties engaged in this discussion are the OECD (Organisation for Economic Co-operation and Development) and HEFCE (Higher Education Funding Council for England).

The OECD’s AHELO (Assessment of Higher Education Learning Outcomes) has been piloted. The long-term intention is to assess subject learning outcomes, but so far, this has only been attempted for engineering and economics. Generic skills are better candidates for assessment, not least because there already exists an ‘instrument’ to do this: the Collegiate Learning Assessment (CLA). This test aims “to evaluate the critical-thinking and written-communication skills of college students. It measures analysis and problem-solving, scientific and quantitative reasoning, critical reading and evaluation, and critiquing argument, in addition to writing mechanics and effectiveness.” It uses scenario-based problems and requires students to marshal evidence, and then evaluate the risks or merits of particular solutions or options. It is assumed that this will reflect on “instructional effectiveness” and enable a better understanding of “the teaching-learning interplay”.


Whether or not we will adopt this particular testing mechanism in the UK, as a result of the Tories’ demand for accountability, is not yet clear. The CLA is one of the options being considered, but HEFCE is also currently consulting on the concept of ‘learning gain’.

“We wish to build better ways of capturing excellent educational outcomes, including new approaches that measure students’ learning gain, and of refining existing indicators of students’ learning experiences and progression to employment or further study”. []

The executive summary of the OECD report details some caveats. It mentions a risk that AHELO data could be used as yet another ranking tool, and emphasises that this was never the intention. I think, though, we can write the script on this one if (when) ‘learning gain’ assessment goes ahead. It will, of course, be employed in a culture inflected by neoliberal politics which requires markets, competition and hierarchies. It can only end one way. And so we can anticipate the second fear listed in the report – that the assessment will become the basis for (re)allocating resources, either from research to teaching, or from apparently unsuccessful universities to those with more ‘intellectually agile’ students. The only silver lining to this cloud is that I suspect many of the most intellectually agile, creative students may be found outside the more favoured universities.


Academic Argument

No sooner has the largesse of REF2014 been promised than invitations to the next UK academic Olympiad are being issued. The prospect of REF2020 has unleashed a new university obsession with rankings and evaluations of research. In many departments, researchers are undergoing a process of continual audit. Every grant application is vetted; every publication is read by internal, and perhaps external, assessors. In the current system, grades from 1 to 4 are assigned, but everyone knows that the stakes have been raised and only grades 3 and 4 count. Let’s take a step back and ponder why this might not be such a good idea. Any such strategy of continuous assessment of research is bound to be inaccurate. Here are some reasons why:

  • We do not know what rules are going to be applied in 2020, but they are sure to be different from 2014. Who foresaw the insertion of the ‘impact’ monkey-wrench at the last minute? Similarly, we cannot assume the current weighting given to publications in the next REF formula.
  • Published work has already been assessed by the most appropriate scholars in the field during the process of peer review. In its ideal form, this is what we describe to students as ‘formative assessment’. What added value could a summative assessment from a generalist offer? It is ironic that, at a time when we have attempted to curb the effect of high-stakes assessments on our students, in the realisation that such exercises are of limited value in ‘the real world’, we now impose those same stressors on ourselves.
  • Any internal ranking is likely to be distorted either by relations of power within a department, or by personal animosities. Can a Reader offer a fearless appraisal of the work of a Dean? Can a colleague overlooked for promotion dispassionately review the work of someone whose success they envy?
  • I think we all now know that there were some very strange GPAs and rankings emerging from REF2014. Biological sciences appear to have won the laurels, and over half the units submitted achieved a GPA of 3 or above. The sociology panel, meanwhile, delivered a harsher and more partisan verdict. Those units which reflected government priorities in their impact studies scored well; the more ‘critical’ areas of the subject were slammed. Internationally recognised departments and research groups can now legitimately be targeted for closure by their university’s management. And so a vindictive government has made vice-chancellors their willing proxies, simply by persuading them to swap evaluation of research quality for evaluation by £££££.
  • Because of the previous point, rankings of units, and therefore of publications, are likely to follow one of two courses: scores will either reflect a privileging of government/managerial bias against critical scholarship, or every single subject will look at the celebrity status of biological sciences and conspire to inflate their scores in order to attain parity of esteem.

Ranking of publications is also undesirable because of the distorting effect it will have on academic work and relationships:

  • It is hard enough for some scholars to submit their work to peer review and public scrutiny in the first place. If we have to contemplate the scorn of a head of department, or senior colleague – why bother? I imagine a lot of people will allow other agendas – teaching enhancement, student experience, employability – to occupy them instead.
  • If senior researchers – likely to be the most ‘productive’ scholars – are the ones doing all the assessing and ranking of pieces, how will they have time to pursue their own work? Inevitably, this will eat up much of their research time, and also a great deal of their ‘leisure’ time. The research capacity of many departments will begin to ebb away under the strain of constant audit.
  • People will become secretive about publication plans, or leave publication until the last possible moment before the next REF. They may perseverate on the minimum 4 publications, and allow other ideas to dissipate unfulfilled and unexplored. This is unhealthy for the future of university research.
  • We risk creating departments of people who despise and distrust each other. How long will Dr A hold a grudge against Professor B for a Grade 2 assessment? Will payback come when Dr A is asked to observe Professor B’s teaching?

So, it is very clear that assessing work either before or during a REF cycle is unreliable. It is retained simply for its disciplinary effect, and because management think they can summon up research quality like a genie from a lamp. The analogy is pertinent, because they seem to imagine that a ranking score equals quality-made-visible, and that this is some pathway to quality enhancement. It will be the reverse, of course, especially when we consider the amount of research time and funding bound to be sacrificed in the process. And on top of this loss, we will create unhappy, dysfunctional departments where every colleague nurtures resentment. UK academics have seen their job satisfaction and self-esteem fall low enough. Let’s not compound the issue with another ill-founded exercise. This is hysteria masquerading as rationality. Just make it stop.

Critical university studies, discourse and managerialism