The Stern Review has appeared and HE wonks are rushing back to the laptops they had just powered down for the briefest of summer holidays. So here are my thoughts, based on a skim reading of the report, and also the helpful comments on Twitter from the following: Emilie Whitaker (@Dr_EmWhitaker), Steven Jones (@StevenJones_MCR), David Wright (@WrightDW), Daniel Grey (@djrgrey), Mike Ratcliffe (@mike_rat) and James Wilsdon (@jameswilsdon).
It is clear that Stern endorses the continuation of the REF. This is no mutinous denunciation of research assessment. But then mutiny was never in the terms of reference, which forecast ‘future iterations of the REF’. He has, though, proposed fixes to escalating costs, institutional gaming and some of the impediments to providing a true picture of UKHE research, such as the demanding ratio of outputs to impact case studies and the ‘selectivity’ of individuals submitted for assessment. There is also a welcome focus on support for interdisciplinarity, which many felt was overlooked in REF2014. Derek Sayer’s important criticism that the REF is a parochial and navel-gazing exercise is addressed: at last, international researchers are to be invited onto panels (Para 109).
One of the stated imperatives of Stern is to reduce the workload of the exercise. He recommends a strong reduction in the average number of ‘outputs’ submitted per faculty member (Para 69), which at this point looks like a baseline of 2, or even fewer. There will still be sampling of outputs by unit of assessment (Para 71). Another priority is to reduce gaming of the system, and to this end Stern makes recommendations about who owns ‘outputs’, and proposes an end to ‘selectivity’.
The most radical of the recommendations in terms of its implications for current practices is that there should be no portability of research ‘outputs’. Stern feels that they should remain returnable by the HEI which ‘helped to produce’ them. This raises the issue of who owns research ‘outputs’ and is guaranteed to cause indignation among the more mobile academics, and also Early Career Researchers who may have had a series of short-term contracts at different universities. It would, however, reduce the likelihood that avaricious institutions would poach individual researchers if they were entitled only to ‘outputs by the individual that have been accepted for publication after joining the institution’ (Para 74). There is no doubt that this will hit the perceived ‘marketability’ of individual academics, but perhaps Stern has rather harshly assumed that the only reason academics leave universities is because of their own ‘rent-seeking behaviour’.
Recommendation 1 is that all research-active staff should be returned in the REF, noting that ‘exclusion and the associated stigma are being driven by factors that are not wholly related to the quality of an individual’s research contributions and potential’ (Para 64). While this is commendably inclusive, and recognises that excellence in research may be found anywhere, it might also act as an incentive for universities to reassign staff to career-limiting teaching-only contracts. This fear is mitigated by the recognition that the proposed TEF looks favourably on research-led teaching (Para 112). Like other high-level communications and policy documents, the Stern Review also presupposes the implementation of the TEF, even though (in July 2016) this proposal is still being debated in parliament.
There seems to be some sanity at last on the notion of research impact. This should be based more generally on research activity, and not dependent on particular ‘outputs’. Also, assessment of impact need not be confined to socio-economic impact as it was for REF2014. Stern recommends that it ‘should also include impact on government policy, on public engagement and understanding, on cultural life, on academic impacts outside the field, and impacts on teaching’ (Recommendation 7).
From my reading, I felt my antennae twitch at a couple of potential Trojan Horses. Stern allows scope for the use of metrics, while cautiously invoking James Wilsdon’s ‘Metric Tide’ report of a year ago. Once metrics are allowed to stand as proxies for the health of a unit, this inevitably opens the door to their greater infiltration.
Indeed, the primary way the research environment will be assessed will be via metrics (Para 48). It seems perverse not to allow for a qualitative statement in this particular area. One wonders how an environmental template based on metrics will facilitate the ambition to capture ‘the contribution that its academics make to the wider academy (‘academic citizenship’)’ (Para 88). Recommendation 4 states that panels should continue to assess on the basis of peer review but may supplement this with metrics, as long as they are transparent about their use (Para 105).
Sadly, there is no recommendation to commit institutions to become signatories of the San Francisco Declaration on Research Assessment. HEFCE is currently a signatory, although in a rather insouciant way. Instead, Stern invokes an aspiration to standardise the metrics supplied to various bodies, whose current incompatibility stands in the way of more widespread use. There seems to be an invitation to UKRI to become the overlord in this integration of research metrics (Para 107). Will they, too, sign DORA?
The REF needed a stern review, and it has got it. It offers enough clarity to subdue even the most feverish of vice-chancellors for a weekend. It seems to be informed by principles of parsimony and lean management in contrast to the speculative profligacy of the White Paper’s TEF. If we must live with the REF, an ‘iteration’ whose more rapacious and distorting inclinations have been curtailed is preferable.
Not sure why you would think us signing DORA was ‘insouciant’. Entirely consistent with explicit policy over several assessment exercises that content of outputs is what matters, not where it is published. It is something we feel strongly about, the opposite of insouciant. @DSweeneyHEFCE
I appreciate that the principle of the content of academic work prevails. There are a number of us who wish the other principles of DORA were being observed: no use of metrics in promotion or hiring decisions. These breaches are now routine in UK universities, and Hefce, and other funding bodies, could act to curtail them by demanding compliance. I hope we can count on your support in this?