Earlier this month (July 2015), the Times Higher reported that England will not be taking part in an OECD project to make Pisa-style international comparisons of graduates’ learning gain. This came as a surprise, given the priority this Conservative government has placed on learning gain and on a Teaching Excellence Framework (TEF).
To those of us following the debate, the OECD’s AHELO (Assessment of Higher Education Learning Outcomes) project seemed like the only game in town. Now, apparently, we are back to the drawing board on which measures might constitute learning gain and underpin the TEF (see previous post for a critique).
The story was followed up in The Guardian on 28th July. The first thing that struck me was what seemed like a collective sigh of relief from the nation’s vice-chancellors and other HE worthies. This from a group of people who have never met a ‘tool’ they couldn’t beat their staff with. Except this time, the ‘tool’ could not be gamed, or weaponized against academics. It threatened, instead, to appraise the qualities of England’s graduates, and clearly there is some insecurity about how they might hold up in international comparison. After all, the much-vaunted REF has managed to avoid any kind of international evaluation, so why not the TEF? Perhaps the vice-chancellors had read a recent HEPI report indicating that students in other countries work harder than home students. Or perhaps they had also seen pictures of students in Guinea studying outside under the floodlights at the airport, and shuddered at the comparison with pictures of the more bacchanalian evening adventures of UK students.
Maybe they had also sensed disapproval from the Higher Education Minister as he diagnosed grade inflation in the escalating numbers of Firsts and 2.1s awarded. In any case, this was an unexpected retreat after we had listened for months to all the sermonizing about empowering students to demand better teaching and to hold universities accountable for the public spend.
Whatever had made the VCs and their friends anxious, it is clear that the TEF may be charting new and uncomfortable ground. UK universities have become highly adept at process – what is known lower down the ranks as ‘quality bollocks’. Do your learning outcomes show a hierarchy of cognitive domains? Do they line up with your modes of assessment? Yes, indeed they do, and it has nothing whatsoever to do with teaching quality or learning gain. After several years of this, I now give master classes in it to my US colleagues as they enter their own era of fictitious ‘assessment’. In the UK, we have process down to a fine art, but engagement is less of a triumph. That is what we need to address, since nobody ever signed up for a university course because its quality management processes were optimized in strategic alignment.
If engagement means active, enthusiastic participation in a course leading to measurable learning outcomes (a précis of a definition offered in an HEA document), then we have a problem in UK universities. Lecturers complain that attendance is poor on many courses and that students fail to prepare for seminars or to read outside class. There is a culture of permitting students who have failed modules to progress to the next level under rules of ‘compensation’. I understand that most students have other responsibilities, such as paid work, but still, many graduate knowing they would have got more out of their courses had they tried harder. This, I think, is one reason for the reluctance to enter into international comparisons of learning gain and transferable skills.
But none of this features among the justifications recorded in the Guardian article. On the question of AHELO, Peter Williams, former CEO of the Quality Assurance Agency, is quoted in the hard copy of the article: “I thought the whole thing was a nonsense”. Alison Wolf, of King’s College London, offers a more cogent evaluation: “It is basically impossible to create complex language-based test items which are comparable in difficulty when translated into a whole lot of different languages”. Yet comparability is more likely to hinge on culture than on language, so given the constituency of the OECD, we can probably reject that claim.
My understanding of the AHELO assessment was that it followed the kind of reasoning-driven, problem-based scenarios presented in the CLA (Collegiate Learning Assessment), so I turned to my colleague Louise Cummings for a view. Louise – a linguist and philosopher – pretty much dominates the field of informal logic and scientific reasoning under conditions of uncertainty (Cummings 2015). Would performance on these tasks be affected by translation from one language to another? No. A firm no.
So, as I write, the University Alliance (the ‘mission group’ which used to represent ‘business-oriented’ universities, but now claims “we are Britain’s universities for cities and regions. We believe in making the difference across everything we do”) is holding a day event to discuss the TEF. I’m sure they would prefer to find a way of slipping back into their comfort zone of process-oriented subject review. But if we must have a TEF, it should be more like the US NSSE (National Survey of Student Engagement), which would put the emphasis where it belongs: on student engagement, and on “how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate”. I’m not making great claims for this particular instrument, but something similar might give us a picture of how to marry measures of teaching effectiveness and student learning.
Cummings, Louise. 2015. Reasoning and Public Health: New Ways of Coping with Uncertainty. London: Springer.