
While you were away…Summer 2015 HE news catch up – Part I

Welcome back. That’s assuming you had a holiday in the first place. In case you missed them, here are some of the issues which have emerged since the UK General Election in May (remember that?).

University teaching

You will probably be aware that student number controls have been relaxed from this September. You might imagine this would signal the government’s huge confidence in the university sector. However, the Minister for Higher Education, Jo Johnson, made a speech to the Universities UK Annual Conference on Wednesday 9th September 2015. A useful Bird-and-Fortune-style commentary on the speech can be accessed here. In the speech he announced,

“there is lamentable teaching that must be driven out of our system. It damages the reputation of UK higher education and I am determined to address it”.

This pronouncement reprised some of the themes from his speech at the same venue on July 1st, particularly the idea of a Teaching Excellence Framework.

This was the speech that added some detail to the Conservative manifesto promise to “introduce a framework to recognise universities offering the highest teaching quality”, but we have yet to hear what format it might take, and despite much discussion in the media its focus is still not clearly defined. Nick Hillman has suggested there are three possible candidates: the familiar Quality Assurance process, National Student Survey scores or a measure of ‘learning gain’. Or a mixture of all three, with DLHE (Destinations of Leavers from Higher Education) data thrown in as well.

This is as much as we know so far, from the July 1st speech:

“I expect the TEF to include a clear set of outcome-focused criteria and metrics. This should be underpinned by an external assessment process undertaken by an independent quality body from within the existing landscape”. [http://www.wonkhe.com/blogs/back-to-school-with-jo-johnson/]

The only thing that has been asserted with any clarity is that any increase in the tuition fee will be linked to an institution’s performance in the TEF. By linking the two, the government could at last claim a success in creating a market in higher education ‘providers’. As we know, the post-2012 reforms led to a rush by almost all universities to charge the maximum £9,000, or near to it. But now vice-chancellors, particularly in the Russell Group, are lobbying for a fee hike. This could create some strange incentives. If students know that their positive NSS scores will result in a tuition increase for their successors, will they seek to skew their responses, and the university’s league table position, downwards?

Could the TEF resemble a QAA-style subject review? It is hard to imagine that this tried and tested, despised but thoroughly gameable process will not re-emerge in some new form. The usual promises have been made that it would be light touch, and it would need to be, considering how much is already spent on QA. However, the QAA was process-focussed, and Jo Johnson has been clear that he wants the emphasis to be on ‘outputs’. In that case, what might the TEF measure?

One candidate is ‘learning gain’; see here for a discussion. Simply put, can our graduates demonstrate the nominated transferable skills to a greater extent than before they started higher education? Some commentators talk about ‘distance travelled’.
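At its simplest, ‘distance travelled’ is a pre/post comparison. As a toy illustration (the cohort scores and the 0–100 skills scale below are invented), a gain metric might compute the raw difference between entry and exit scores, and perhaps a ‘normalised gain’ that allows for how much headroom each student had to begin with:

```python
# Toy 'distance travelled' calculation on an invented cohort.
# Scores are on a hypothetical 0-100 transferable-skills test.
entry_scores = [42, 55, 61, 48, 70]  # on entry to higher education
exit_scores = [58, 63, 74, 66, 81]   # the same students at graduation

raw_gains = [post - pre for pre, post in zip(entry_scores, exit_scores)]
mean_raw_gain = sum(raw_gains) / len(raw_gains)

# Normalised gain: improvement as a share of the available headroom,
# so students who start high are not penalised for having less room to gain.
norm_gains = [(post - pre) / (100 - pre)
              for pre, post in zip(entry_scores, exit_scores)]
mean_norm_gain = sum(norm_gains) / len(norm_gains)

print(f"mean raw gain: {mean_raw_gain:.1f} points")
print(f"mean normalised gain: {mean_norm_gain:.2f}")
```

Even this toy version exposes the hard part: you need the same students tested twice, on an instrument everyone accepts, before any of the arithmetic means anything.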

The OECD has just completed a feasibility study into an international comparison of graduates. It seems that European universities are not yet willing to rank their graduates’ learning outcomes against those from other continents. Last week, it was announced that a Europe-only feasibility study will begin: the Measuring and Comparing Achievements of Learning Outcomes in Higher Education in Europe project, known as Calohee.

Yet another possibility for the TEF seems to be favoured by Edward Peck, Vice-Chancellor of Nottingham Trent University. He has evidently read this section from the Conservative Manifesto:

“We will ensure that universities deliver the best possible value for money to students: we will introduce a framework to recognise universities offering the highest teaching quality… and require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”.

Peck takes that textual linkage of teaching quality and career paths of graduates and makes a learning gain metric out of them. In a piece entitled “Finding new ways to measure graduate success”, he outlines his view thus:

“[T]he forthcoming availability of HMRC tax data to HESA and the Student Loans Company means that we could use a robust measure where we can select the census point at which we present data on average earnings by university and/or by course. This would not be dissimilar to the approach some rankings take to MBA programmes. With secondary education performance data also being brought into the mix, we have the hope of finding a much needed way to measure added value or learning gain”. [http://www.wonkhe.com/blogs/finding-new-ways-to-measure-graduate-success/]
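As I read it, the metric Peck describes would work something like the sketch below: choose a census point, then report average earnings grouped by university and course. Everything here is invented for illustration; ‘five years after graduation’ is an arbitrary choice of census point, and the universities, courses and salaries are hypothetical.

```python
from statistics import median

CENSUS_YEARS = 5  # hypothetical census point: five years after graduation

# Invented earnings records: (university, course, years since graduation, salary)
records = [
    ("University A", "History", 5, 27_500),
    ("University A", "History", 5, 31_000),
    ("University A", "Computing", 5, 36_500),
    ("University B", "History", 5, 24_000),
    ("University B", "Computing", 5, 33_000),
    ("University B", "Computing", 5, 38_500),
]

# Group salaries at the census point by (university, course).
by_group = {}
for uni, course, years, salary in records:
    if years == CENSUS_YEARS:
        by_group.setdefault((uni, course), []).append(salary)

for (uni, course), salaries in sorted(by_group.items()):
    print(f"{uni} / {course}: median salary £{median(salaries):,}")
```

Everything contentious, of course, lies outside the code: whose earnings count, which census point is chosen, and whether any of it says anything about teaching.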

Added value and learning gain are not the same thing, and neither can be measured by graduate salaries. There seemed to be further valid points to make against Peck’s suggestion, and so I made them, here.

Learning gain is changing its shape almost daily, and even HEFCE can’t keep up. The Times Higher reports that a number of English (and it is only English) universities are trialling standardized tests from the Wabash National Study. Lots of luck getting your students to turn out for this battery of 13 different tests, which have no direct relevance for them.

Meanwhile, BIS is still busy consulting about what the TEF should look like, and at the same time has commissioned a study by RAND Europe into what learning gain is, and how it might be measured. Spoiler – like me, they don’t seem to favour graduate salaries as a valid measure.

Happy new Academic Year. More soon on Research, Quality Assurance, Student debt, and the road ahead in Part II.

England Rejects the Learning Tower of Pisa?

Earlier this month (July 2015), the Times Higher reported that England will not be taking part in an OECD project to make Pisa-style international comparisons of graduates’ learning gain. This came as a surprise, given the priority this Conservative government has placed on learning gain and on a Teaching Excellence Framework.

To those of us following the debate, the OECD’s AHELO (Assessment of Higher Education Learning Outcomes) project seemed like the only game in town. Now, apparently we are back to the drawing board on what measures may constitute learning gain and the TEF (see previous post for critique).

The story was followed up in The Guardian on 28th July. The first thing that struck me was what seemed like a collective sigh of relief from the nation’s vice-chancellors and other HE worthies. This from a group of people who have never met a ‘tool’ they couldn’t beat their staff with. Except this time, the ‘tool’ could not be gamed, or weaponized against academics. It threatened, instead, to appraise the qualities of England’s graduates, and clearly there is an insecurity about how they might hold up in international comparison. After all, the much-vaunted REF has managed to avoid any kind of international evaluation, so why not the TEF? Perhaps the vice-chancellors had read a recent report from HEPI indicating that students from other countries work harder than home students. Or perhaps they had also seen pictures of students in Guinea studying outside under the floodlights at the airport, and shuddered at the comparison with pictures of the more bacchanalian evening adventures of UK students.

students study under airport lights

Maybe they had also sensed disapproval from the Higher Education Minister as he diagnosed grade inflation among the escalating numbers of First and 2.1 degrees awarded. But in any case, this was an unexpected retreat after we had listened for months to all the sermonizing about empowering students to demand better teaching and make universities accountable for the public spend.

Whatever had made the VCs and their friends anxious, it is clear that the TEF may chart new and uncomfortable ground. UK universities have become really adept at process – what is known lower down the ranks as ‘quality bollocks’. Do your learning outcomes show a hierarchy of cognitive domains? Do they line up with your modes of assessment? Yes, indeed they do, and it has nothing whatsoever to do with teaching quality or learning gain. After several years of this, I now give master classes in the art to my US colleagues as they enter a new era of fictitious ‘assessment’. In the UK, we have process down to a fine art, but engagement is less of a triumph. This we need to address, since nobody ever signed up for a university course because its quality management processes were optimized in strategic alignment.

If engagement means active, enthusiastic participation in a course leading to measurable learning outcomes (a précis of a definition offered in an HEA document), we have a problem in UK universities. Lecturers complain that attendance is poor on many courses and that students fail to prepare for seminars or to read outside of class. There is a culture of permitting students with failed modules to progress to the next level under rules of ‘compensation’. I understand that most students have other responsibilities, such as paid work, but still, many graduate knowing they would have got more out of their courses if they had tried harder. This, I think, is one reason for the reluctance to enter into international comparisons of learning gain and transferable skills.

But none of this features among the justifications recorded in the Guardian article. On the question of AHELO (Assessment of Higher Education Learning Outcomes), Peter Williams, former CEO of the Quality Assurance Agency, is quoted in the hard copy of the article: “I thought the whole thing was a nonsense”. Alison Wolf, of King’s College London, offers a more cogent evaluation: “It is basically impossible to create complex language-based test items which are comparable in difficulty when translated into a whole lot of different languages”. Comparability is more likely to depend on culture than on language, so given the constituency of the OECD, we can probably reject that claim.

My understanding of the AHELO assessment was that it followed the kinds of reasoning and problem-based scenarios presented in the CLA (Collegiate Learning Assessment), so I turned to my colleague Louise Cummings for a view. Louise – a linguist and philosopher – pretty much dominates the field of informal logic and scientific reasoning under conditions of uncertainty (Cummings 2015). Would performance on these tasks be affected by translation from one language to another? No. A firm no.

So, as I write, the University Alliance (the ‘mission group’ which used to be for ‘business-oriented’ universities, but now claims “we are Britain’s universities for cities and regions. We believe in making the difference across everything we do”) is holding a day event to discuss the TEF. I’m sure they would prefer to find a way of slipping back into their comfort zone of process-oriented subject review. But if we have to have a TEF, it should be more like the US NSSE (National Survey of Student Engagement), which would put the emphasis where it belongs: on student engagement, and on “how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate”. I’m not making great claims for this particular instrument, but something similar might give us a picture of how to marry teaching effectiveness and student learning measures.

Cummings, Louise. 2015. Reasoning and Public Health: New Ways of Coping with Uncertainty. London: Springer.

How not to measure teaching quality and learning gain

I blogged previously about learning gain just after the General Election.

At that point there was not much to go on but a hazy promise in the Conservative manifesto to “introduce a framework to recognise universities offering the highest teaching quality… and require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”. On July 1st 2015, Jo Johnson added some more detail: the TEF was intended to be outcomes-focussed; in other words, not focussed on the kinds of ‘quality’ processes that universities had perfected over the last 20 years. The Conservative government hoped to devise a test of learning outcomes that, anticipating the skills of the best schemers in universities, would not be open to gaming.

In the Times Higher on 23rd July 2015, Julia King, Vice-Chancellor of Aston University, made this comment about the Teaching Excellence Framework (TEF):

First, it needs to measure the right things. It cannot be a superficial extension of the data provided through the Key Information Set, a variant on the Quality Assurance Agency’s higher education review or some rehash of the subject league tables that drive universities to offer higher and higher proportions of firsts and 2:1s.

Too right, and while I don’t agree with all her conclusions, I appreciate her drive to ensure that the TEF is based on “a properly evidenced measure of that quality”.

It was disappointing, then, in the same week, to read a guest blog hosted on Wonkhe from Edward Peck, Vice-Chancellor of Nottingham Trent University. The piece started with a critique of the validity of current DLHE (Destinations of Leavers from Higher Education) data, and I became hopeful of a rebuttal of some of the cruder measures of graduate success. However, in the piece, entitled “Finding new ways to measure graduate success”, Peck explicitly rejects measures which, while not the sole determinants of teaching quality, might at least impact positively on the student experience. On the list are staff-student ratios and spend per student, which he regards as perverse measures that reward inefficiency. As I read on, disenchantment grew:

“Secondly, the forthcoming availability of HMRC tax data to HESA and the Student Loans Company means that we could use a robust measure where we can select the census point at which we present data on average earnings by university and/or by course. This would not be dissimilar to the approach some rankings take to MBA programmes. With secondary education performance data also being brought into the mix, we have the hope of finding a much needed way to measure added value or learning gain”.

Now, I’m all in favour of looking at learning gain and allowing that to inform improvements to the student experience. I’m not in favour of releasing irrelevant and crude data in a league table. I’m still pondering the author’s logic when he presents graduate salary levels as an indicator of ‘value added’ by universities, and appears to see this as a proxy measure for learning gain. Universities are repositories and generators of research and knowledge, not factories for ‘salary men’. I would have hoped the nation’s vice-chancellors would have challenged those assumptions, not propounded them.

Here are five reasons why the equation of graduate salary, teaching quality and learning gain is unfounded.

  1. The glass floor effect. On July 26th the BBC News lead story was that middle-class families are able to prevent their children from sliding down the social scale, regardless of talent. Clearly, then, graduate salaries are more likely to correlate with social class.
  2. The continued existence of a gender pay gap underlines how fanciful it is to assume that a high salary results from ‘value added’ higher education. Have only male graduates received better teaching? Data from the Fawcett Society indicates a pay gap of 19.1%. Admittedly, this figure encompasses pay for all workers, not just graduates, but the points about the motherhood penalty and outright discrimination are still valid, even in universities. Talk about perverse incentives! Would universities accept more white, able-bodied, middle-class males onto courses if they were likely to earn higher salaries?
  3. Economies are not stable. Middle-class professionals continue to feel the force of austerity caused by bankers’ irresponsibility. ‘Generation rent’ may never attain the standard of living of their parents, but this has resulted from a failure of the generational compact, not the failure of universities.
  4. Similarly, we cannot predict which subject areas may prove lucrative. Recently, graduates of any discipline who work in finance have received higher salaries, while pay and employability for graduates in IT and bioscience have taken a downturn.
  5. The SBEE (Small Business, Enterprise and Employment Act 2015) has unleashed a huge data salad of graduate earnings, student loan repayments and course/university attended. At best, there may be an association between your earnings and your alma mater. This raises the question of construct validity – a notion drilled into all educators: make sure you have the appropriate measure before you draw conclusions. Association between any set of measures does not indicate causality, so Peck’s suggested metrics are not even valid as proxy measures of learning gain; the simulation sketched below shows how easily a confounder can manufacture exactly this kind of association.
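To make the construct-validity point concrete, here is a minimal simulation (a sketch with invented numbers, not real data: the two universities, the salary figures and the effect sizes are all hypothetical). Teaching quality contributes nothing to salary in this model, yet average earnings still differ sharply by institution, because family background drives both admission and pay.

```python
import random

random.seed(1)

# Hypothetical model: salary depends on socio-economic background and noise,
# and NOT on the university attended. Advantaged applicants are more likely
# to enter the more selective institution.
graduates = []
for _ in range(10_000):
    background = random.gauss(0, 1)  # socio-economic advantage (standardised)
    university = "Selective U" if background + random.gauss(0, 1) > 0 else "Recruiting U"
    salary = 25_000 + 6_000 * background + random.gauss(0, 4_000)  # no university term
    graduates.append((university, salary))

# The earnings 'league table' still separates the two institutions,
# even though teaching played no part in anyone's salary.
for uni in ("Selective U", "Recruiting U"):
    pay = [s for u, s in graduates if u == uni]
    print(f"{uni}: mean salary £{sum(pay) / len(pay):,.0f} (n={len(pay)})")
```

Adjusting for prior attainment, as Peck suggests, helps only to the extent that the confounders are actually observed; the glass floor, the gender pay gap and shifting labour markets listed above mostly are not.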

Those of us who care about the future of higher education in the UK cannot let these lazy assumptions dominate the agenda of academic institutions. The data points may link up, courtesy of the SBEE, but the logic does not. It would be ironic if we allowed universities, of all places – interrogators of assumptions, busters of myths and challengers of fallacies – to be led by spurious metrics for what will probably turn out to be immeasurable.

teaching quality venn