I blogged previously about learning gain just after the General Election.
At that point there was not much to go on beyond a hazy promise in the Conservative manifesto to “introduce a framework to recognise universities offering the highest teaching quality… and require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”. On 1st July 2015, Jo Johnson added some more detail: the ‘teaching REF’ was intended to be outcomes-focussed, in other words, not focussed on the kinds of ‘quality’ processes that universities had perfected over the previous 20 years. The Conservative government hoped to devise a test of learning outcomes that, anticipating the skills of the best schemers in universities, would not be open to gaming.
In the Times Higher on 23rd July 2015, Julia King, Vice-Chancellor of Aston University, made this comment about the Teaching Excellence Framework (TEF):
First, it needs to measure the right things. It cannot be a superficial extension of the data provided through the Key Information Set, a variant on the Quality Assurance Agency’s higher education review or some rehash of the subject league tables that drive universities to offer higher and higher proportions of firsts and 2:1s.
Too right, and while I don’t agree with all her conclusions, I appreciate her drive to ensure that the TEF is based on “a properly evidenced measure of that quality”.
It was disappointing, then, in the same week, to read a guest blog hosted on Wonkhe from Edward Peck, Vice-Chancellor of Nottingham Trent University. The piece started with a critique of the validity of current DLHE (Destinations of Leavers from Higher Education) data, and I became hopeful of a rebuttal of some of the cruder measures of graduate success. However, in the piece, entitled “Finding New Ways to Measure Graduate Success”, Peck explicitly rejects measures which, while not the sole determinants of teaching quality, might at least impact positively on the student experience. On the list are staff–student ratios and spend per student, which he regards as perverse incentives that reward inefficiency. As I read on, disenchantment grew:
“Secondly, the forthcoming availability of HMRC tax data to HESA and the Student Loans Company means that we could use a robust measure where we can select the census point at which we present data on average earnings by university and/or by course. This would not be dissimilar to the approach some rankings take to MBA programmes. With secondary education performance data also being brought into the mix, we have the hope of finding a much needed way to measure added value or learning gain”.
Now, I’m all in favour of looking at learning gain and allowing that to inform improvements to the student experience. I’m not in favour of releasing irrelevant and crude data in a league table. I’m still pondering the author’s logic when he presents graduate salary levels as an indicator of ‘value added’ by universities, and appears to see this as a proxy measure for learning gain. Universities are repositories and generators of research and knowledge, not factories for ‘salary men’. I would have hoped the nation’s vice-chancellors would have challenged those assumptions, not propounded them.
Here are five reasons why equating graduate salary with teaching quality and learning gain is unfounded.
- The glass floor effect. On July 26th, the BBC News lead story was that middle-class families are able to prevent their children from sliding down the social scale, regardless of talent. Graduate salaries, then, are more likely to correlate with parental social class than with the quality of teaching received.
- The continued existence of a gender pay gap underlines how fanciful it is to assume that a high salary results from ‘value added’ higher education. Have only male graduates received better teaching? Data from the Fawcett Society indicates a pay gap of 19.1%. Admittedly, this figure encompasses pay for all workers, not just graduates, but the points about the motherhood penalty and outright discrimination are still valid, even in universities. Talk about perverse incentives! Would universities accept more white, able-bodied, middle-class males onto courses if they were likely to earn higher salaries?
- Economies are not stable. Middle class professionals continue to feel the force of austerity caused by bankers’ irresponsibility. ‘Generation rent’ may never attain the standard of living of their parents, but this has resulted from a failure of the generational compact, not the failure of universities.
- Similarly, we cannot predict which subject areas may prove lucrative. Recently, graduates of any discipline who work in finance have received higher salaries, while pay and employability for graduates in IT and bioscience have taken a downturn.
- The SBEE (Small Business, Enterprise and Employment Act 2015) has unleashed a huge data salad of graduate earnings, student loan repayments and course/university attended. At best, there may be an association between your earnings and your alma mater. This raises the question of construct validity – a notion drilled into all educators: make sure you are measuring the right thing before you draw conclusions. Association between any set of measures does not indicate causality, so Peck’s suggested metrics are not even valid as proxy measures of learning gain.
Those of us who care about the future of higher education in the UK cannot let these lazy assumptions dominate the agenda of academic institutions. The data points may link up, courtesy of the SBEE, but the logic does not. It would be ironic if we allowed universities, of all places – interrogators of assumptions, busters of myths and challengers of fallacies – to be led by spurious metrics for what will probably turn out to be immeasurable.