While you were away…Summer 2015 HE news catch up – Part I

Welcome back. That’s assuming you had a holiday in the first place. In case you missed them, here are some of the issues which have emerged since the UK General Election in May (remember that?).

University teaching

You will probably be aware that student number controls have been relaxed from this September. You might imagine this would signal the government’s huge confidence in the university sector. However, the Minister for Higher Education, Jo Johnson, made a speech to the Universities UK Annual Conference on Wednesday 9th September 2015. A useful Bird-and-Fortune-style commentary on the speech can be accessed here. In the speech he announced,

“there is lamentable teaching that must be driven out of our system. It damages the reputation of UK higher education and I am determined to address it”.

This pronouncement reprised some of the themes from his speech at the same venue on July 1st, particularly the idea of a Teaching Excellence Framework.

This was the speech where some additional information was added to the Conservative Manifesto promise to “introduce a framework to recognise universities offering the highest teaching quality”, but we have yet to hear what form it might take and, despite much discussion in the media, its focus is still not clearly defined. Nick Hillman has suggested there are three possible candidates: the familiar Quality Assurance process, National Student Survey scores or a measure of ‘learning gain’. Or a mixture of all three, with DLHE (Destinations of Leavers from Higher Education) data thrown in as well.

This is as much as we know so far, from the July 1st speech:

“I expect the TEF to include a clear set of outcome-focused criteria and metrics. This should be underpinned by an external assessment process undertaken by an independent quality body from within the existing landscape”. [http://www.wonkhe.com/blogs/back-to-school-with-jo-johnson/]

The only thing that has been asserted with any clarity is that any increase in the tuition fee will be linked to an institution’s performance in the TEF. By making that link, the government could at last claim success in creating a market in higher education ‘providers’. As we know, the post-2012 reforms led to a rush by universities to charge the maximum £9,000, or close to it, but now vice-chancellors, particularly in the Russell Group, are lobbying for a fee hike. This could create some strange incentives. If students know that their positive NSS scores will result in a tuition fee increase for their successors, will they seek to skew their responses, and with them the university’s league table position, downwards?

Could the TEF resemble a QAA-style subject review? It is hard to imagine that the tried and tested, despised but thoroughly gameable, process will not re-emerge in some new form. The usual promises have been made that it will be light touch, and it would need to be, considering the sums currently spent on QA. However, the QAA was process-focussed, and Jo Johnson has been clear that he wants the emphasis to be on ‘outputs’. In that case, what might the TEF measure?

One candidate is ‘learning gain’; see here for a discussion. Simply put, can our graduates demonstrate the nominated transferable skills to a greater extent than before they started higher education? Some commentators talk about ‘distance travelled’.
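To make the ‘distance travelled’ idea concrete, here is a minimal sketch of a gain-score calculation. The scores are entirely made up for illustration; no real assessment instrument, cohort or scale is implied.

```python
# Hypothetical 'distance travelled' (gain score) calculation.
# Entry and exit scores are invented for illustration only.
from statistics import mean

# Imagined scores on some transferable-skills test, 0-100 scale,
# taken at entry to and exit from higher education.
entry_scores = [42, 55, 38, 61, 47]
exit_scores = [58, 70, 49, 66, 63]

# Gain score: how far each student has 'travelled'.
gains = [post - pre for pre, post in zip(entry_scores, exit_scores)]
avg_gain = mean(gains)

print(f"average learning gain: {avg_gain:.1f} points")
```

Even this toy version exposes the practical difficulty: it requires every student to sit the same test twice, years apart, with no incentive to try hard on either occasion.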

The OECD has just completed a feasibility study into an international comparison of graduates. It seems that European universities are not yet willing to rank their graduates’ learning outcomes against those from other continents. Last week, it was announced that a Europe-only feasibility study will begin: the Measuring and Comparing Achievements of Learning Outcomes in Higher Education in Europe project, known as Calohee.

Yet another possibility for the TEF seems to be favoured by Edward Peck, Vice-Chancellor of Nottingham Trent University. He has evidently read this section from the Conservative Manifesto:

“We will ensure that universities deliver the best possible value for money to students: we will introduce a framework to recognise universities offering the highest teaching quality… and require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”.

Peck takes that textual linkage of teaching quality and career paths of graduates and makes a learning gain metric out of them. In a piece entitled “Finding new ways to measure graduate success”, he outlines his view thus:

“[T]he forthcoming availability of HMRC tax data to HESA and the Student Loans Company means that we could use a robust measure where we can select the census point at which we present data on average earnings by university and/or by course. This would not be dissimilar to the approach some rankings take to MBA programmes. With secondary education performance data also being brought into the mix, we have the hope of finding a much needed way to measure added value or learning gain”. [http://www.wonkhe.com/blogs/finding-new-ways-to-measure-graduate-success/]

Added value and learning gain are not the same thing, and neither can be measured by graduate salaries. There seemed to be further valid points to be made against Peck’s suggestion, so I made them, here.

Learning gain is changing its shape almost daily, and even HEFCE can’t keep up. The Times Higher reports that a number of English (and it is only English) universities are trialling standardised tests from the Wabash National Study. Lots of luck getting your students to turn out for this battery of 13 different tests which have no direct relevance for them.

Meanwhile, BIS is still busy consulting about what the TEF should look like, and at the same time has commissioned a study by RAND Europe into what learning gain is, and how it might be measured. Spoiler – like me, they don’t seem to favour graduate salaries as a valid measure.

Happy new Academic Year. More soon on Research, Quality Assurance, Student debt, and the road ahead in Part II.

How not to measure teaching quality and learning gain

I blogged previously about learning gain just after the General Election.

At that point there was not much to go on but a hazy promise in the Conservative manifesto to “introduce a framework to recognise universities offering the highest teaching quality… and require more data to be openly available to potential students so that they can make decisions informed by the career paths of past graduates”. On July 1st 2015, Jo Johnson added some more details: the ‘teaching REF’ was intended to be outcomes-focussed, in other words, not focussed on the kinds of ‘quality’ processes that universities had perfected over the last 20 years. The Conservative government hoped to devise a test of learning outcomes that, anticipating the skills of the best schemers in universities, would not be open to gaming.

In the Times Higher on 23rd July 2015, Julia King, Vice-Chancellor of Aston University, made this comment about the Teaching Excellence Framework (TEF):

First, it needs to measure the right things. It cannot be a superficial extension of the data provided through the Key Information Set, a variant on the Quality Assurance Agency’s higher education review or some rehash of the subject league tables that drive universities to offer higher and higher proportions of firsts and 2:1s.

Too right, and while I don’t agree with all her conclusions, I appreciate her drive to ensure that the TEF is based on “a properly evidenced measure of that quality”.

It was disappointing, then, in the same week, to read a guest blog hosted on Wonkhe from Edward Peck, Vice-Chancellor of Nottingham Trent University. The piece started with a critique of the validity of current DLHE (Destinations of Leavers from Higher Education) data, and I became hopeful of a rebuttal of some of the cruder measures of graduate success. However, in the piece, entitled Finding New Ways to Measure Graduate Success, Peck explicitly rejects measures which, while not the sole determinants of teaching quality, might at least impact positively on the student experience. On his list are staff-student ratios and spend per student, which he regards as perverse and rewarding of inefficiency. As I read on, my disenchantment grew:

“Secondly, the forthcoming availability of HMRC tax data to HESA and the Student Loans Company means that we could use a robust measure where we can select the census point at which we present data on average earnings by university and/or by course. This would not be dissimilar to the approach some rankings take to MBA programmes. With secondary education performance data also being brought into the mix, we have the hope of finding a much needed way to measure added value or learning gain”.

Now, I’m all in favour of looking at learning gain and allowing that to inform improvements to the student experience. I’m not in favour of releasing irrelevant and crude data in a league table. I’m still pondering the author’s logic when he presents graduate salary levels as an indicator of ‘value added’ by universities, and appears to see this as a proxy measure for learning gain. Universities are repositories and generators of research and knowledge, not factories for ‘salary men’. I would have hoped the nation’s vice-chancellors would have challenged those assumptions, not propounded them.

Here are five reasons why the equation of graduate salary with teaching quality and learning gain is unfounded.

  1. The glass floor effect. On July 26th the BBC News lead story was that middle-class families are able to prevent their children from sliding down the social scale, regardless of talent. Graduate salaries, then, are more likely to correlate with social class than with anything universities have taught.
  2. The continued existence of a gender pay gap underlines how fanciful it is to assume that a high salary results from ‘value added’ higher education. Have only male graduates received better teaching? Data from the Fawcett Society indicates a pay gap of 19.1%. Admittedly, this figure encompasses pay for all workers, not just graduates, but the points about the motherhood penalty and outright discrimination are still valid, even in universities. Talk about perverse incentives! Would universities accept more white, able-bodied, middle-class males onto courses if they were likely to earn higher salaries?
  3. Economies are not stable. Middle-class professionals continue to feel the force of austerity caused by bankers’ irresponsibility. ‘Generation rent’ may never attain the standard of living of their parents, but this has resulted from a failure of the generational compact, not a failure of universities.
  4. Similarly, we cannot predict which subject areas may prove lucrative. Recently, graduates of any discipline who work in finance have received higher salaries, while pay and employability for graduates in IT and bioscience have taken a downturn.
  5. The SBEE (Small Business, Enterprise and Employment Act 2015) has unleashed a huge data salad of graduate earnings, student loan repayments and course/university attended. At best, there may be an association between your earnings and your alma mater. This raises the question of construct validity – a notion drilled into all educators: make sure you have the appropriate measure before drawing conclusions. Association between measures does not indicate causation, so Peck’s suggested metrics are not even valid as proxy measures of learning gain.

Those of us who care about the future of higher education in the UK cannot let these lazy assumptions dominate the agenda of academic institutions. The data points may link up, courtesy of the SBEE, but the logic does not. It would be ironic if we allowed universities, of all places – interrogators of assumptions, busters of myths and challengers of fallacies – to be led by spurious metrics for what will probably turn out to be immeasurable.

[Image: teaching quality Venn diagram]