Liz Morrish argues that the assumptions of performance management in higher education reside in the world of managerial fictions. It is a process riven with contradictions which require urgent rethinking.
Performance management has become a feature of the higher education landscape in the last decade. Franco-Santos, Rivera and Bourne (2014) offer this definition: “At work, individuals are said to perform when they are able to achieve the objectives established by management. Organisations are thought to perform (or to be successful) when they satisfy the requirements of their stakeholders and are more effective and efficient than their competitors.” [my italics]
Recently, this document was circulated on Twitter. It is the assessment form required for performance review meetings at a non-aligned post-1992 university.
It is important, as a first step to framing resistance, to recognise the presuppositions and ideologies enshrined in this document. One of the major presuppositions that underlies the use of performance/performing is that there are standard ‘key indicators’ of performance which are invariable. It is presumed that management will decide what these indicators are, and that academic staff performance can be objectively measured using them.
The most striking omission from the checklist above is any recognition of normative assumptions about academic work: that it involves teaching and research; that it is absorbing; that it involves insight, imagination, networking, diligence; that it is rather indefinable in scope, and quite possibly not a good candidate for this type of one-size-fits-all assessment.
The scope of the job, we assume, is contained within the unexpanded categories of Quality of Work; Quantity and Output. The contradiction lies in how little the designation ‘satisfactory’ actually tells us. In a working environment where assessments are multiple (NSS, REF, QAA, peer teaching reviews etc.), and ever more searching and fine-grained, what are we to take from ‘satisfactory’? It seems to leave open a return to a default ‘unsatisfactory’ if perceived under-performance in one of the measures should become unbundled from the totality. The semantic instability of the evaluative adjective ‘satisfactory’ means that staff will never be ‘performing’ perfectly. It will always be possible to claim that there are ‘areas for improvement’, leaving the appraisee exposed to capricious revisioning. On the issue of Quality, there are also questions of whether a manager’s expertise can always extend to appraising quality of research, particularly when the parameters for exercising judgment are far from clear.
‘Quantity’, similarly, is unrevealing. The presupposition is that more is better, so how does this sit with likely institutional policies on work-life balance and stress management? And what measures are being used? Hours logged in the classroom? Evenings spent answering students’ questions by email? Thank-you cards sent by finalists? Are teaching and research both considered ‘outputs’? We note an ironic choice of language in ‘quantity’, especially given the non-judgmental use of ‘engagement’ that universities usually apply to student work.
Job knowledge might be an issue for new academics in post, and so seems superfluous beyond the probation review. Digging a little deeper, though, many of us observe that ‘procedures’ are shifting and transitory under conditions of volatility in higher education regulation. Given the secrecy and lack of consultation with which those changes are imposed, and the general regulatory ‘churn’, to require familiarity is like asking appraisees to apprehend a mirage.
Perhaps the most pernicious trait desired in this forlorn framework is ‘attitude’. By what ‘benchmark’ are we being evaluated? I would counter that there are occasions where a negative attitude is beneficial to the academic community. Are we required to embrace ineffectual managers, or unenthusiastic students? Should we tolerate abusive phone calls from parents with equanimity? My attitude is my attitude, thanks. I don’t need your evaluation. In any case, how do you suggest remedying ‘attitude’ without seriously compromising my initiative (another category), or academic standards? Never underestimate the positive power of negative thinking, is my motto.
Appearance is yet another superficial and subjective judgement. Here are my prejudices: no sweat pants, no logo t-shirts, no leggings, no flip-flops. But if you ask the students, they are generally very welcoming of lecturers who mirror the informality of their ‘customers’. Possibly, though, this dispensation applies much more to male lecturers than to women. So what is being evaluated here, apart from some manager’s notion of gender-normed corporate dress code conformity?
Attendance and punctuality suggest a rigid and compliant personality, which is undermined by the desired qualities of ‘initiative’ and ‘flexibility’. The latter, particularly, is likely to escape actual validation. Our flexibility makes itself known to family as work bleeds into leisure and domestic life. The continual peeking at email, the agenda planning that invades a run, the feelings of guilt at taking a whole Sunday off during the marking season. It makes itself known as we submit to ridiculous marking turnaround times in order to satisfy intensifying demands for more feedback. It shortens our careers.
This is the world of university performance management at its most unthinking. Cheeringly, it is beginning to be challenged even from within. According to their recent report, the Leadership Foundation for Higher Education would find this appraisal ‘not fit for purpose’. The report distinguishes between stewardship and agency approaches to performance management, and urges universities to consider a more flexible application of these. Stewardship approaches “focus on long-term outcomes through people’s knowledge and values, autonomy and shared leadership within a high trust environment”. By contrast, “agency approaches focus on short-term results or outputs through greater monitoring and control”. I can probably guess which one seems more familiar to most academics, for whom autonomy, shared leadership and high trust working environments reside in the folklore of a previous generation.
However (to gleefully break a Govian rule), institutions with a mission that is focused on “long-term and highly complex goals, which are difficult or very costly to measure (e.g., research excellence, contribution to society)” are more likely to benefit from incorporating a stewardship approach to performance management. If we keep up the pressure, if we call out the pointless appeasement of what Power has described as ‘rituals of verification’ (1997:1), if we undermine the assumptions, then perhaps managers will start to align complexity of mission with a more delicate implementation of performance management.
References
Franco-Santos, M., Rivera, P. and Bourne, M. (2014) Performance Management in UK Higher Education Institutions. London: Leadership Foundation for Higher Education.
Power, M. (1997) The Audit Society: Rituals of Verification. Oxford: Oxford University Press.