“REF 2014 cost almost £250 million,” Times Higher Education recently reported. This is the first official figure for the exercise, taken from the REF Accountability Review: Costs, benefits and burden published by HEFCE on July 13. While this is much lower than Robert Bowman’s “guesstimate” of £1bn (which I personally believe to be based on more realistic costings of staff time),[1] it is still over four times HEFCE’s previously announced costs for RAE2008 of £47m for Higher Education Institutions [HEIs] and £12m for HEFCE. This increase should raise eyebrows, since the REF promised to reduce the costs of the RAE for HEIs. We are “as committed to lightening the burden as we are to rigour in assessing quality,” then HEFCE Chief Executive David Eastwood assured the sector back in November 2007.

It would be nice to know why the costs of submitting to the REF have risen so astronomically. We might also ask whether this huge increase in REF costs for HEIs has delivered remotely commensurate benefits.

How much more did REF2014 cost than RAE2008?

The REF Accountability Review calculates the total cost of REF2014 at £246m, of which £14m fell on the funding bodies (HEFCE and its counterparts for Scotland, Wales, and Northern Ireland) and £232m on HEIs. Around £19m of the latter (8%) was for REF panelists’ time—a figure I suggest is either a serious underestimate or the best indication we could have that the REF is not the “rigorous” process of research evaluation it purports to be.[2]  This leaves £212m (92%) as the cost of submission. The review accepts an earlier Rand Europe estimate of the cost of preparing impact submissions as £55m, or 26% of the £212m (p. 6). “All other costs incurred by HEIs” totaled £157m (p. 1).

Believing the £47m figure for RAE2008 to be “a conservative estimate” (p. 17), the review revises it upward to £66m. Working with this new figure and discounting the cost of impact case studies (on the grounds that they were not required in RAE2008), the review concludes: “the cost of submitting to the last RAE was roughly 43% of the cost of submitting to the REF” (p. 2). Or to put it another way, submission costs for REF2014 were roughly 2.4 times those for RAE2008, without taking into account the added costs of impact submissions.

The brief of the review was to consider “the costs, benefits and burden for HEIs of submitting to the Research Excellence Framework (REF)” (p. 4). While one can see the logic of excluding the cost of impact submissions for purposes of comparison, the fact remains that HEIs did incur an additional £55m in real costs of preparing impact submissions, which were a mandatory element of the exercise. If impact is included in the calculation, as it should be, REF2014 submission costs come to around 3.2 times those of RAE2008. In other words, the REF cost HEIs more than three times as much as the last RAE.
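The arithmetic behind these comparisons can be set out explicitly. A minimal sketch (all figures in £m, taken from the review as quoted above; variable names are mine):

```python
# Headline figures from the REF Accountability Review (all in £m)
total_cost = 246
funding_bodies = 14        # HEFCE and its counterparts
hei_cost = 232             # the portion falling on HEIs
panelists = 19             # REF panelists' time
submission = 212           # HEIs' cost of submission (hei_cost minus panelists, rounded)
impact = 55                # Rand Europe estimate for impact submissions
non_impact = submission - impact   # 157: "all other costs incurred by HEIs"
rae2008 = 66               # the review's upward revision of the £47m RAE2008 figure

# The review's comparison: RAE2008 as a share of non-impact REF submission costs
print(round(rae2008 / non_impact, 2))   # 0.42 — the review's "roughly 43%"

# The same comparison the other way round
print(round(non_impact / rae2008, 1))   # 2.4x, excluding impact
print(round(submission / rae2008, 1))   # 3.2x, including impact
```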

Why did REF2014 cost so much more than RAE2008?

In its comparison of the costs of the REF and the RAE the review lists a number of major changes introduced for REF2014 including reduction of the number of Units of Assessment [UOAs], revisions of definitions of A and C category staff, and introduction of an environment template (p. 14, figure 3). The 20 HEIs surveyed indicated that some of these had resulted in a decrease in costs while others were cost-neutral or entailed a “moderate increase” (less than 20%). Only two changes are claimed to have incurred a “substantial increase” (more than 20%) in costs.

“Interviewees and survey respondents suggested REF was more costly than RAE,” the review reports, “mainly because of the inclusion of the strand to evaluate the non-academic impact of research” (p. 12, my emphasis). Nearly 70% of respondents reported a substantial increase and another 20% a moderate increase in costs due to impact (p. 13). However, the REF Accountability Review‘s own figures show that this perception is incorrect. Impact represented only 26% of HEIs’ £212m submission costs. After subtracting £55m for impact (and discounting REF panelists’ time) the cost of preparing submissions still rose from £66m to £157m, i.e. by approximately £91m, or 138%, between RAE2008 and REF2014.

The review also singles out “the strengthening of equality and diversity measures, in relation to individual staff circumstances” as a factor that “increased the total cost of submission for most HEIs” (p. 12). This is the only place in the report where an item is identified as “a disproportionately costly element of the whole process” (pp. 2, 21, my emphasis). But if anything it is the attention devoted to this factor in the review that is disproportionate (and potentially worrying insofar as the review suggests “simplification” of procedures for dealing with special circumstances, p. 3).

While the work of treating disabled, sick, and pregnant employees equitably may have been “cumbersome” for HEI managers (p. 2), dealing with special circumstances “took an average 11% of the total central management time devoted to REF” and “consumed around 1% of the effort on average” at UOA level (p. 21). This amounts to £6m (or 4%) of HEIs’ £157m non-impact submission costs—a drop in the ocean.

We are left, then, with an increase in HEI submission costs between RAE2008 and REF2014 of around £85m that is not attributable to changes in HEFCE’s formal submission requirements.  To explain this increase we need to look elsewhere.

The review divides submission costs between central management costs and costs at UOA level. Of £46m central costs (excluding impact), £44m (or 96%) was staff costs, including the costs of REF management teams (56%) and time spent by senior academics on steering committees (18%). UOA-level costs were substantially greater (£111m). Of these, “87% … can be attributed to UOA review groups and academic champions and to submitted and not submitted academic staff” and £8m to support staff (p. 7). Staff time was thus overwhelmingly the most significant submission cost for HEIs.

When we look at how this time was spent, the review says “the REF element on research outputs, which included time spent reviewing and negotiating the selection of staff and publications” was “the main cost driver at both central management level and UOA level” (p. 2). At central management level this “was the most time-consuming part of the REF submission” (p. 18), with outputs taking up 40% of the time devoted to the REF. At UOA level 55% of the time devoted to REF—”the largest proportion … by a very substantial margin”—was “spent on reviewing and negotiating the selection of staff and publications” (p. 19). For HEIs as a whole, “the estimated time spent on the output element as a proportion of the total time spent on REF activities is 45% (excluding impact)” (p. 17).

The conclusion is inescapable. The principal reason for the increased cost of REF2014 over RAE2008 was NOT impact, and still less special circumstances, but the added time spent on selecting staff and outputs for submission.

Why did selecting staff and outputs cost so much more in REF2014?

The review obliquely acknowledges this in its recognition that larger and/or more research-intensive HEIs devoted substantially more time to REF submission than others (p. 10) and that “several HEIs experienced submitting as particularly costly because preparing for the REF was organized as an iterative process. Some HEI’s ran two or three formal mock REFs, with the final [one] leading directly into the REF submission” (p. 27). For institutions that used them (which we can assume most research-intensive HEIs did), mock REFs were “a significant cost” (p. 2).

But the review nowhere explains why this element should have consumed so much more staff time in REF2014 than it did in RAE2008. The most likely reason for the increase lies in a factor the review does not even mention: HEFCE’s changes to the QR funding formula (which controls how money allocated on the basis of the REF is distributed) in 2010-11, which defunded 2* outputs and increased the value of 4* outputs relative to 3* from 7:3 to 3:1. At that point, in the words of Adam Tickell, who was then pro-VC for research at the University of Birmingham, universities “had no rational reason to submit people who haven’t got at least one 3* piece of work.” More importantly, they had an incentive to eliminate every 2* (or lower) output from their submissions because 2* outputs would lower their ranking without any compensatory gain in income. Mock REF processes were designed for this purpose.
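The incentive can be illustrated with a toy calculation. The star weights below mirror the post-2010-11 formula’s ratios (2* and below defunded, 4* worth three times 3*); the profiles and function names are purely illustrative, not HEFCE’s actual cash values:

```python
# Relative QR funding weights after HEFCE's 2010-11 changes:
# 2* and below defunded; 4* worth three times 3* (previously 7:3).
qr_weight = {4: 3, 3: 1, 2: 0, 1: 0, 0: 0}

def funding_units(profile):
    """Relative QR income for a list of output star ratings."""
    return sum(qr_weight[star] for star in profile)

def gpa(profile):
    """Grade point average of the kind used in REF league tables."""
    return sum(profile) / len(profile)

with_2star = [4, 3, 3, 2]   # hypothetical submission including a 2* output
without    = [4, 3, 3]      # the same submission with the 2* output culled

print(funding_units(with_2star), funding_units(without))        # 5 5: no income lost
print(round(gpa(with_2star), 2), round(gpa(without), 2))        # 3.0 3.33: GPA rises
```

Culling the 2* output costs the institution nothing in QR income but lifts its GPA, and hence its league-table position; hence the mock REFs.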

The single most significant driver of the threefold increase in costs between RAE2008 and REF2014 was not the introduction of impact or any other change to HEFCE’s rules for submission but competition between universities driven by HEFCE’s 2010-11 changes to the QR funding formula. The key issue here is less the amount of QR money HEIs receive from HEFCE than the prestige attached to ranking in the league tables derived from REF evaluations. A relatively small difference in an institution’s GPA can make a significant difference in its ranking.

The widespread gaming that has resulted has only served to further discredit the REF. Does anybody believe Cardiff University’s boast that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise that confirms our place as a world-leading university” (my emphasis), when Cardiff actually achieved that rank by entering only 62% of eligible staff in its submission? This percentage is lower than that of all but one of the 28 British universities listed in the top 200 in the 2014-15 Times Higher Education World University Rankings (in which Cardiff is ranked a respectable but hardly “world-leading” 201st-225th, or joint 29th among the British institutions). As I have shown elsewhere, this is a systematic pattern that profoundly distorts REF results.

HEFCE would no doubt say that it does not produce or endorse league tables produced on the basis of the REF.  But to act as if it therefore had no responsibility for taking into account the consequences of such rankings is disingenuous in the extreme.

This competition between the research-intensive universities is only likely to intensify given HEFCE’s further “tweaking” of the QR funding formula in February 2015, which changed the weighting of 4* relative to 3* outputs from 3:1 to 4:1.

Is the REF cost efficient?

The review defends the increase in costs between RAE2008 and REF2014 on the grounds that the total expenditure remains low relative to the amount of money that is allocated on its basis. We are reassured that REF costs amount to “less than 1%” of total public expenditure on research and “roughly 2.4% of the £10.2 billion in research funds expected to be distributed by the UK’s funding bodies” over the next six years (p. 1). This is spin.

The ratio of REF costs to total expenditures on research funding is irrelevant since Research Councils (who distribute a larger portion of the overall public research funding budget) allocate grants to individuals and teams, not HEIs, through a competitive process that has nothing to do with the REF. The ratio of REF costs to QR funding allocated through HEFCE and the other funding bodies is more relevant but the figure given is inaccurate, because QR expenditures that are not allocated through the REF (e.g. the charity support fund and business research element) are included in the 2.4% calculation. Once these are excluded the figure rises to around 3.3%. By comparison, “the funding bodies estimated the costs of the 2008 RAE in England to be around 0.5% of the value of public research funding that was subsequently allocated with reference to its results” (p. 5). If this is supposed to be a measure of cost-efficiency, REF2014 scores very much worse than RAE2008.
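A back-of-envelope check bears this out. The £7.4bn REF-allocated QR figure below is inferred from the review’s own percentages, not a number the review states:

```python
ref_cost = 246               # total cost of REF2014, £m
funds_six_years = 10_200     # all research funds distributed by funding bodies, £m
ref_allocated_qr = 7_400     # QR actually allocated via REF results, £m (inferred)

print(round(100 * ref_cost / funds_six_years, 1))    # 2.4 — the review's figure
print(round(100 * ref_cost / ref_allocated_qr, 1))   # 3.3 — excluding non-REF QR
# RAE2008 benchmark: ~0.5% of the funding subsequently allocated on its results
```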

“The cost and burden of the REF,” says the review, “should be the minimum possible to deliver a robust and defensible process.” “Changes new to REF 2014,” it adds, “have been adopted where it is judged they can bring demonstrable improvements which outweigh the cost of implementing them” (p. 5, my emphasis).

The review does attempt to make the case that the benefits of including impact in the REF exceeded the additional costs of measuring it, noting that “it yielded tremendous insight into each institution’s wider social and economic achievements and was widely welcomed as both a platform for marketing and internal learning” (pp. 2-3).[3]  Otherwise the major benefits claimed for the REF—reputational dividend, provision of information that can be used for performance management and forward planning, etc.—are exactly the same as those previously claimed for the RAE.

It is notable that the review nowhere attempts to justify the single most important factor in the additional costs of REF2014 as compared with RAE2008, which was hugely greater time spent on staff selection driven by competition between HEIs.

Many have argued that the human costs of this competition are inordinately high (the review confesses that it “does not include an estimate of non-time related burdens on staff, such as the stress on staff arising from whether they would be selected for the REF,” p. 4). What is clear from the REF Accountability Review is that there is an exorbitant financial cost as well—both in absolute terms and in comparison to the RAE.

When considering the cost-effectiveness of the exercise, we would do well to remember that the considerable sums of money currently devoted to paying academics to sit on committees to decide which of their colleagues should be excluded from the REF, in the interest of securing their university a marginal (and in many cases misleading) advantage in the league tables, could be spent in the classroom, the library, and the lab.  The QR funding formula has set up a classic prisoner’s dilemma, in which what may appear to be “rational” behavior for ambitious research-intensive HEIs has increasingly toxic consequences for the system as a whole.


[1] The review relies on figures for staff time spent on the REF provided by HEI research managers. It specifically asks managers to distinguish “between REF-related costs and normal (“business as usual”) quality assurance and quality management arrangements for research” (p. 11) and exclude the latter from their responses. I believe this distinction untenable insofar as UK HEIs’ university-level “arrangements” for “quality assurance and quality management” in research only take the forms they do because they have evolved under the RAE/REF regime. Where national research assessment regimes do not exist (like the US), central management and monitoring of research is not “normal” and “business as usual” looks very different. For example, it is far less common to find “research directors” at department level.

[2] The review’s figures of 934 academic assessors, each spending 533 hours (or 71 days) to assess 191,950 outputs, yield an average time of 2.59 hours available to read each output. From this we must deduct (1) time spent reading the 7000 impact case studies (avg. 7.5 per assessor) and all other elements of submissions (environment templates, etc.), and (2) time spent attending REF panel meetings. I would suggest this brings the time available for assessing each output down to under two hours. If we further assume that each output is read by a minimum of two assessors, individual assessors will spend, on average, under an hour reading each output. I would argue—especially in the case of large research monographs in the humanities and social sciences—that this is not enough to deliver the informed, “robust” assessment HEFCE claims (even assuming individual outputs are read by panelists with expertise in the specific area, which will often not be the case). Anyone reviewing a journal article submission, ms for a book publisher, or research grant application would expect to spend a good deal more time than this. In this context, the figure given in the review for the higher ratio of Research Council costs relative to the funds allocated on their basis (6%) may not indicate the superior cost efficiency of the REF, as the review implies (p. 1), so much as the relative lack of rigor of its evaluations of outputs.
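The reading-time arithmetic in this note can be reproduced as follows. The deductions for case studies and meetings are my own assumptions, chosen only to show the order of magnitude:

```python
assessors = 934
hours_per_assessor = 533     # ~71 working days each
outputs = 191_950
case_studies = 7_000         # impact case studies (avg. 7.5 per assessor)

total_hours = assessors * hours_per_assessor
print(round(total_hours / outputs, 2))     # 2.59 hours per output before deductions

# Assumed deductions (illustrative): 8 hours per case study read, plus
# 80 hours per assessor for panel meetings and other submission elements.
remaining = total_hours - case_studies * 8 - assessors * 80
print(round(remaining / outputs, 2))       # ~1.91: under two hours per output
print(round(remaining / outputs / 2, 2))   # ~0.96: under an hour if each is read twice
```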

[3] I have serious doubts as to the ways in which these perceived advantages (from the point of view of university managers) may come to drive research agendas at the expense of basic research, but that is not relevant to the present argument.

As a result of my posts on this blog last year relating to Britain’s Research Excellence Framework (see especially here and here), I was invited to write a short book to inaugurate the new “Sage Swifts” series.

Rank Hypocrisies: The Insult of the REF will be published on December 3, 2014: a couple of weeks before HEFCE is due to publish the REF results.

But today’s announcement that HEFCE is actively “exploring the benefits and challenges of expanding … the Research Excellence Framework (REF), on an international basis” with a view to “an extension of the assessment to incorporate submissions from universities overseas” suggests some advance publicity might not be untimely.  For the REF should come with a health warning.

What I find most chilling in today’s HEFCE announcement is the bald assertion (in the accompanying survey) that “The UK’s research assessment system has a positive international reputation, built on a methodology developed over more than 20 years.”

Rank Hypocrisies shows on the contrary that the procedures used to evaluate outputs by Britain’s REF panels make a mockery of peer review as understood within the international academic community.  Among the issues discussed are the narrow disciplinary remit of REF panels and their inability to evaluate interdisciplinary research, the risks of replication of entrenched academic hierarchies and networks inherent in HEFCE’s procedures for appointment of panel members, the utterly unrealistic volume of work expected of panelists, the perversity of excluding all external indicators of quality from many assessments, and the lack of competence of REF panels to provide sufficient diversity and depth of expertise to evaluate the outputs that fall under their remit.

The REF is a system in which overburdened assessors assign vaguely defined grades in fields that are frequently not their own while (within many panels) ignoring all external indicators of the academic influence of the publications they are appraising, then shred all records of their deliberations.  That HEFCE should now be seeking to extend such a “methodology” beyond Britain’s shores is risible.


Derek Sayer’s book is essential reading for all university researchers and research policy makers. It discusses the waste, biases and pointlessness of Britain’s Research Excellence Framework (REF), and its misuse by universities. The book is highly readable, astute, sharply analytical and very intelligent. It paints a devastating portrait of a scheme that is useless for advancing research and that does no better job at ranking research performance than do the global indexes but does so for a huge cost in time, money, duplication, and irritation. Anyone interested in research ranking, assessment, and the contemporary condition of the universities should read this book.

Peter Murphy, Professor of Arts and Society, James Cook University

Rank Hypocrisies offers a compellingly convincing critique of the research auditing exercise to which university institutions have become subject. Derek Sayer lays bare the contradictions involved in the REF and provides a forensic analysis of the problems and inconsistencies inherent in the exercise as it is currently constituted. A must read for all university academic staff and the fast multiplying cadre of higher education managers and, in particular, government ministers and civil servants in the Department of Business Innovation and Skills.

Barry Smart, Professor of Sociology, University of Portsmouth

Academics across the world have come to see the REF – and its RAE predecessor – as an arrogant attempt to raise national research standards that has resulted in a variety of self-inflicted wounds to UK higher education. Derek Sayer is the Thucydides of this situation. A former head of the Lancaster history department, he fell on his sword trying to deal with a university that behaved in an increasingly irrational manner as it tried to game a system that is fundamentally corrupt in both its conception and execution. Rank Hypocrisies is more than a cri de coeur. It is the best documented diagnosis of a regime that has distorted the idea of peer review beyond recognition. Only someone with the clear normative focus of a former insider could have written this work. Thucydides would be proud.

Steve Fuller, Auguste Comte Chair in Social Epistemology, Warwick University

Sayer makes a compelling argument that the Research Excellence Framework is not only expensive and divisive, but is also deeply flawed as an evaluation exercise. Rank Hypocrisies is a rigorous and scholarly evaluation of the REF, yet written in a lively and engaging style that makes it highly readable.

Dorothy Bishop, Professor of Developmental Neuropsychology and Wellcome Principal Research Fellow, University of Oxford

The REF is right out of Havel’s and Kundera’s Eastern Europe: a state-administered exercise to rank academic research like hotel chains – 2 star, 3 star – dependent on the active collaboration of the UK professoriate. In crystalline text steeped in cold rage, Sayer takes aim at the REF’s central claim, that it is a legitimate process of expert peer review. He provides a short history of the RAE/REF. He critiques university and national-level REF processes against actual practices of scholarly review as found in academic journals, university presses, and North American tenure procedures. His analysis is damning. If the REF fails as scholarly review, how can academics and universities continue to participate? And how can government use its rankings as a basis for public policy?

Tarak Barkawi, Reader in the Department of International Relations, London School of Economics

  More details of the book (which will be available in hardback and electronic formats) may be found here.

Professor Paolo Palladino, whose (so far unanswered) Open Letter to the Vice-Chancellor and management of Lancaster University following his exclusion from the 2014 REF was reported in my earlier post Kafkarna continues: REF gloves off at Lancaster University, has now written a long piece on the UCU RefWatch website on “Why the REF is bad for the very idea of the university.”  

This is how it ends:

“I have asked … for formal confirmation that I am not failing to meet any of my responsibilities as a member of the Department of History. I have also asked for confirmation that, in future years, the balance of my teaching, research and administration, as reflected in the workload allocation model, will not be outwith departmental norms, and that I will continue to benefit from the mechanisms within the Department of History and the Faculty of Arts and Social Sciences to support the engagement of individual staff in academic research and bids for external funding. No formal acknowledgment or response to the request has yet been received. I have spoken to my Head of Department about this and all that he could do was to smile knowingly about the absurdity of our predicament. In the meantime, my sense is that what will happen next, and, in some sense, this is already happening within the research councils and related charities, is that interdisciplinary research will be conflated evermore with multidisciplinary research, so that collaboration between academics in different disciplines will be regarded as delivering ‘interdisciplinary’ inquiry. There are far from insignificant costs to this semantic transformation because the individual scholar thus ceases to be the site of interdisciplinary inquiry and testing of the foundations upon which each discipline rests. Exercises such as REF are deceptive because what they reward is that which is familiar and conforms to the most widely shared expectations of what counts as knowledge, not that which challenges us to think deeply about who we are and what we do. In so doing, these exercises fail to live up to the very idea of the university and its one unflinching command to each one of us, to ‘dare to think’. 
I leave it to you to consider what might be the long-term implications of the failure to encourage such critical reflection among those students we are called upon to prepare for the challenge of creating a more just and more humane society.”

The full article can be accessed here.

I didn’t expect to be blogging again on this topic so soon, but …

We are still a week away from the date when universities have to upload their REF submissions to the HEFCE website, and Lancaster University seems already to be taking steps to “manage” the research of those staff members whose work it has deemed of insufficient quality to include in the submission—despite widespread criticism in the national educational press and elsewhere about the processes used to judge quality in “internal REF” exercises.

By way of background, all Lancaster University employees have for some years been required to undergo a formal annual “Performance and Development Review” (PDR), whose objectives the university defines as follows:

Throughout the year staff and their managers should be engaged in regular discussions about work, involving ongoing and constructive feedback and coaching.  The formal Performance and Development Review (PDR) meeting, held annually, is a focused discussion that draws together the threads of these conversations, provides the opportunity to reflect, clarify expectations and standards and agree current and future performance and development needs, leading to production of appropriate plans.

In most academic departments (unlike in other units) PDR reviewers include a number of senior faculty, because it is recognized that “it may not be feasible for the Head of Department alone to conduct all the reviews.”  In the History Department in recent years, PDR responsibilities have been divided among the professoriate, with some attempt being made to match areas of research interest wherever possible between the reviewers and the reviewed.

Apparently heads of department within the Faculty of Arts and Social Sciences (FASS) have now been told to take personal responsibility for the PDRs of all staff not returned in REF 2014, with the aim of ensuring that they will be returned in REF 2020.

Now this is curious.  Because earlier this summer, following discussions with the UCU, management posted the following statement on the university website:

“Career progression of staff will not be affected and there will not be any contractual changes or instigation of formal performance management procedures solely on the basis of not being submitted for REF2014” (my emphasis).

I have pointed out before that this falls a long way short of an absolute guarantee that the evaluations of individuals’ research outputs arrived at through the “Mock REF” process would not be used to their detriment by the university in other contexts in the future.   I take no pleasure in being proved right.

Notwithstanding the university’s assurances, those excluded from the REF are now to be singled out as a group and treated differently from their colleagues solely as a result of that exclusion.  Unlike their colleagues, whose PDRs will continue to be of a more collegial kind, they will be subject for at least the next six years to formalized and ongoing annual surveillance by their line manager, in an objectives-defining and monitoring process whose overriding goal has become production of work that can be included in the 2020 REF.

These are undeniably “formal performance management procedures” in all but name, which clearly violate the spirit—if not, indeed, the letter—of the agreements arrived at with the UCU and publicly stated on the university’s website.  They also threaten to undermine a balance that has always been integral to the whole conception of the PDR at Lancaster, which is also set out on the HR website:

The context for discussing and agreeing individual objectives for the coming period will be the goals and targets of the department and the wider context of the faculty/divisional plan …  At the same time, many individuals will have longer-term personal goals and career aspirations.  These also will form part of the discussion and careful consideration should be given to the opportunities for matching these with department and university needs.

My fear is that with inclusion in the 2020 REF being defined in advance as the overriding goal, which is to be enforced by the head of department’s personal scrutiny of progress, the latter will be a dead letter.  Individuals’ career goals will no longer be reconciled with but sacrificed to the targets of the institution—targets that in the research arena have been redefined exclusively in REF terms.

This is an extremely ominous development, especially for colleagues engaged in interdisciplinary research—a group that has been disproportionately excluded from Lancaster’s History submission, whether deliberately or otherwise.  For it is likely the interdisciplinary element in their research that made their outputs so “difficult to assess,” “unconventional,” or “risky” in the Mock REF, leading to their eventual exclusion.  By the same logic, it will be exactly such “risks” that heads of department will be pressured and expected to work to minimize in the future.

I need hardly spell out the dangers this “performance management” poses to academic freedom, understood as the freedom of faculty members to decide what to research and how.   Some of my colleagues seem de facto to be losing it already.   It will also, as I have said before, make Lancaster far less distinctive as a research university, whose hallmarks in the past have included its commitment to and support of interdisciplinary scholarship.




Unless something really dramatic happens this will be my last post for a while on this topic.  I wanted, however, to write one last piece that pulled together the key arguments I have been making over the last few weeks with regard to Lancaster University’s procedures for selecting staff for its 2014 REF submission, and incorporated details that were not known to me at the time of my first long post on the issue, “The Kafkan World of the British REF.” An abundance of evidence is now surfacing in the press, UCU surveys, and elsewhere, to suggest that Lancaster has by no means been alone in playing fast and loose with HEFCE’s guidelines on transparency, accountability, and inclusiveness.  I hope the detailed critique that I have developed here will now contribute to a wider public debate about the legacy and lessons of what may prove to be the most contentious and divisive UK national research assessment exercise to date, and be of help to colleagues who are challenging similar injustices elsewhere.  


1   Talking the talk

On October 16, 2013—the same day, by coincidence, as I posted “Getting nasty: Lancaster demands I censor my REF posts”—Nature ran a piece on the REF and other national research audits prominently featuring Lancaster University.

“Two years ago,” the article began, “academics at Lancaster University … found themselves in the uncomfortable position of being graded. They each had to submit the four best pieces of research that they had published in the previous few years, and then wait for months as small panels of colleagues—each containing at least one person from outside the university—judged the quality of the work.”

“The idea of the drill,” said Professor Trevor McMillan, Lancaster’s Pro Vice-Chancellor (Research), was “‘to identify areas where we could help people develop their profiles.’” Help for those who “failed their evaluations” included “mentoring from a more experienced colleague, an early start on an upcoming sabbatical or a temporary break from teaching duties.”

It would be churlish to deny that some of these things, including, in the case of History, provision of teaching relief, did occur, benefiting both individuals and the institution—though in two cases known to me colleagues were given support to help finish work only to be eventually excluded from the REF on the basis of internal judgments of “quality.”  But whatever its initial purpose, the main outcome of the Mock REF (or Internal REF, as it later officially became known) was quite other than identification of needs and provision of support.

Far from being merely a preparation for “the real thing,” as the Nature article implies, Lancaster’s Mock REF became the primary mechanism through which many individuals were excluded from it.   People involved—including myself, at that time as Head of the History Department—were not always fully aware at the outset of the extent to which the Mock REF would become, for those colleagues, the real REF.   We thought we were soliciting evaluations of outputs, many of which were still in draft form, in order to help them “develop their profiles,” not that those evaluations would become the starting point for a wholesale cull.

In my appeal against inclusion in the REF, I argued that because Professor McMillan’s “small panels of colleagues” de facto played an important part in staff selection for the REF, their membership and modus operandi should have been made public as required by HEFCE’s REF guidelines.

Responding to that appeal, my HoD, having consulted with the Faculty Associate Dean for Research, assured me that the role of these panels was “informational.”

Was Professor McMillan then perhaps misreported?

2  Walking the walk

Assuming he was not, just which colleagues sat on these “small panels,” and how did the panels work?

It is difficult to be certain of anything in the murk of Lancaster’s Mock REF, because the details that lie behind the bland generalities of the university’s HEFCE-approved Code of Practice have been walled within a fortress of “confidentiality”—and the devil, as ever, lies in the details.  This means that, through no fault of mine, there may be occasional inaccuracies in what follows.  I welcome, and will happily publish, any corrections of fact the university makes.

I suspect that the composition and modus operandi of these panels differed considerably across departments and faculties.  Even within the Faculty of Arts and Social Sciences (FASS), rumor has it that some departments used procedures that resulted in more inclusive outcomes than History—consulting individuals over suitable external reviewers for their work, for example, or discarding obviously aberrant scores.  I have no difficulty with such differences per se, at least insofar as they genuinely reflect varying disciplinary expectations.  But Lancaster’s general lack of transparency over the membership, terms of reference, and operating criteria of these panels makes it impossible to ascertain whether colleagues were in fact treated equitably across the university or not.

As regards the History submission, from the university’s responses to my and others’ appeals—that is, those I have not been explicitly forbidden from quoting, summarizing, or paraphrasing—we can reasonably infer that the staff selection process in the “small panel” overseeing History’s REF submission looked something like this.[1]

Stage 1 Everybody’s outputs were read by one and the same “critical friend,” who was not a specialist in most of their fields of research.  If they passed that test with a GPA of 2.75 or above,[2] they were included in the REF submission without further review.   In other words: much of what the university regards as uncontentiously 3* or 4* in the History submission has been certified as such by a single reviewer, who is a non-specialist in many of the areas concerned.

Stage 2 Where the critical friend was unsure of how to grade an output, it was sent to a specialist reviewer external to the department, though not always to the university.  In many—if not in most or all—instances, it was deemed sufficient to approach only one specialist reviewer.  Outputs were also re-reviewed where the net result of the critical friend’s reading was to leave somebody with an aggregate score on or around the 2.75 borderline, even if the critical friend was confident of his own evaluations of the outputs concerned.

Whatever the circumstances that led to outputs being sent out for external review, the external reviewer’s score seems always to have replaced that of the critical friend—no matter how wide the discrepancy.  Options of averaging the scores or going to a third reviewer seem not to have been considered.

This raises an interesting conundrum.  If in every case where an output went out for specialist review the specialist’s score was preferred to that of the critical friend, does not this shake the University’s confidence in its reliance on the critical friend’s judgment in all other cases, irrespective of area?  Or to put it another way, if specialist review delivered what were considered to be more reliable judgments in these cases, why was it not routinely used for all outputs?

There were some further aspects of History’s use of specialist reviewers that deserve to be highlighted.  (1) The group that chose these reviewers was unrepresentative of the Department’s research, because it did not contain a single modern historian.[3]  (2) Nor did it include any historian whose work is interdisciplinary—an important consideration, given the cross-disciplinary background of some of the department’s members.  (3) It did not consult with those whose work was being assessed over appropriate reviewers, even when that work lay far outside its collective expertise.  (4) Finally, reviewers—even when drawn from other departments in the university—seem at times to have known the identity of authors.   In sharp contrast, not only did reviewers remain anonymous; their comments on individual outputs were also kept confidential.

Stage 3 If the result of Stage 2 was that your aggregate score dropped below 2.75 the University informed you that your research was not of sufficient quality to merit inclusion in the REF.   There is a statement on the university website that “Career progression of staff will not be affected and there will not be any contractual changes or instigation of formal performance management procedures solely on the basis of not being submitted for REF2014” (my emphasis), but this falls a long way short of an absolute guarantee that the evaluations of individuals’ research outputs arrived at through this Mock REF process could not be used to their detriment by the university in other contexts in the future.
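Reduced to its arithmetic, the three-stage process described above is brutally simple.  The sketch below is my own reconstruction, not the university’s code or documentation: the function and variable names are illustrative, and only the 2.75 threshold and the replace-rather-than-average rule for external re-reviews come from the account given here.

```python
import math

THRESHOLD = 2.75  # required grade-point average; as noted, never officially published


def gpa(scores):
    """Mean of the output scores, each on the 0-4 'star' scale."""
    return sum(scores) / len(scores)


def included_in_ref(cf_scores, external_rereads=None):
    """Stage 1: start from the critical friend's scores for the (typically four)
    outputs.  Stage 2: where an output was sent out, the external reviewer's
    score simply replaces the critical friend's, however wide the discrepancy.
    Stage 3: compare the resulting GPA with the threshold."""
    scores = list(cf_scores)
    for output_index, new_score in (external_rereads or {}).items():
        scores[output_index] = new_score  # replaced outright, never averaged
    return gpa(scores) >= THRESHOLD
```

The borderline case is stark: scores of 3, 3, 3, 2 yield a GPA of exactly 2.75, so a single re-read by a single anonymous reviewer that turns one 3 into a 2 drops the average to 2.5 and the individual out of the submission.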

These procedures are a caricature of peer review as understood internationally and throughout the profession, and as practiced, inter alia, by academic journals and presses, in competitions for research funding, and in tenure and promotion appraisals.  Peer review normally involves multiple assessments by specialist reviewers, and wherever possible employs double-blind review in order to preserve authors’ as well as reviewers’ anonymity.  In research funding competitions external reviewers remain anonymous, but their comments are usually made available to applicants (allowing challenge, for example, on grounds of bias or factual error).[4]  Likewise, the composition of assessment panels is generally made public—as it is in the case of the REF panels.  In tenure and promotion cases promises are usually made to reviewers “to keep your response confidential to the extent permitted by law,”[5] but candidates are protected against bias—or anomalous judgments—by the use of several reviewers, and a demand, in most cases, for highly detailed reports.

From the point of view of ensuring a multiplicity of evaluations from qualified assessors, Lancaster’s Internal REF procedures also compare unfavorably with its own protocols for promotions (multiple assessors, some of whom are nominated by the candidate) and the award of undergraduate and higher degrees (anonymity, internal moderation of scripts, use of external examiners as “referees” for Bachelor degrees; specialized external examiner for PhD).  As I observed before, it is scandalous that the university provides fewer safeguards for the academic reputations of its staff—for whom it has a statutory duty of care—than it does for the essays and examination scripts of those they teach.

3   The casualties

There was a right of appeal in Lancaster against exclusion from the REF, but only on grounds of procedure, discrimination, or new information.  Neither judgments nor scores could be challenged, on the entertaining ground that they were “subjective” (Code of Practice).  I have argued elsewhere that this appeal process failed to meet HEFCE’s requirement that “The individuals that handle appeals should be independent of the decisions about selecting staff” (HEFCE Assessment Framework, para 227).

It was also difficult to reconcile with HEFCE’s stipulation that “Appropriate and timely procedures should be put in place to inform staff who are not selected of the reasons behind the decision, and for appeals. Appeals procedures should allow members of staff to appeal after they have received this feedback, and for that appeal to be considered by the HEI before the final selection is made” (Assessment Framework, para 227).  Staff members were given just five days from notification of exclusion to lodge an appeal, and faculty Deans were still adjudicating appeals right up to the REF census day for “staff eligible for selection,” which fell on October 31, 2013.   It is very hard to see how this timetable could realistically have allowed for reinstatement of excluded individuals in submissions, especially if their reinstatement would have meant a last-minute increase in the number of impact case studies required within a UoA.[6]

All three appeals known to me in History (other colleagues were certainly also excluded from the REF, but I don’t yet know how many or what proportion appealed) were turned down.   One of these appeals was that of the scholar who was excluded when his Past and Present paper was re-graded, whose case I discussed on October 5 under the title Update from Wonderland.  The conflict between the international professional expectations of peer review and the “right to manage” as management see fit was especially stark in this instance, given the regard in which P & P is held within the profession.

Commenting on the case, another blogger—a well-published and widely respected Professor of Medieval History—went to the core of what is at issue here:

Past and Present has one of the most rigorous refereeing systems in existence.  An article has to go through the hands of a lot of smart people before it is accepted and appears …  A volume accepted for a prestigious series like, say, CUP’s Studies in Medieval Life and Thought similarly has to get past a number of (more or less) critical readers.  There are obvious reasons why prestigious journals and series don’t publish rubbish, for their own sakes.  They don’t always get it right, for sure, but whatever comes out at least has the merit of scholarly acceptance at a high level.

As a result of that I always assumed that the REF would temper its own views according to this.  Thus, whatever one might think of a piece, the very fact of its appearance in, say, P&P would itself argue for a de facto level 3 at least of significance.  I still like to hope that this will be the case with the actual panels.  The grading of such an article at 2 strikes me as an example of monumental egotism—that this person’s view is so much better than that of the six leading scholars who read it and accepted it for publication in the country’s leading academic historical journal.  That level of egotism and arrogance disgusts me.

Professor Paolo Palladino, whose Open Letter on his exclusion from the REF I also reported earlier, declined to appeal on the grounds that to do so would only help legitimate a process that bore more resemblance to a Stalinist show trial—an exercise in humiliation—than to a bona fide research evaluation.  What makes his case particularly piquant, however, is that in this instance the principles of peer review over which the University is riding roughshod are its very own.

Professor Palladino was promoted to his present rank in 2010—that is to say, within the 2014 REF census period.  Some of the papers he submitted in his REF portfolio indeed formed part of his CV at the time of his promotion.  As with all promotions to Professor at Lancaster, his file was sent out to a minimum of six referees (chosen from eight, of which he could nominate two).

The University requires that in all such cases:

Referees should be eminent and independent and chosen with care: their departments should be of at least comparable standing with that of the candidate; they must be able to comment on the national or international reputation of the candidate; they will normally be of Professorial (or equivalent) level and enjoy national or international standing within the candidate’s subject area.

The expectation is that normally all referees will be of international standing and able to comment on the international reputation of the candidate and that normally two will work outside the UK.

I suggest that this process is likely to have delivered a far more informed, as well as much more reliable, verdict on the quality of Professor Palladino’s research outputs than the “Internal REF” procedures described above.  Certainly it satisfied the university Chairs and Readerships Committee, which is not known for its laxity, back in 2010.  Yet for “strategic” reasons, three years later, the university chose to ignore what it had on file, instead maligning Professor Palladino’s intellectual reputation and possibly jeopardizing his career.

Each of these cases is individual.  It is difficult to ignore the fact, however, that what these scholars have in common with each other (and a third colleague in History, whose exclusion from the REF I cannot discuss here because it is still sub judice) is a commitment to interdisciplinary research and a sustained engagement with theory that is, shall we say, unusual in more traditional areas of the discipline of history.  Professor Palladino made the case for the inhospitality of the REF to interdisciplinary scholarship in his Open Letter, and I need not repeat his arguments here.

I do not believe that it is simple coincidence that interdisciplinary scholarship has been so disproportionately penalized by History’s evaluative procedures at Lancaster.  It is exactly the outcome I would expect such a process to deliver.

I would add, however, that in my view the threat to interdisciplinarity is but the most obvious symptom of what may turn out to be the supreme irony of the REF, which is its threat to any cutting-edge, blue skies, or innovative research because of the “risks” that will always be entailed in being “unconventional.”

Peter Scott put it very well this week in the Guardian:

“These days, universities’ main objective is to achieve better REF grades, not to produce excellent science and scholarship. This has become a subsidiary goal that only matters to the extent that it delivers top grades. Research is reduced to what counts for the REF. Four “outputs” over five years need to be submitted, so the temptation to recycle rather than create is very strong. Big ideas don’t come to annual order. Only the genius or the fool would dare to submit many fewer, even if their research office agreed. REF outputs also need to be ground-breaking, not world-shattering. To be too ahead of the curve of received wisdom is risky, especially outside the “objective” sciences—ideas that are too risky can be dismissed as silly. The REF is designed to pick present, not future winners.

The research assessment exercise started life as a simple measurement tool. It has grown into something completely different, a powerful and often perverse driver of academic behaviour. We need to get back to first principles, and design an assessment system that is at once simpler and more open.”


4   Legacies—where next?

In the interview in Nature quoted earlier, Professor McMillan reassured readers that “[m]ost academics at Lancaster saw the mock REF as little more than a ‘mildly annoying bit of bureaucracy.'”  By the time the article appeared he would have known that some Lancaster academics had a different view of the matter.  Especially those who had “failed their evaluations.”

These issues, of course, matter well beyond Lancaster.  By now, the 2014 REF is being widely debated not only in the blogosphere and on social media but also in the Guardian and Times Higher Education.  My own appeal against inclusion in the REF was the subject of a recent article in THE.  Contrary to what Professor McMillan’s tone would lead us to expect, much of this discussion has been bitter, angry, and disappointed.  One commentator on Professor Scott’s article went so far as to compare universities’ REF gamesmanship with bankers fixing the LIBOR rate, and suggested that—this being public money—”some Vice-Chancellors need to be sacked (and ideally, put in jail) in order to establish the principle that this kind of systematic manipulation is not an acceptable response to a basically sensible performance measurement exercise.”

This is no longer a debate that can be swept under the carpet with assurances that all is for the best in the best of all possible worlds.  Departments like mine—and I have no doubt that there are many across disciplines and institutions up and down the land—have been left divided and demoralized, and the damage will last for years to come.  Nobody trusts anybody any more.  In pursuit of their short-term “strategic” objectives of higher rankings and more research income, university senior managers and their more or less enthusiastic underlings have squandered far more precious assets, wantonly jeopardizing individual careers and undermining the intellectual and moral foundations of academic life.

I believe HEFCE should conduct a full inquiry as to what went wrong in REF2014 and punish those responsible, but that is a matter for another post.

Asked by a reporter for Times Higher Education to comment on the arguments I have presented over the last few weeks on this blog, an (anonymous) spokeswoman for Lancaster University meanwhile continued to insist: “We are confident that we are making well-informed judgments as part of a careful decision-making process, which includes internal and external peer review.”

Well, they shouldn’t be.  They really, really shouldn’t be.  Not if the university expects its “two central strategic goals—to establish Lancaster as a global university and to strengthen its national position, which are reflected in being ranked in the top 100 in the world and in the top 10 in the UK”—to be taken seriously anywhere in the academic world, whatever the outcome of REF 2014.


The emperor marched in the procession under the beautiful canopy, and all who saw him in the street and out of the windows exclaimed: “Indeed, the emperor’s new suit is incomparable! What a long train he has! How well it fits him!” Nobody wished to let others know he saw nothing, for then he would have been unfit for his office or too stupid. Never emperor’s clothes were more admired.

“But he has nothing on at all,” said a little child at last. “Good heavens! listen to the voice of an innocent child,” said the father, and one whispered to the other what the child had said. “But he has nothing on at all,” cried at last the whole people. That made a deep impression upon the emperor, for it seemed to him that they were right; but he thought to himself, “Now I must bear up to the end.” And the chamberlains walked with still greater dignity, as if they carried the train which did not exist. 

Hans Christian Andersen, “The Emperor’s New Suit” (1837)

[1] Used with the appellants’ knowledge and permission.

[2] As far as I know this figure has never been officially published.

[3] The group comprised the History Research Director (early modern historian), the Head of Department (medieval historian), the Associate Dean (Research) (Religious Studies), and the Dean of the Faculty of Arts and Social Sciences (Linguistics).  The History Department currently has 24 full-time academic staff whose contractual duties include research.   Of these, 1 works on ancient history, 8 on medieval and/or early modern history, and 15 mainly or exclusively on modern history.

[4] I once successfully challenged a decision on a grant application to the Social Sciences and Humanities Research Council of Canada on this basis, where all four appraisers’ reports praised my proposal highly and recommended that it be fully funded but the committee scored it too low for funding.  SSHRC (for whom I have many times acted as a reviewer) introduced a new rule on the back of this case requiring that where panels reject the clear recommendations of a majority of assessors, they have to provide a written justification for doing so.

[5] I quote from the instructions I was given when doing a recent tenure/promotion review (as one assessor among several) for Harvard University.

[6] “Impact”—a new element in the 2014 REF by comparison with previous RAEs—accounts for 20% of every assessment.  Impact is measured according to case studies, and the number of case studies required is a function of the number of staff submitted.  Where a UoA sees itself as weak on impact there is an obvious temptation to reduce staff numbers submitted correspondingly.  Impact case studies have proved to be enormously time-consuming; to imagine that one could be produced out of a hat to accommodate a rise in UoA numbers as a result of successful appeals strains credulity.  Faculty Deans—the last court of appeal at Lancaster—are well aware of these structural pressures.
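The mechanics behind this temptation can be made concrete.  The banding below follows my recollection of HEFCE’s published requirements for REF2014 (two case studies for submissions of up to 14.99 FTE, with one more for each additional ten FTE); treat the exact band edges as an assumption rather than gospel, and the function name as my own illustration.

```python
import math

def case_studies_required(fte):
    """Approximate REF2014 rule as I recall it: submissions of up to 14.99
    Category A FTE require 2 impact case studies, with one further case
    study required for each additional 10 FTE submitted."""
    if fte <= 14.99:
        return 2
    return 2 + math.ceil((fte - 14.99) / 10)
```

On these (assumed) bands, trimming a UoA from, say, 15.2 FTE down to 14.8 removes an entire case study from the requirement—exactly the structural pressure on Faculty Deans described above.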

1 gag verb

: to put something (such as a piece of cloth) into or over a person’s mouth in order to prevent that person from speaking, calling for help, etc.

: to prevent (someone) from speaking freely or expressing opinions

Merriam-Webster On-line Dictionary


For those who read my earlier post Lancaster REF appeal: Castle closed, reasons confidential, I have disappointing news.  HR has now responded to my question “May I publish the Dean’s report on my case in full (with all names removed), which would be my preference?  If not, may I quote it?  Paraphrase or summarize its main arguments?  Refer to it at all?  I have always been in favor of presenting all sides of an argument.”

The answer was that I am permitted to report the fact that “my appeal has been rejected … but nothing more.”

2 gag noun

: something said or done to make people laugh

: something done as a playful trick

Meantime an anonymous spokeswoman for Lancaster University has reassured readers of Times Higher Education that: “We are confident that we are making well-informed judgments [on selection of staff for the 2014 REF] as part of a careful decision-making process, which includes internal and external peer review.”

3 gag verb

: to vomit or feel as if you are about to vomit : to feel as if what is in your stomach is going to come up into your mouth



Today—sent by password-protected FTP—I received the result of my appeal against inclusion in Lancaster University’s 2014 REF submission.  The original appeal, for those who have not seen it, is available here.

I had intended to post the report of the Dean of Another Faculty on this blog, together with my own comments.  But since both the letter I received from HR notifying me of the outcome of my appeal and the text of the review of my case are prominently marked Confidential, I do not believe I can do so without putting myself at risk of disciplinary proceedings or worse.

I have written to the University seeking clarification as to “how public I [may] make these documents, or their contents, if I so desire … May I publish the Dean’s report on my case in full (with all names removed), which would be my preference?  If not, may I quote it?  Paraphrase or summarize its main arguments?  Refer to it at all?”  I have always been in favor of presenting all sides of an argument.  If I am given permission to do so, I will publish the University’s last word on my appeal in due course. 

In the meantime, we seem to have reached the end of the road.  Here are some final idle musings on some of the anomalies of process I discovered along the way. 


According to HEFCE’s Assessment Framework and Guidance on Submissions for the 2014 REF, “Appropriate and timely procedures should be put in place to inform staff who are not selected of the reasons behind the decision, and for appeals.  Appeals procedures should allow members of staff to appeal after they have received this feedback, and for that appeal to be considered by the HEI before the final selection is made. The individuals that handle appeals should be independent of the decisions about selecting staff and should receive appropriate training” (para. 227, emphasis added).

At Lancaster we cannot appeal actual judgments, on the Kafkan grounds that they are “subjective.”  We may appeal only where there are “any perceived unfair discrimination, concerns about process (including if it is felt that procedure has not been followed) or circumstances where previously unavailable evidence has come to light.”   The Code of Practice then outlines a two-stage process.

In the first instance, “If a member of staff believes that they have appropriate grounds for a complaint they should initially discuss this with their Head of Department, following a request in writing laying out the nature of the concerns. A meeting should take place within 10 days of the request and the outcome followed up within 7 days of the meeting. The Head of Department may consult with the Associate Dean for Research in their faculty as part of their consideration of the appeal” (LU REF2014 Code of Practice).

First, it is by no means clear that the HoD has the power to overturn an initial decision to include/exclude an individual member of staff—a decision made, if we are to believe the Code of Practice, by “the Vice-Chancellor on the advice of the REF Steering Group,” that is to say, by people far superior to her/him in the university hierarchy.  If there has been a single instance of an HoD overriding the VC’s inclusion/exclusion decisions, I would be delighted to hear of it.  Absent such evidence, it is very difficult not to conclude that this is no proper appeal at all, if an appeal is understood as something that can actually change an outcome.

Second, neither the HoD nor, in particular, the Associate Dean for Research can reasonably be said to have been “independent of the original decisions about selecting staff” for the REF.  Both were involved in the regular “phase meetings” that plotted individuals’ outputs and their evaluations from 2011 onward.  Moreover, according to my HoD, “the basis on which external assessors [for individuals’ research outputs] were chosen was … in consultation between the HoD and Research Director, and the Dean and Associate Dean for Research.”

The Associate Dean for Research is in addition ex officio a member of the REF Steering Group itself, whose Terms of Reference include: “To recommend to the Vice-Chancellor for final confirmation to which units of assessment Lancaster should submit, and the content of each unit’s submission, including the staff selected” (LU Code of Practice, Appendix 1).  If HEFCE’s principles of independence of appeal panels are to be adhered to, he is the last person with whom an HoD should be consulting in this context.

The second, and final, stage of appeal is this: “Any staff member remaining dissatisfied [with the HoD’s response], should submit formal written notification to the Director of Human Resources within 5 working days of receiving the decision of the original panel, requesting their case be reviewed by a Dean of another faculty” (LU Code of Practice).

Faculty Deans are not members of the REF Steering Group, hence they may be regarded as “independent” by those of a similar disposition to the courtiers in Hans Christian Andersen’s fable of the Emperor’s new clothes.  In practice, however—as I have repeatedly argued in earlier posts—Deans have been at the very center of preparation of REF submissions, including recommending “to which units of assessment Lancaster should submit, and the content of each unit’s submission, including the staff selected,” within their own faculties.  This is surely what common sense would expect of a Dean.  However, it is not consistent with a claim of “independence,” as HEFCE explicitly requires of those hearing appeals.

This becomes particularly important when, as in my own case, what is being appealed against are the procedures used by the University as a whole to select staff for inclusion in its 2014 REF submission.  Insofar as the University has a consistent policy (which HEFCE requires it to have), these are the same procedures Deans will be applying within their own faculties and to their own staff, albeit with some variation across disciplines.  Faculty Deans are therefore not in any sense disinterested parties.  It is difficult to conceive of any group within the university that has a greater collective stake in seeing such an appeal fail.

This is a fair appeals process in the same sense as what preceded it was a fair process for judging the academic quality of research outputs.  For me, this reductio ad absurdum is a fitting epitaph for the whole sorry Lancaster REF selection process.  Happy Halloween!

For update see here.

On October 15 I received an email from my Head of Department informing me that “Your appeal against REF inclusion has been discussed, and I have been mandated as HoD to send a response, which is attached.  The next stage, if you are not satisfied with the response, is to contact the Head of HR to ask for a review of the case to be heard by the Dean of another Faculty.”

The HoD did not say by whom my appeal had been discussed, but his use of the bureaucratic passive suggests to me that the responsibility for the contents of the response is not his alone, even though it is delivered under his signature.  I was not surprised to learn that in the University’s view “There are … no grounds for upholding the appeal of Professor Sayer against REF inclusion.”

The document was not marked confidential.  In the interests of transparency and fair play to all concerned, I am making both the HoD’s response and my reply (which will form part of the evidence submitted in the review of the case by the Dean of another Faculty) public.

These texts may not make the lightest of reading, but there is much in them to reward aficionados of Kafkan humour noir.   They might also entertain lovers of the old BBC TV series “Yes, Minister.”

I have already posted the text of my original appeal here.  The new documents may be found here:

Head of Department’s response to my appeal against inclusion in the REF

My reply to HoD’s response


The only alteration I have made to either document is to remove the names of individuals.


I have only one general observation to make at this stage.  In the course of rebutting my charge of inconsistent treatment of colleagues within History, my HoD describes the process eventually used to select individuals for Lancaster’s REF submission as follows:

“all outputs would be read and evaluated by the critical friend, and … in given sets of circumstances further readings and evaluations by subject specialists may be commissioned, for example, where the critical friend had specifically recommended this on the grounds of his own uncertainty about an evaluation, or where an overall profile fell on a borderline. Since not all initial evaluations by the critical friend necessitated, in his view, further subject specialist evaluation, this option was not pursued in all cases.”

In other words, in all cases except those where the Critical Friend recommended it or aggregate scores fell on a borderline, there was no specialist quality appraisal for any outputs except those that happened to fall within that Critical Friend’s own field of academic expertise.  The latter would be a small minority, given the chronological, geographic, and thematic range of research published by members of the Department.

Does Lancaster really think such processes of “evaluating” academic research are consistent with a claim to be “a truly world leading university in which we perform at the leading edge of academic endeavour” (Lancaster University website)?

Update, October 16.  I have censored this post at the insistence of Professor Trevor McMillan, Pro Vice-Chancellor (research) at Lancaster University.  I indicate passages that have been altered or removed by angle brackets <>.

Since my last post on Lancaster University’s selection of History staff for inclusion in the 2014 REF, another case has come to light that is no less absurd–and troubling–than Professor Palladino’s.  After reading and evaluation of his four research outputs by the Department’s external assessor, another of my colleagues had an aggregate score of 2.75 (3 x 3 and 1 x 2).  He was nonetheless excluded from the REF as a result of one of his articles being re-read by a second external reader and given a lower grade of 2*.  The Dean of the Faculty of Arts and Social Sciences at that point refused to commission a further specialist review.

What is particularly troubling is that the article in question was published in Past and Present, which is regarded by most UK historians as one of the top historical journals, if not the leading historical journal in the English language.  My colleague had also been urged by the editor of a major specialist journal in his field to withdraw the article from Past and Present (which had not at that point definitively accepted it), with the promise that it would be published quickly as a lead article in this second journal, where it would form the centerpiece of a themed issue.  Once again the subjective judgments of University-appointed assessors have trumped the peer evaluations (in this case three) commissioned by distinguished professional journals.

My colleague is now appealing the decision, which is why I have withheld his name (though I am posting this information with his permission).  But appeals are permitted only on procedural grounds—one cannot appeal the judgment of quality as such.  As I pointed out in an earlier post, the truly Kafkan reasoning behind this is that (to quote Lancaster University’s Code of Practice for the 2014 REF): “The judgements are subjective, based on factual information. Hence, disagreement with the decision alone would not be appropriate grounds for an appeal.”

In the surreal spirit of the enterprise, but with the intent of questioning Lancaster University’s procedures for selecting History staff for the 2014 REF, I decided to submit a formal appeal of my own.  The appeal is against my being selected as part of the History Unit of Assessment (UoA) in the 2014 REF.

Here is the full text of my appeal.  The only change from the version sent to the Director of Human Resources at Lancaster is that I have removed names of individuals, except where I am quoting from already published material.




1.  I wish to appeal against Lancaster University’s decision to include me as part of the History UoA in its 2014 REF submission.

2.  This is not an appeal against the judgments of quality of my four outputs, on whose basis this decision was taken.  It therefore does not fall foul of the requirement that “disagreement with the decision alone would not be appropriate grounds for an appeal” (LANCASTER UNIVERSITY REF 2014: Code of Practice V5 27 September 2013, p. 5).[1]

3.  This appeal is based exclusively upon “concerns about process,” one of the two grounds permitted by LU Code of Practice (p. 5).   These concerns are:

(i) that the procedures used to select staff for inclusion in the Lancaster University History UoA for the 2014 REF do not satisfy the criteria set out by HEFCE in its document Assessment Framework and Guidance on Submissions[2] in respect of either transparency or accountability;

(ii) that the procedures used for assessing my own work for inclusion in the 2014 REF were incompatible with the LU Code of Practice’s objective of ensuring that “The primary factor [in selection] will be the quality of the research outputs as defined by the published REF criteria contained in the Guidance on Submission and Panel Criteria documents” (p. 2); and

(iii) that the procedures used in assessing the quality of my own work for inclusion in the 2014 REF were de facto discriminatory toward several of my colleagues in History, who are not being returned in the REF, breaching HEFCE requirements of equality, fairness, and consistency.

I outline these concerns more fully under (4), (5) and (6) below, respectively.

4.  HEFCE’s Assessment Framework and Guidance makes clear that while “It is a requirement of the REF that each submitting institution establishes a code of practice on the selection of staff for REF submissions … It is the responsibility of HEIs to ensure that their codes of practice, and the manner in which they participate in the REF, are lawful” (39).

I am aware that LU Code of Practice was submitted to and approved by HEFCE.  I believe, however, that the actual implementation of this code, at least as regards the History UoA, has not conformed to HEFCE’s requirements.


204.  a. Transparency: All processes for the selection of staff for inclusion in REF submissions should be transparent. Codes of practice should be drawn up and made available in an easily accessible format and publicised to all academic staff across the institution, including on the staff intranet, and drawn to the attention of those absent from work. We would expect there to be a programme of communication activity to disseminate the code of practice and explain the processes related to selection of staff for submission. This should be documented in the code (Assessment Framework and Guidance, p. 39).

While the LU Code of Practice was made available on the University intranet, that document contained only the most general account of the “processes for the selection of staff for inclusion in the REF.”  It is my contention that the processes actually employed in the case of the History UoA were anything but transparent.

Specifically, History staff knew that their four outputs would be read by the Department’s external assessor Professor ___________ , and that his evaluation would in some (unspecified) way feed into the University’s final decisions on inclusion and exclusion.  Individuals were also informed that their work might be sent out for further, specialist readings.

But History staff were not told, at least until the final decision on inclusion was communicated to them by the HoD in September 2013:

(i) the scores Professor _______ had given their individual outputs;[3]

(ii) the circumstances that would trigger a second reading of their work, or a “re-review” by an independent specialist of an item already read and scored by Professor _______ ;

(iii) the basis on which external assessors (other than Professor _______ , <removed>) were chosen, or who was responsible for selecting them;

(iv) the overall aggregate score needed to qualify for inclusion in the University’s submission to the 2014 REF.

When I asked my HoD to tell me where information on the specific evaluative procedures used with regard to selection of staff for the History UoA had been published on the University website, I was told: “As far as I’m aware, the evaluative procedures for arriving at decisions have not been published on the website, and in any case they will differ from Faculty to Faculty and department to department” (email from [HoD], 3 October 2013).

In short, several key elements of the evaluative procedures that determined whether or not individuals were included in the 2014 REF submission were not transparent, and were never clearly communicated to the staff concerned.

This in turn makes it difficult to appeal the University’s decisions: Lancaster has constructed a Kafkan scenario in which grounds for appeal include “concerns about process (including if it is felt that procedure has not been followed),” but one cannot know whether or not procedures have been followed if the procedures concerned have not been clearly communicated in advance.

204. c. Accountability: Responsibilities should be clearly defined, and individuals and bodies that are involved in selecting staff for REF submissions should be identified by name or role … Operating criteria and terms of reference for individuals, committees, advisory groups and any other bodies concerned with staff selection should be made readily available to all individuals and groups concerned (Assessment Framework and Guidance, p. 39, emphasis added).

The LU Code of Practice says that “selection decisions regarding the University submission to the REF will lie with the Vice-Chancellor on the advice of the REFSG,” whose membership the document details.   While I accept that this may be the formal legal position, it is not a complete or accurate description of what has actually happened.  I dispute that these are the only individuals or bodies at Lancaster University “involved in selecting staff for REF submissions.”  Others who have been involved, at various stages of the process, include Heads of Department, Departmental Research Directors, Associate Deans for Research, Faculty Deans, and external assessors.  Though none of these may be formally responsible for the final decisions on inclusion or exclusion, their inputs have contributed to those decisions.  In the case of external assessors who read and scored individual outputs, that contribution may often have been decisive.

When I was HoD for History (2009-2012) I sat on what were in effect ad hoc committees involved in the early stages of making recommendations—though not final decisions—for selection of staff for the 2014 REF for both History and the Department of European Languages and Cultures.  In the case of History, the relevant meetings involved, at various points, the Dean of FASS _______ , the Associate Dean Professor _______ , the History Department Research Director _______ , and the external assessor Professor _______ .  In the case of DELC, I was personally asked by the Dean, Professor _______ , to assess DELC staff members’ outputs as an “internal/external” reader.  The context was the need for FASS to take a strategic decision on whether or not DELC should be entered in the 2014 REF as an independent UoA.  So far as I recall, also present at that meeting were Dean _______ , [the FASS AD for Research], the DELC HoD _______ , the DELC Research Director _______ , and DELC’s external assessor, whose name I no longer recall.

I respectfully submit that (i) these were “bodies involved in selecting staff for REF submissions” in the sense intended by HEFCE paragraph 204c above, and (ii) their operating criteria and terms of reference were not “made readily available to all individuals and groups concerned.”

Specifically, these advisory groupings, whose recommendations will have formed the initial Department- and Faculty-level bases for the decisions on individual inclusion or exclusion by the central REF Steering Group and V-C, did not satisfy HEFCE’s requirements that:

209. Where a committee or committees have designated REF responsibilities – whether it is at departmental, faculty, UOA or central level – these should be detailed in the code of practice, including, for each committee:

• how the committee has been formed

• its membership

• the definition of its position within the advisory or decision-making process  …

210. The following details should be provided about its mode of operation:

• the criteria that it will use in carrying out its functions …

(Assessment Framework and Guidance, p. 40, emphasis added).

To my knowledge such bodies have no formal standing in the process at all, though their recommendations may well turn out to be decisive for individual members of staff.  They are likely to have been the most important decision-making bodies in all cases where department-level initial readings of outputs yielded a score sufficient for inclusion, a category into which my own case falls.  These bodies are not detailed anywhere in LU Code of Practice (as the Assessment Framework and Guidance para 209 requires), nor are their existence, composition, terms of reference, or operating criteria publicized on the intranet.

In short, the LU Code of Practice that was endorsed by HEFCE does not sufficiently describe the processes of staff selection for REF 2014 that have actually been used—at least with regard to the History UoA—at Lancaster University, while the processes that have actually been used to select individuals do not satisfy HEFCE’s stated criteria for either transparency or accountability.

5.  HEFCE’s Assessment Framework and Guidance is clear that: “The purpose of the guidance in Part 4 is to support institutions in promoting equality and diversity when preparing submissions to the REF, through drawing up and implementing a code of practice on the fair and transparent selection of staff. This will aid institutions in including all their eligible staff in submissions who are conducting excellent research, as well as promoting equality, complying with legislation and avoiding discrimination” (para. 187, p. 34, emphasis added).

I draw two inferences from this:

(i) HEFCE very clearly does not intend institutions to exclude any “eligible staff … who are conducting excellent research” from their submissions;

(ii) institutions’ procedures for deciding which staff are included in or excluded from submission in REF 2014 must be capable of judging whether outputs do or do not in fact constitute “excellent research,” as well as of satisfying HEFCE’s concerns regarding the separate issue of “promoting equality, complying with legislation and avoiding discrimination.”

Lancaster University likewise claims that: “The primary factor [in selection of staff for the 2014 REF] will be the quality of the research outputs as defined by the published REF criteria contained in the Guidance on Submission and Panel Criteria documents” (LU Code of Practice, p. 2).  Again, the logical expectation would be that its procedures for judging quality of outputs are fit for purpose.

I believe that the procedures applied within History at Lancaster University are manifestly not capable of judging the excellence or otherwise of my four outputs, and that my inclusion in the REF, without further specialist review of my work, is therefore contrary both to HEFCE Assessment Framework and Guidance and to the LU Code of Practice as quoted above.

“Excellent research” has a very precise meaning within the discourse of REF 2014.  The word “excellent” is used only in connection with items ranked as 4* (“Quality that is world-leading in terms of originality, significance and rigour”) and 3* (“Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence”) (Assessment Framework and Guidance, Annex A, p. 43).  The specific criteria used to judge a work’s degree of excellence are originality, rigor, and significance.

On inquiring of my Head of Department whether my four outputs were read and evaluated by anybody other than the History UoA’s external assessor Professor _______ , I was informed that: “my understanding is that in your case all four outputs were evaluated only by [Professor _______] – I’m not aware of any being sent out for additional review” (email from [HoD], 3 October 2013).

I have the utmost respect for Professor _______ , who in my experience as HoD did his job as a Critical Friend with the utmost integrity and conscientiousness.  He is, however, a historian whose interests and expertise are very far from my own.  His website defines his research interests as <removed in order to preserve the reviewer’s anonymity>.[4]  I do not believe that he is remotely qualified to judge the excellence or otherwise of my research.  I write about none of the things in which he claims a research interest and has a record of publication.  I research twentieth-century history, not <removed>; Czech history, not <removed>; modernism and surrealism, with a particular focus on architecture and the visual arts, not <removed>.  My primary sources are all in Czech and French, languages Professor _______ does not read—as is much of the secondary discussion in the field.  I do not see how Professor _______ can be expected to evaluate the originality, rigor, and significance of my research if he has little or no knowledge of the fields to which it contributes.  Frankly, this is absurd!

Professor _______ is not qualified to evaluate the quality of my contributions to a field of historical inquiry so distant from his own, let alone make the necessarily fine distinctions that separate a 3* from a 2* output (the latter being defined as “Quality that is recognised internationally in terms of originality, significance and rigour,” Assessment Framework and Guidance, Annex A, p. 43).  He would not be asked to evaluate these items as an expert reviewer for a journal or publishing house, or in connection with tenure or promotion proceedings, not because he is not an eminent historian, but because his expertise lies in a very different area of the discipline.  His appointment as the single external assessor for the entire History UoA at Lancaster University contravenes all normal professional norms of peer assessment and review.  Since Professor _______ is the only one to have read my work for REF purposes, I therefore submit that the procedures used by Lancaster University to evaluate that work, on the basis of which I am being submitted in REF 2014, are not consistent with HEFCE’s stated objective of ensuring that universities include “all their eligible staff in submissions who are conducting excellent research.”

After I learned I was to be included in the REF submission, I wrote my HoD with the following request: “In the interests of upholding the integrity of this process of quality evaluation, as well as of ensuring equitable treatment between colleagues, I would like formally to request that my outputs be sent out for further appraisal by subject-matter experts before I am submitted in the 2014 REF” (email to [HoD], 30 September 2013).  I received the following response: “I can see why you are asking for this, but I am afraid that it is no longer in my power to comply with your request for specialist reading of your work. The REF process is now at a stage where appeals on procedural grounds can be made, but not on any other grounds, and I have therefore been advised to decline your request” (email from [HoD], 2 October 2013).  Further communication with my HoD established that “Even before I opened your email of 30th Sept I had already been received messages from the Dean, AD Research and Pro VC for Research advising me that there was no longer any option for specialist reading of any work” (email of 3 October 2013).  That is to say, Lancaster University has closed the possibility of further specialist readings of work even before the deadline for making appeals (October 8) has expired.

It is worth noting, finally, that Lancaster University senior managers have admitted the inadequacy of the procedures that were eventually used to evaluate my work for inclusion in the REF.  Writing at an earlier stage of the process, the Associate Dean (Research) for the Faculty of Science and Technology accepted that “some weaknesses in the mock REF exercise are apparent, for example in many cases there was only one external reviewer per department, no doubt with expert knowledge but not in all the relevant areas” (Michael Koch, “In preparation for REF2014 – Mock REF and Units of Assessment,” SciTech Bulletin #125).[5]  This was exactly my case, and forms part of the basis of this appeal.  HEFCE expressed the hope that: “Institutions that conduct mock REF exercises might consider using them as an opportunity to apply their draft code and refine it further” (Assessment Framework and Guidance, para 203, p. 39).  Evidently in this case Lancaster, or at least FASS, chose to ignore that very sensible advice.

6.  HEFCE’s Assessment Framework and Guidance enshrines both a repeatedly stated commitment to “equality and diversity” and “fairness” (see e.g. para 187), and a specific requirement of:

204.  b. Consistency: It is essential that policy in respect of staff selection is consistent across the institution and that the code of practice is implemented uniformly (Assessment Framework and Guidance, p. 39).

I do not know how consistent staff selection procedures within History are with those used for other UoAs, because as stated above, the procedures employed at individual UoA level have never been made public.  Within the History UoA, however, staff members have not been treated equally.  While some, like myself, have been included in the 2014 REF submission on the basis of one external reading of all outputs (i.e., Professor _______’s)—even when that reading has not been by an expert in the relevant field—others have had some or all items of their work additionally read by further external assessors.   Since these assessors are claimed to be field specialists, one might expect them to set the bar higher in terms of their ability to recognize originality, rigor, and significance.  In some cases this second reading has involved outputs that were originally given a grade by Professor _______ being downgraded, with (in at least two cases known to me) the staff member being excluded from the REF submission as a direct result.  I submit that this is inequitable insofar as some members of staff within the same UoA have been subjected to double or triple jeopardy and others not.

This might be justifiable, had clear criteria governing when outputs are sent out for further specialist evaluation been articulated and published in advance.  They were not.  From what I have been told by colleagues who have been excluded from the REF, the circumstances in which second readings have been sought include (but may not be limited to):

(i) where the external assessor, Professor _______ , himself expressed doubts as to his competence to evaluate an output in a field with which he is not familiar; and

(ii) where the aggregate score of all four outputs left the individual on a borderline in terms of the University’s (as yet unpublished) threshold for inclusion in the History UoA.

Since I fell into neither of these categories, assessment of my outputs was much less rigorous than that of several of my colleagues.  My outputs were read by only one external reviewer, as opposed to two or more, and that reviewer was not a subject matter specialist.

This is not consistent treatment of Lancaster University staff, nor is it by any reasonable standards fair to the colleagues in question.  Nor does it encourage representation of “diversity” of research within the Department, since the opinion of one individual, the external assessor, Professor _______ , plays a disproportionate role in determining who is or is not included in Lancaster University’s History UoA as submitted to the 2014 REF.

Derek Sayer

3 October 2013

[1] http://www.lancaster.ac.uk/depts/research/lancaster/REF2014.html, accessed 1 October 2013.  Subsequently cited as LU Code of Practice.

[2] Assessment framework and guidance on submissions (updated to include addendum published in January 2012), available at http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf, accessed 1 October 2013.  Subsequently cited as Assessment Framework and Guidance.

[3] As then Head of Department, responsible for communicating to staff where they stood in terms of likelihood of being submitted in the 2014 REF in January 2012, I was repeatedly told by the History Research Director (who was acting, I assume, on instructions from above) that these scores should be kept confidential and not communicated to the staff involved.



Times Higher Education recently reported that at Leicester University, “The position of all staff eligible for the [2014] REF but not submitted will be reviewed. Those who cannot demonstrate extenuating circumstances will have two options. Where a vacancy exists and they can demonstrate ‘teaching excellence’, they will be able to transfer to a teaching-only contract. Alternatively, they may continue on a teaching and research contract subject to meeting ‘realistic’ performance targets within a year.  If they fail to do so, ‘the normal consequence would be dismissal on the ground of unsatisfactory performance’” (08/08/2013, see full article here).   We live in interesting times.

1.   How it works—the context and the stakes

Having worked in North America from 1986 to 2006, when I took up a Chair in Cultural History at Lancaster University, I have spent most of my academic career in blissful ignorance of the peculiarly British institution that used to be called the RAE (Research Assessment Exercise) but has recently—and in good Orwellian fashion—been renamed the REF (Research Excellence Framework).   Many of my UK colleagues have known no academic life without such state surveillance.  But in most other countries, decisions on university funding are made without going through this time-consuming, expensive, and intellectually questionable audit of every university’s “research outputs” every five or six years.   Their research excellence has not conspicuously suffered in its absence.

The United States—whose universities currently occupy 7 of the top 10 slots in the THE World University Rankings—has no equivalent of the RAE/REF. This does not mean that research quality, whether of individuals or of schools and departments, is not evaluated.   It is continually evaluated, as anybody who has been through the tenure and promotion processes at a decent North American university, which are generally far more arduous (and rigorous) than their UK counterparts, will know.  But the relevant mechanisms of evaluation are those of the profession itself, not the state.  The most important of these are integrally bound up with peer-reviewed publication in top-drawer journals or (in the case of monographs) with leading university presses.  Venue of publication can be treated as an indicator of quality because of the rigor of the peer review processes of the top academic journals and publishers, and their correspondingly high rates of rejection.  Good journals typically use at least two reviewers (to counteract possible bias) per manuscript, who are experts in their fields, and the process of review is “double-blind”—i.e., the reviewer does not know the author’s identity and vice versa.  After an article or book has been published, citations and reviews provide further objective indicators of a work’s impact on an academic field.   The upshot is a virtuous (or, depending on your point of view, vicious) circle in which schools like Caltech, MIT, Princeton, or Harvard can attract the best researchers as evidenced mainly by their publication records, who will in turn bring further prestige and research income to those schools, maintaining their pre-eminence.

What immediately strikes anyone accustomed to the North American academy about the British RAE/REF is that at least in the humanities—which are my concern here—such quasi-objective indicators of research quality have been purposely ignored, in favor of entirely subjective evaluative procedures at every level.[1]  All those entered in the 2014 REF by their universities are required to submit four published “outputs” to a disciplinary sub-panel, whose members will then read and grade these outputs on a 4-point scale.   These scores account for 60 per cent of the overall ranking given to each “unit of assessment” (UoA), which will usually, but not always, be a university department.   The other 40 per cent comes from scores for “environment” (which includes PhD completions, external research income, conferences and symposia, marks of “esteem,” etc.) and “impact” (on the world at large, whose measurement has been the subject of considerable more or less entertaining debate), assigned by the same sub-panel.
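To make the weighting concrete, here is a minimal sketch of the arithmetic. All numbers are invented for illustration, and the even split of the 40 per cent between “environment” and “impact” is my assumption, not the official rule:

```python
# Sketch of the overall scoring split described above: outputs carry
# 60 per cent of a UoA's ranking; environment and impact together carry
# the other 40 per cent (split evenly here as an assumption).

def overall_profile(outputs_gpa, environment_gpa, impact_gpa):
    other = (environment_gpa + impact_gpa) / 2  # assumed even split
    return 0.60 * outputs_gpa + 0.40 * other

# A unit whose outputs average 3.0, with environment 3.5 and impact 2.5:
print(overall_profile(3.0, 3.5, 2.5))  # 3.0
```

The point of the sketch is simply that the outputs term dominates, which is why the subjective grading of individual outputs matters so much.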

Disciplinary sub-panels have some latitude in how they evaluate outputs, and in the natural sciences the standing of journals and numbers of citations are likely to be seen as important indicators of quality.  The History Sub-panel has made it clear that it will not take into account venue of publication, citations, or reviews—so an article published in American Historical Review will be treated exactly the same as something posted on a personal website, as long as it is a published work.  The panel has also indicated that as a rule—and unlike with a typical book or journal article in the “real-world” peer review process that the panel has chosen to ignore—only one member of the panel will read each output.  While attempts will be made to find the best fit between the submitted outputs and the panel members’ personal scholarly expertise, in view of the size of the panel and the volume of material to be read this can by no means always be guaranteed.

In sum, in History at least—and I suspect across the humanities more generally—60 per cent of every UoA’s ranking will be dependent on the subjective opinion of just one panel member, who may very well not be an expert in the relevant field.   The criteria the panel intends to use for scoring outputs’ quality are (1) originality, (2) significance, and (3) rigor.  It is beyond me to see how competent judgments on these can be made by anyone who is not an expert in a field.   It is easy, on the other hand, to see why every university in the land was so desperate to get their people on REF committees.

For the stakes are high.  The aggregate REF score for each UoA determines (1) the ranking of that UoA relative to others in the same discipline across the country, and (2) the amount of research funding the university will receive for that UoA from the Higher Education Funding Councils until the next REF.   Both are critical for any school that has aspirations to be a research university, since—as all good sociologists know—what is defined as real is real in its consequences.

2.   What’s new in REF 2014—University-level assessment of outputs

In this context, it is important to note that—unlike in earlier RAEs—outputs scored below 3* now attract no financial reward.  In previous RAEs it was in the interests of all universities to submit as high a proportion as possible of eligible staff with the intention of benefiting from a multiplier effect, and university websites boasted of the high proportions of their faculty that were “research active” as indexed by submission in successive RAEs.  In the REF, outputs scored below 3* (3* being defined as “Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence”) now dilute ranking relative to competitor institutions without any compensating gain in income.  The last thing any UoA wants is for the GPA that would otherwise be achieved by its 3* and 4* outputs to be reduced by a long “tail” of 2* and 1* valuations.
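The dilution effect is simple arithmetic. A hypothetical sketch, with scores invented purely for illustration:

```python
# Why a "tail" hurts under the REF rules described above: sub-3* outputs
# attract no funding, yet they still count toward the grade point average.
# The score lists below are invented examples.

def gpa(scores):
    return sum(scores) / len(scores)

selective = [4, 4, 3, 3]            # only the strongest outputs submitted
inclusive = selective + [2, 2, 1]   # everyone submitted, tail included

print(gpa(selective))   # 3.5
print(gpa(inclusive))   # ~2.71: lower ranking, no compensating income
```

Since funded income flows only from the 3* and 4* items in either case, the selective submission dominates the inclusive one on both ranking and money, which is the incentive to exclude staff that the post goes on to describe.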

From the point of view of maximizing financial and reputational returns from the REF, the rational strategy for every research university is to exclude outputs ranked at 2* (“Quality that is recognised internationally in terms of originality, significance and rigour”) or below from entering a UoA’s submission, even if this means that fewer faculty are entered in the REF.  Sub-panels have also urged universities to be more selective in their submissions than in earlier RAEs, to ease their own workload.  All that material to read, often out of their fields of personal interest and expertise.  Why not sub-contract out the routine stuff?

Individual universities have responded differently to these pressures (and many, including Lancaster, have been understandably cagey about their plans).  But it now seems pretty clear that most, if not all, schools with ambitions to be regarded as research universities are going to be far more selective than in previous RAEs in whom they submit.  Lancaster warns on its internal staff website:

Lancaster University is aiming to maximise its ranking in each UoA submission so final decisions [on who is submitted] will be based on the most advantageous overall profile for the University. It is anticipated that the number of staff submitted to REF2014 will be below the 92% submitted for RAE2008 and not all contractually REF-eligible staff will be submitted.

Rumor has it that the target figure for submission may in fact be as low as 65%, but that is, of course, only rumor.

Ironically, one of the reasons first advanced by the funding councils for shifting from the RAE to the REF format was “to reduce significantly the administrative burden on institutions in comparison to the RAE.”  The exact opposite has now happened.  Universities have had little alternative but to devise elaborate internal procedures for judging the quality of outputs before they are submitted for the REF, in order to screen out likely low-scoring items.  Many schools have had full-scale “mock-REF” exercises, in Lancaster’s case a full two years before the real thing, on the basis of which decisions about who will or will not be submitted in 2014 are being made.  The time and resources—which might otherwise have been spent actually doing research—that have been devoted to these perpetual internal evaluation exercises, needless to say, have been huge.

But more important, perhaps, is the fact that the whole character of the periodic UK research audit has significantly changed with the shift from the RAE to the REF, in ways that both jeopardize its (already dubious) claims to rigor and objectivity and may seriously threaten the career prospects of individuals.  For the nationally uniform (at least within disciplines) and relatively transparent processes for evaluating outputs by REF Sub-panels are now supplemented by the highly divergent, frequently ad hoc, and generally anything but transparent internal procedures for prior vetting of outputs at the level of the individual university.  A second major irony of the REF is that if people at Leicester—or elsewhere—are fired or put on teaching-only contracts because they were not entered in the university’s submission, the assessments of quality upon which that decision was taken will have been arrived at entirely outside the REF system.

In previous RAEs all decisions on the quality of outputs were taken by national panels composed of established scholars in the relevant discipline and constituted with some regard to considerations of representativeness and diversity, even if (as I have argued above) their evaluative procedures still left much to be desired.  In the REF, key decisions on quality of outputs are now taken within the individual university, with no significant external scrutiny, before these outputs can enter the national evaluative process at all.

3.  Down on the ground—enter Kafka

As I said earlier, most universities are playing their cards very close to their chests on what proportion of faculty they intend to enter into the REF and how they will be chosen.  I therefore cannot say how typical my own university’s procedures are.  I have no reason to think they are especially egregious in comparison to others’, but they still, in my view, leave considerable cause for concern.

The funding councils require every university to have an approved Code of Practice that ensures “transparency and fairness in the decision making process within the University over the selection of eligible staff for submission into the REF.”   Lancaster’s Code of Practice is published on the university website.  It is long on promises and short on detail, with all the wriggle-room one expects of such documents.  It says: “Decisions regarding the University submission to the REF will lie with the Vice-Chancellor on the advice of the REF Steering Group.  No other group will be formally involved in the selection of staff to be returned.”[2]  The Steering Group (whose composition is specified in an appendix) is tasked inter alia with adopting “open and transparent selection criteria,” ensuring “that selection for REF submissions do not discriminate on the grounds of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation,” and detailing “an appeal process that can be used by all members of eligible staff in order to seek further consideration for submission.”  All good stuff, though it is not discrimination on these grounds that most of my colleagues are worried about so much as the university’s ability to come up with procedures that will deliver informed and fair evaluations of their diverse work.  But what actually happens?

History department members were asked to identify four outputs for potential submission to the REF together with one or more “reserves.”  These items were then all read by a single external reader, who was originally hired by the Department in an advisory capacity as a “critical friend”—<removed>—but who has since been appointed by the university as an external assessor for REF submissions.   This assessor ranks every potential History UoA output on the four-point REF scale.  As far as I understand the procedure,[3] his ranking will be accepted by a Faculty-level Steering Group as definitive, unless he himself indicates that he does not feel competent to evaluate a specific output.  In that case—and in that case only—the output will be sent for a second reading by an independent specialist.   We are not told who chooses second readers or on what criteria; and colleagues have not been consulted on who might appropriately be approached for informed and objective assessments of their work.  All specialist readers remain anonymous.[4]

History is an enormously diverse discipline, in terms of chronology (at Lancaster we have historians of all periods from classical antiquity to the 20th century), geography (we have historians of Britain, several parts of Europe, the Middle East, India, South-East Asia, the Caribbean, and the United States), subject matter (economic, political, social, cultural, etc.), and methodology.  To expect any one historian, no matter how eminent, to be able to judge the quality of research across all these fields is absurd.   At least when outputs are submitted to REF Sub-panels there is a reasonable chance that the person who winds up grading them might have some expertise in or at least knowledge of the field.  The History Sub-panel has 24 members and a further 14 assessors.  Lancaster’s procedure, by contrast, guarantees that decisions as to whether individuals are submitted in the REF at all depend, in most cases, on the recommendation of the same individual, who is very likely not to be technically qualified to evaluate the quality of the work in question.  This is not a criticism of this particular assessor; no one individual could possibly be qualified to assess more than a minority of outputs, given the diversity of the historical research produced within the UoA.  Again, the relevant contrast is with peer reviewers for journals, who are chosen by the editors precisely for their specialist competence to assess a specific manuscript—not for their general eminence in the profession, or their experience of sitting on REF panels.[5]

I find it poignant that so cavalier an attitude toward evaluating the research of colleagues should be adopted in a university that requires external examiners for PhDs to be “an experienced member of another university qualified … to assess the thesis within its own field” and—unlike in North America—also requires all undergraduate work to be both second-marked internally and open to inspection by an external examiner before it can count toward a degree.  Why are those whose very livelihood depends on their research—and its reputation for quality—not given at least the same consideration as the students they teach?

A particular casualty of this approach is likely to be research that crosses disciplines—something, in other contexts, both Lancaster University and the Research Councils have been keen to pay lip-service to—or that is otherwise not mainstream (and as such may be cutting-edge).  Given the stakes in the REF, it always pays universities to be risk-averse.  Outputs may be graded below 3* not because their quality is in doubt but because they are thought to be marginal or unconventional with regard to the disciplinary norms of the REF Sub-panel, and some of the most adventurous researchers may consequently be excluded from the submission.

Though there is a right of appeal for individuals who feel they have been unfairly treated in terms of application of the procedure, Lancaster’s Code of Practice is crystal clear that: “The decision on the inclusion of staff to the REF is a strategic and qualitative process in which judgements are made about the quality of research of individual members of staff.  The judgements are subjective, based on factual information. Hence, disagreement with the decision alone would not be appropriate grounds for an appeal” (my emphasis).

This is the ultimate Kafkan twist.  The subjectivity of the process of evaluation is admitted, but only as a reason for denying any right of appeal against its decisions on substantive grounds.  These are exactly the grounds, of course, on which most people would want to appeal—not those of sexual or racial discrimination.  It might be argued, in any case, that the secrecy (aka “confidentiality”) attached to these evaluations—we are not allowed to see external assessors’ comments—would make it impossible to prove discrimination in individual cases anyway.

4.   What comes next?

For whatever reasons, the funding councils for British universities have chosen to allocate that portion of their budget earmarked for research in ways that (at least in the humanities) systematically ignore the normal and well-established international benchmarks for judging the quality of research and publications.  Instead, they have chosen to set up their own panopticon of hand-picked disciplinary experts, whose eminence is not in doubt, but whose ability to provide informed assessments of the vast range of outputs submitted to them may well be questioned.  The vagaries of this approach have been exacerbated in the 2014 REF by putting in place a financial reward regime that incentivizes universities to exclude potential 2* and 1* outputs from submission altogether.  The resulting internal university REF procedures are not only enormously wasteful of time and money that could otherwise be spent on research.  More importantly, they compound the elements of subjectivity and arbitrariness already inherent in the RAE/REF system, ensuring that evaluations of quality on whose basis individuals are excluded from the REF are often not made by subject-matter experts.   Research that crosses disciplinary boundaries or challenges norms may be especially vulnerable in this context, because it is seen as especially “risky.”

Whatever this charade is, it is not a framework for research excellence.  If anything, it is likely to encourage conventional, “safe” research, while actively penalizing risk-taking innovation—above all, where that innovation crosses the disciplinary boundaries entrenched in the REF Sub-panels.  The REF is no longer even a research assessment exercise in any meaningful sense of the term, because so much of the assessment is now done within individual universities, in anything but rigorous ways, before it enters the REF proper at all.

Having made clear its intention to exclude from submission in the 2014 REF some of those faculty who would uncontentiously have qualified as “research-active” in previous RAEs, Lancaster University informs us that: “Career progression of staff will not be affected and there will not be any contractual changes or instigation of formal performance management procedures solely on the basis of not being submitted for REF2014.”  I hope the university means what it says.  But it is difficult to ignore the fact that Leicester University’s Pro-VC (as reported in THE) also reiterated that: “the university stands by its previously agreed ‘general principle’ that non-submission to the REF ‘will not, of itself, mean that there will be negative career repercussions for that person,’” even while spelling out his university’s intention to review the contractual positions of all non-submitted staff with a view to putting some on teaching-only contracts and firing others.

The emphasis on the weasel words (“solely” and “of itself”) is mine in both cases.  If universities truly intend that non-submission in the 2014 REF should not in any way negatively affect individuals’ career prospects, then what is to stop them saying so categorically and unambiguously?

[1] This is in part the result of a successful campaign by leading figures and organizations in humanities in the UK against “bibliometrics.”  While I accept that the expectations of journal ranking and citation patterns applicable to the natural sciences cannot simply be transferred wholesale to the humanities, I would also argue that an evaluative procedure that ignores all consideration of whether an output has gone through a prior process of peer review (and if so how rigorous), where it has been published, how it has been received, and how often and by whom it has been cited, throws the baby out with the bathwater.  It also gives an extraordinary intellectual gatekeeping power to those who constitute the REF disciplinary—in all senses—sub-panels, but that is an issue beyond the scope of this post.

[2] REF 2014 Code of Practice Lancaster V4 24 July 2013.  Quoted from LU website, accessed 9 August 2013.  My emphasis on formally.  So far as I can see, what this does is limit legal liability to the VC and a senior advisory committee, while immunizing those who are heavily involved in making the actual assessments of outputs, including, notably, departmental research directors and paid external assessors, against any potential litigation.

[3] I may be wrong on this point, but Faculty-level procedures have never been published—presumably because only the VC and university REF Steering Committee are formally involved in making decisions on who is submitted.

[4] This is the procedure for the History UoA.  Other UoAs at Lancaster may vary in points of detail, for instance in using internal as well as external readers.  I cannot discuss these differences, because these procedures have not been published either.  I doubt the variance would be such as to escape the general criticisms I am advancing here.

[5] Indeed, Lancaster’s Associate Dean (Research) for the Faculty of Science and Technology—where evaluating outputs is arguably easier than in the humanities anyway—has admitted that “some weaknesses in the mock REF exercise are apparent, for example in many cases there was only one external reviewer per department, no doubt with expert knowledge but not in all the relevant areas, who was engaged for a limited time only.”  Michael Koch, SciTech Bulletin #125.