Unless something really dramatic happens this will be my last post for a while on this topic. I wanted, however, to write one last piece that pulled together the key arguments I have been making over the last few weeks with regard to Lancaster University’s procedures for selecting staff for its 2014 REF submission, and incorporated details that were not known to me at the time of my first long post on the issue, “The Kafkan World of the British REF.” An abundance of evidence is now surfacing in the press, UCU surveys, and elsewhere, to suggest that Lancaster has by no means been alone in playing fast and loose with HEFCE’s guidelines on transparency, accountability, and inclusiveness. I hope the detailed critique that I have developed here will now contribute to a wider public debate about the legacy and lessons of what may prove to be the most contentious and divisive UK national research assessment exercise to date, and be of help to colleagues who are challenging similar injustices elsewhere.
*
1 Talking the talk
On October 16, 2013—the same day, by coincidence, as I posted “Getting nasty: Lancaster demands I censor my REF posts”—Nature ran a piece on the REF and other national research audits prominently featuring Lancaster University.
“Two years ago,” the article began, “academics at Lancaster University … found themselves in the uncomfortable position of being graded. They each had to submit the four best pieces of research that they had published in the previous few years, and then wait for months as small panels of colleagues—each containing at least one person from outside the university—judged the quality of the work.”
The idea of the drill, according to Professor Trevor McMillan, Lancaster’s Pro Vice-Chancellor (Research), was “to identify areas where we could help people develop their profiles.” Help for those who “failed their evaluations” included “mentoring from a more experienced colleague, an early start on an upcoming sabbatical or a temporary break from teaching duties.”
It would be churlish to deny that some of these things, including, in the case of History, the provision of teaching relief, did occur, benefiting both individuals and the institution—though in two cases known to me colleagues were given support to help finish work only to be excluded from the REF in the end on the basis of internal judgments of “quality.” But whatever its initial purpose, the main outcome of the Mock REF (or Internal REF, as it later officially became known) was quite different from identifying needs and providing support.
Far from being merely a preparation for “the real thing,” as the Nature article implies, Lancaster’s Mock REF became the primary mechanism through which many individuals were excluded from it. People involved—including myself, at that time as Head of the History Department—were not always fully aware at the outset of the extent to which the Mock REF would become, for those colleagues, the real REF. We thought we were soliciting evaluations of outputs, many of which were still in draft form, in order to help them “develop their profiles,” not that those evaluations would become the starting point for a wholesale cull.
In my appeal against inclusion in the REF, I argued that because Professor McMillan’s “small panels of colleagues” de facto played an important part in staff selection for the REF, their membership and modus operandi should have been made public as required by HEFCE’s REF guidelines.
Responding to that appeal, my HoD, having consulted with the Faculty Associate Dean for Research, assured me that the role of these panels was “informational.”
Was Professor McMillan then perhaps misreported?
2 Walking the walk
Assuming he was not, just which colleagues sat on these “small panels,” and how did the panels work?
It is difficult to be certain of anything in the murk of Lancaster’s Mock REF: the details that lie behind the bland generalities of the university’s HEFCE-approved Code of Practice have been walled within a fortress of “confidentiality,” and the devil, as ever, lies in the details. This means that—through no fault of mine—there may be occasional inaccuracies in what follows. I welcome, and will happily publish, any corrections of fact the university makes.
I suspect that the composition and modus operandi of these panels differed considerably across departments and faculties. Even within the Faculty of Arts and Social Sciences (FASS), rumor has it that some departments used procedures that resulted in more inclusive outcomes than History—consulting individuals over suitable external reviewers for their work, for example, or discarding obviously aberrant scores. I have no difficulty with such differences per se, at least insofar as they genuinely reflect varying disciplinary expectations. But Lancaster’s general lack of transparency over the membership, terms of reference, and operating criteria of these panels makes it impossible to ascertain whether colleagues were in fact treated equitably across the university or not.
As regards the History submission, from the university’s responses to my and others’ appeals—that is, those I have not been explicitly forbidden from quoting, summarizing, or paraphrasing—we can reasonably infer that the staff selection process in the “small panel” overseeing History’s REF submission looked something like this.[1]
Stage 1 Everybody’s outputs were read by one and the same “critical friend,” who was not a specialist in most of their fields of research. If a person’s outputs passed that test with a grade point average (GPA) of 2.75 or above,[2] he or she was included in the REF submission without further review. In other words: much of what the university regards as uncontentiously 3* or 4* in the History submission has been certified as such by a single reviewer, who is a non-specialist in many of the areas concerned.
Stage 2 Where the critical friend was unsure of how to grade an output, it was sent to a specialist reviewer external to the department, though not always to the university. In many—if not in most or all—instances, it was deemed sufficient to approach only one specialist reviewer. Outputs were also re-reviewed where the net result of the critical friend’s reading was to leave somebody with an aggregate score on or around the 2.75 borderline, even if the critical friend was confident of his own evaluations of the outputs concerned.
Whatever the circumstances that led to outputs being sent out for external review, the external reviewer’s score seems always to have replaced that of the critical friend—no matter how wide the discrepancy. Options of averaging the scores or going to a third reviewer seem not to have been considered.
This raises an interesting conundrum. If in every case where an output went out for specialist review the specialist’s score was preferred to that of the critical friend, should this not shake the university’s confidence in its reliance on the critical friend’s judgment in all other cases, irrespective of area? Or to put it another way, if specialist review was considered to deliver more reliable judgments in these cases, why was it not routinely used for all outputs?
There were some further aspects of History’s use of specialist reviewers that deserve to be highlighted. (1) The group that chose these reviewers was unrepresentative of the Department’s research, because it did not contain a single modern historian.[3] (2) Nor did it include any historian whose work is interdisciplinary—an important consideration, given the cross-disciplinary background of some of the department’s members. (3) It did not consult with those whose work was being assessed over appropriate reviewers, even when that work lay far outside its collective expertise. (4) Finally, reviewers—even when drawn from other departments in the university—seem at times to have known the identity of authors. The reviewers themselves, by contrast, not only remained anonymous; their comments on individual outputs were also kept confidential.
Stage 3 If the result of Stage 2 was that your aggregate score dropped below 2.75 the University informed you that your research was not of sufficient quality to merit inclusion in the REF. There is a statement on the university website that “Career progression of staff will not be affected and there will not be any contractual changes or instigation of formal performance management procedures solely on the basis of not being submitted for REF2014” (my emphasis), but this falls a long way short of an absolute guarantee that the evaluations of individuals’ research outputs arrived at through this Mock REF process could not be used to their detriment by the university in other contexts in the future.
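To make the arithmetic of this three-stage filter concrete, here is a minimal sketch, in Python, of the decision rule as I have reconstructed it from the appeal responses. It is an illustration, not a specification: the handling of the 2.75 threshold, the reading of “on or around the borderline,” and all the names and scores in it are my own assumptions.

# A sketch of the inferred Stage 1-3 selection rule. Everything here is my own
# reconstruction, not an official Lancaster specification.
from dataclasses import dataclass
from statistics import mean
from typing import List, Optional

THRESHOLD = 2.75          # as far as I know, never officially published (see note 2)
BORDERLINE_MARGIN = 0.25  # hypothetical reading of "on or around the 2.75 borderline"

@dataclass
class Output:
    title: str
    friend_score: float                       # grade from the single "critical friend"
    friend_unsure: bool = False               # the critical friend could not settle on a grade
    specialist_score: Optional[float] = None  # set only if sent out for specialist review

    def effective_score(self) -> float:
        # Stage 2: wherever a specialist reviewed an output, their score simply
        # replaced the critical friend's, however wide the discrepancy.
        return self.specialist_score if self.specialist_score is not None else self.friend_score

def gpa(outputs: List[Output]) -> float:
    return mean(o.effective_score() for o in outputs)

def needs_specialist_review(outputs: List[Output]) -> List[Output]:
    # Stage 2 triggers: the critical friend was unsure of an output, or the person's
    # aggregate sat on or around the borderline (which outputs were then re-read
    # is not clear to me; here, all of them).
    borderline = abs(mean(o.friend_score for o in outputs) - THRESHOLD) <= BORDERLINE_MARGIN
    return [o for o in outputs if o.friend_unsure or borderline]

def selected_for_ref(outputs: List[Output]) -> bool:
    # Stage 3: an aggregate below 2.75 after any replacements meant exclusion.
    return gpa(outputs) >= THRESHOLD

# Invented figures: a portfolio sitting exactly on the borderline goes out for
# review, and a single replaced grade (3 -> 2) is enough to tip it into exclusion.
before = [Output("A", 3), Output("B", 3), Output("C", 3), Output("D", 2)]
after = [Output("A", 3, specialist_score=2), Output("B", 3), Output("C", 3), Output("D", 2)]
print(f"before external review: GPA {gpa(before):.2f}, selected {selected_for_ref(before)}")
print(f"after one replacement:  GPA {gpa(after):.2f}, selected {selected_for_ref(after)}")

The point of the invented numbers is simply that, with grades averaged over only four outputs, a single replaced score is enough to move someone from “included” to “excluded.”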
These procedures are a caricature of peer review as understood internationally and throughout the profession, and as practiced, inter alia, by academic journals and presses, in competitions for research funding, and in tenure and promotion appraisals. Peer review normally involves multiple assessments by specialist reviewers, and wherever possible employs double-blind review in order to preserve authors’ as well as reviewers’ anonymity. In research funding competitions the identity of external reviewers is kept anonymous, but their comments are usually made available to applicants (allowing challenge, for example, on grounds of bias or factual error).[4] Likewise, the composition of assessment panels is generally made public—as it is in the case of the REF panels. In tenure and promotion cases promises are usually made to reviewers “to keep your response confidential to the extent permitted by law,”[5] but candidates are protected against bias—or anomalous judgments—by the use of several reviewers, and a demand, in most cases, for highly detailed reports.
From the point of view of ensuring a multiplicity of evaluations from qualified assessors, Lancaster’s Internal REF procedures also compare unfavorably with its own protocols for promotions (multiple assessors, some of whom are nominated by the candidate) and the award of undergraduate and higher degrees (anonymity, internal moderation of scripts, use of external examiners as “referees” for Bachelor degrees; specialized external examiner for PhD). As I observed before, it is scandalous that the university provides fewer safeguards for the academic reputations of its staff—for whom it has a statutory duty of care—than it does for the essays and examination scripts of those they teach.
3 The casualties
There was a right of appeal in Lancaster against exclusion from the REF, but only on grounds of procedure, discrimination, or new information. Neither judgments nor scores could be challenged, on the entertaining ground that they were “subjective” (Code of Practice). I have argued elsewhere that this appeal process failed to meet HEFCE’s requirement that “The individuals that handle appeals should be independent of the decisions about selecting staff” (HEFCE Assessment Framework, para 227).
It was also difficult to reconcile with HEFCE’s stipulation that “Appropriate and timely procedures should be put in place to inform staff who are not selected of the reasons behind the decision, and for appeals. Appeals procedures should allow members of staff to appeal after they have received this feedback, and for that appeal to be considered by the HEI before the final selection is made” (Assessment Framework, para 227). Staff members were given just five days from notification of exclusion to lodge an appeal, and faculty Deans were still adjudicating appeals right up to the REF census date for “staff eligible for selection,” October 31, 2013. It is very hard to see how this timetable could realistically have allowed for reinstatement of excluded individuals in submissions, especially if their reinstatement would have meant a last-minute increase in the number of impact case studies required within a UoA.[6]
All three appeals known to me in History (other colleagues were certainly also excluded from the REF, but I don’t yet know how many, or what proportion appealed) were turned down. One of these appeals was that of the scholar who was excluded when his Past and Present paper was re-graded, whose case I discussed on October 5 under the title “Update from Wonderland.” The conflict between the international professional expectations of peer review and the “right to manage” as management see fit was especially stark in this instance, given the high regard in which P & P is held within the profession.
Commenting on the case, another blogger—a well-published and widely respected Professor of Medieval History—went to the core of what is at issue here:
Past and Present has one of the most rigorous refereeing systems in existence. An article has to go through the hands of a lot of smart people before it is accepted and appears … A volume accepted for a prestigious series like, say, CUP’s Studies in Medieval Life and Thought similarly has to get past a number of (more or less) critical readers. There are obvious reasons why prestigious journals and series don’t publish rubbish, for their own sakes. They don’t always get it right, for sure, but whatever comes out at least has the merit of scholarly acceptance at a high level.
As a result of that I always assumed that the REF would temper its own views according to this. Thus, whatever one might think of a piece, the very fact of its appearance in, say, P&P would itself argue for a de facto level 3 at least of significance. I still like to hope that this will be the case with the actual panels. The grading of such an article at 2 strikes me as an example of monumental egotism—that this person’s view is so much better than that of the six leading scholars who read it and accepted it for publication in the country’s leading academic historical journal. That level of egotism and arrogance disgusts me.
Professor Paolo Palladino, whose Open Letter on his exclusion from the REF I also reported earlier, declined to appeal on the grounds that to do so would only help legitimate a process that bore more resemblance to a Stalinist show trial—an exercise in humiliation—than to a bona fide research evaluation. What makes his case particularly piquant, however, is that in this instance the principles of peer review over which the university is riding roughshod are its very own.
Professor Palladino was promoted to his present rank in 2010—that is to say, within the 2014 REF census period. Some of the papers he submitted in his REF portfolio indeed formed part of his CV at the time of his promotion. As with all promotions to Professor at Lancaster, his file was sent out to a minimum of six referees (chosen from eight, of whom he could nominate two).
The University requires that in all such cases:
Referees should be eminent and independent and chosen with care: their departments should be of at least comparable standing with that of the candidate; they must be able to comment on the national or international reputation of the candidate; they will normally be of Professorial (or equivalent) level and enjoy national or international standing within the candidate’s subject area.
The expectation is that normally all referees will be of international standing and able to comment on the international reputation of the candidate and that normally two will work outside the UK.
I suggest that this process is likely to have delivered a far more informed, as well as a much more reliable, verdict on the quality of Professor Palladino’s research outputs than the “Internal REF” procedures described above. Certainly it satisfied the university’s Chairs and Readerships Committee, which is not known for its laxity, back in 2010. Yet for “strategic” reasons, three years later, the university chose to ignore what it had on file, preferring instead to malign Professor Palladino’s intellectual reputation and possibly jeopardize his career.
Each of these cases is individual. It is difficult to ignore the fact, however, that what these scholars have in common with each other (and a third colleague in History, whose exclusion from the REF I cannot discuss here because it is still sub judice) is a commitment to interdisciplinary research and a sustained engagement with theory that is, shall we say, unusual in more traditional areas of the discipline of history. Professor Palladino made the case for the inhospitality of the REF to interdisciplinary scholarship in his Open Letter, and I need not repeat his arguments here.
I do not believe that it is simple coincidence that interdisciplinary scholarship has been so disproportionately penalized by History’s evaluative procedures at Lancaster. It is exactly the outcome I would expect such a process to deliver.
I would add, however, that in my view the threat to interdisciplinarity is but the most obvious symptom of what may turn out to be the supreme irony of the REF, which is its threat to any cutting-edge, blue skies, or innovative research because of the “risks” that will always be entailed in being “unconventional.”
Peter Scott put it very well this week in the Guardian:
“These days, universities’ main objective is to achieve better REF grades, not to produce excellent science and scholarship. This has become a subsidiary goal that only matters to the extent that it delivers top grades. Research is reduced to what counts for the REF. Four “outputs” over five years need to be submitted, so the temptation to recycle rather than create is very strong. Big ideas don’t come to annual order. Only the genius or the fool would dare to submit many fewer, even if their research office agreed. REF outputs also need to be ground-breaking, not world-shattering. To be too ahead of the curve of received wisdom is risky, especially outside the “objective” sciences—ideas that are too risky can be dismissed as silly. The REF is designed to pick present, not future winners.
The research assessment exercise started life as a simple measurement tool. It has grown into something completely different, a powerful and often perverse driver of academic behaviour. We need to get back to first principles, and design an assessment system that is at once simpler and more open.”
4 Legacies—where next?
In the interview in Nature quoted earlier, Professor McMillan reassured readers that “[m]ost academics at Lancaster saw the mock REF as little more than a ‘mildly annoying bit of bureaucracy.'” By the time the article appeared he would have known that some Lancaster academics had a different view of the matter. Especially those who had “failed their evaluations.”
These issues, of course, matter well beyond Lancaster. By now, the 2014 REF is being widely debated not only in the blogosphere and on social media but also in the Guardian and Times Higher Education. My own appeal against inclusion in the REF was the subject of a recent article in THE. Contrary to what Professor McMillan’s tone would lead us to expect, much of this discussion has been bitter, angry, and disappointed. One commentator on Professor Scott’s article went so far as to compare universities’ REF gamesmanship with bankers fixing the LIBOR rate, and suggested that—this being public money—”some Vice-Chancellors need to be sacked (and ideally, put in jail) in order to establish the principle that this kind of systematic manipulation is not an acceptable response to a basically sensible performance measurement exercise.”
This is no longer a debate that can be swept under the carpet with assurances that all is for the best in the best of all possible worlds. Departments like mine—and I have no doubt that there are many across disciplines and institutions up and down the land—have been left divided and demoralized, and the damage will last for years to come. Nobody trusts anybody any more. In pursuit of their short-term “strategic” objectives of higher rankings and more research income, university senior managers and their more or less enthusiastic underlings have squandered far more precious assets, wantonly jeopardizing individual careers and undermining the intellectual and moral foundations of academic life.
I believe HEFCE should conduct a full inquiry into what went wrong in REF 2014 and punish those responsible, but that is a matter for another post.
Asked by a reporter for Times Higher Education to comment on the arguments I have presented over the last few weeks on this blog, an (anonymous) spokeswoman for Lancaster University meanwhile continued to insist: “We are confident that we are making well-informed judgments as part of a careful decision-making process, which includes internal and external peer review.”
Well, they shouldn’t be. They really, really shouldn’t be. Not if the university expects its “two central strategic goals—to establish Lancaster as a global university and to strengthen its national position, which are reflected in being ranked in the top 100 in the world and in the top 10 in the UK”—to be taken seriously anywhere in the academic world, whatever the outcome of REF 2014.
*
The emperor marched in the procession under the beautiful canopy, and all who saw him in the street and out of the windows exclaimed: “Indeed, the emperor’s new suit is incomparable! What a long train he has! How well it fits him!” Nobody wished to let others know he saw nothing, for then he would have been unfit for his office or too stupid. Never emperor’s clothes were more admired.
“But he has nothing on at all,” said a little child at last. “Good heavens! listen to the voice of an innocent child,” said the father, and one whispered to the other what the child had said. “But he has nothing on at all,” cried at last the whole people. That made a deep impression upon the emperor, for it seemed to him that they were right; but he thought to himself, “Now I must bear up to the end.” And the chamberlains walked with still greater dignity, as if they carried the train which did not exist.
Hans Christian Andersen, “The Emperor’s New Suit” (1837)
[1] Used with the appellants’ knowledge and permission.
[2] As far as I know this figure has never been officially published.
[3] The group comprised the History Research Director (early modern historian), the Head of Department (medieval historian), the Associate Dean (Research) (Religious Studies), and the Dean of the Faculty of Arts and Social Sciences (Linguistics). The History Department currently has 24 full-time academic staff whose contractual duties include research. Of these, 1 works on ancient history, 8 on medieval and/or early modern history, and 15 mainly or exclusively on modern history.
[4] I once successfully challenged a decision on a grant application to the Social Sciences and Humanities Research Council of Canada on this basis, where all four appraisers’ reports praised my proposal highly and recommended that it be fully funded but the committee scored it too low for funding. SSHRC (for whom I have many times acted as a reviewer) introduced a new rule on the back of this case requiring that where panels reject the clear recommendations of a majority of assessors, they have to provide a written justification for doing so.
[5] I quote from the instructions I was given when doing a recent tenure/promotion review (as one assessor among several) for Harvard University.
[6] “Impact”—a new element in the 2014 REF by comparison with previous RAEs—accounts for 20% of every assessment. Impact is measured according to case studies, and the number of case studies required is a function of the number of staff submitted. Where a UoA sees itself as weak on impact there is an obvious temptation to reduce staff numbers submitted correspondingly. Impact case studies have proved to be enormously time-consuming; to imagine that one could be produced out of a hat to accommodate a rise in UoA numbers as a result of successful appeals strains credulity. Faculty Deans—the last court of appeal at Lancaster—are well aware of these structural pressures.
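For anyone who wants to see this structural pressure in numbers, here is a tiny sketch in Python of the case-study tariff as I understand it (two case studies for submissions of up to 14.99 FTE, and one more for each further ten FTE); the function name and the example staffing figures are mine, for illustration only.

import math

def case_studies_required(fte_submitted: float) -> int:
    # REF 2014 tariff as I understand it: 2 case studies up to 14.99 FTE,
    # then one more for each further 10 FTE (15-24.99 -> 3, 25-34.99 -> 4, ...).
    if fte_submitted <= 14.99:
        return 2
    return 2 + math.ceil((fte_submitted - 14.99) / 10)

# Hypothetical staffing figures: trimming a UoA from 16 FTE to 14 FTE drops the
# requirement from three case studies to two, which is precisely the temptation
# described above.
for fte in (14, 16, 24, 26):
    print(fte, "FTE ->", case_studies_required(fte), "case studies")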