By chance, websurfing for something completely different, I came across this today:

In contrast Rank Hypocrisies: The Insult of the REF by Derek Sayer was fantastic. It’s a blistering indictment of the lunacy of REF and persuaded me of a position I’d been slowly, up till now reluctantly, moving towards: metrics are obviously the lesser of two evils. They’re far from perfect (to say the least) but they would be a huge improvement on REF2014. He makes the case convincingly that the ‘peer review’ of the REF falls dramatically short of accepted standards of peer review. Far too few people are asked to review far too much. They also frequently have little to no specialist knowledge about the work they’re ‘reviewing’. He’s particularly interesting on the politics of the ‘internal REFs’ that have been conducted and paints a vivid picture of the vast REF bureaucracy being reduplicated within each university itself. He argues that this is an important tool for the disciplining of academic labour, extends the power of managers and the exercise as a whole (‘modernization’ of higher education) entrenches a small elite within the sector. To use the memorable phrase offered by Will Davies, which I’ve had stuck in my head for ages now, the whole thing is an exercise in heating up the floor to see who can keep hopping the longest.

Mark Carrigan, “Things I’ve been reading recently #2,” posted on his blog on February 17, 2015

This is a very useful summary of recent debates on twitter and elsewhere on the UK’s RAE-REF research assessment exercises.  I’m reposting it for information.  Thank you Ian Pace!

Desiring Progress

I am writing this piece at what looks like the final phase of the USS strike involving academics from pre-1992 UK universities. A good deal of solidarity has been generated through the course of the dispute, with many academics manning picket lines together, discovering common purpose and shared issues, and often noting how the structures and even physical spaces of modern higher education discourage such interactions when working. Furthermore, many of us have interacted regularly using Twitter, enabling the sharing of experiences, perspectives, vital data (not least concerning the assumptions and calculations employed for the USS future pensions model), and much else about modern academic life. As noted by George Letsas in the Times Higher Education Supplement (THES), Becky Gardiner in The Guardian, Nicole Kobie in Wired, and various others, the strike and other associated industrial action have embodied a wider range of frustrations amongst UK-based…


“REF 2014 cost almost £250 million,” Times Higher Education recently reported. This is the first official figure for the exercise, taken from the REF Accountability Review: Costs, benefits and burden published by HEFCE on July 13. While this is much lower than Robert Bowman’s “guesstimate” of £1bn (which I personally believe to be based on more realistic costings of staff time),[1] it is still over four times HEFCE’s previously announced costs for RAE2008 of £47m for Higher Education Institutions [HEIs] and £12m for HEFCE. This increase should raise eyebrows, since the REF promised to reduce the costs of the RAE for HEIs. We are “as committed to lightening the burden as we are to rigour in assessing quality,” then HEFCE Chief Executive David Eastwood assured the sector back in November 2007.

It would be nice to know why the costs of submitting to the REF have risen so astronomically. We might also ask whether this huge increase in REF costs for HEIs has delivered remotely commensurate benefits.

How much more did REF2014 cost than RAE2008?

The REF Accountability Review calculates the total cost of REF2014 at £246m, of which £14m fell on the funding bodies (HEFCE and its counterparts for Scotland, Wales, and Northern Ireland) and £232m on HEIs. Around £19m of the latter (8%) was for REF panelists’ time—a figure I suggest is either a serious underestimate or the best indication we could have that the REF is not the “rigorous” process of research evaluation it purports to be.[2]  This leaves £212m (92%) as the cost of submission. The review accepts an earlier Rand Europe estimate of the cost of preparing impact submissions as £55m, or 26% of the £212m (p. 6). “All other costs incurred by HEIs” totaled £157m (p. 1).

Believing the £47m figure for RAE2008 to be “a conservative estimate” (p. 17), the review revises it upward to £66m. Working with this new figure, and discounting the cost of impact case studies (on grounds that they were not required in RAE2008), the review concludes: “the cost of submitting to the last RAE was roughly 43% of the cost of submitting to the REF” (p. 2). Or to put it another way, submission costs for REF2014 were around 238% of those for RAE2008, before taking into account the added costs of impact submissions.

The brief of the review was to consider “the costs, benefits and burden for HEIs of submitting to the Research Excellence Framework (REF)” (p. 4). While one can see the logic of excluding the cost of impact submissions for purposes of comparison, the fact remains that HEIs did incur an additional £55m in real costs of preparing impact submissions, which were a mandatory element of the exercise. If impact is included in the calculation, as it should be, REF2014 submission costs come to around 321% of those for RAE2008. In other words, the REF cost HEIs more than three times as much as the last RAE.
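These percentages can be verified with a few lines of arithmetic (a sketch in Python; the £m figures are those quoted from the review above):

```python
# Cost comparison figures from the REF Accountability Review (£m).
rae2008_submission = 66    # review's upward-revised RAE2008 HEI submission cost
ref2014_ex_impact = 157    # REF2014 HEI submission cost excluding impact
ref2014_impact = 55        # Rand Europe estimate for impact submissions
ref2014_total = ref2014_ex_impact + ref2014_impact  # £212m

# The review's own comparison, with impact excluded:
print(f"RAE2008 as share of REF2014 (ex-impact): "
      f"{rae2008_submission / ref2014_ex_impact:.0%}")   # ~42%, i.e. "roughly 43%"
print(f"REF2014 (ex-impact) relative to RAE2008: "
      f"{ref2014_ex_impact / rae2008_submission:.0%}")   # ~238%

# With impact included, as argued in the text:
print(f"REF2014 (incl. impact) relative to RAE2008: "
      f"{ref2014_total / rae2008_submission:.0%}")       # ~321%
```

The 321% figure is simply £212m/£66m; the review's 43% is the same ratio inverted with impact stripped out.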

Why did REF2014 cost so much more than RAE2008?

In its comparison of the costs of the REF and the RAE the review lists a number of major changes introduced for REF2014 including reduction of the number of Units of Assessment [UOAs], revisions of definitions of A and C category staff, and introduction of an environment template (p. 14, figure 3). The 20 HEIs surveyed indicated that some of these had resulted in a decrease in costs while others were cost-neutral or entailed a “moderate increase” (less than 20%). Only two changes are claimed to have incurred a “substantial increase” (more than 20%) in costs.

“Interviewees and survey respondents suggested REF was more costly than RAE,” the review reports, “mainly because of the inclusion of the strand to evaluate the non-academic impact of research” (p. 12, my emphasis). Nearly 70% of respondents reported a substantial increase and another 20% a moderate increase in costs due to impact (p. 13). However, the REF Accountability Review‘s own figures show that this perception is incorrect. Impact represented only 26% of HEIs’ £212m submission costs. After subtracting £55m for impact (and discounting REF panelists’ time), the cost of preparing submissions still rose from £66m to £157m between RAE2008 and REF2014, i.e. by approximately £91m, to around 238% of the earlier figure.

The review also singles out “the strengthening of equality and diversity measures, in relation to individual staff circumstances” as a factor that “increased the total cost of submission for most HEIs” (p. 12). This is the only place in the report where an item is identified as “a disproportionately costly element of the whole process” (pp. 2, 21, my emphasis). But if anything it is the attention devoted to this factor in the review that is disproportionate (and potentially worrying insofar as the review suggests “simplification” of procedures for dealing with special circumstances, p. 3).

While the work of treating disabled, sick, and pregnant employees equitably may have been “cumbersome” for HEI managers (p. 2), dealing with special circumstances “took an average 11% of the total central management time devoted to REF” and “consumed around 1% of the effort on average” at UOA level (p. 21). This amounts to £6m (or 4%) of HEIs’ £157m non-impact submission costs—a drop in the ocean.
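One plausible reconstruction of the £6m figure, assuming the review's 11% and 1% shares apply respectively to the central (£46m) and UOA-level (£111m) cost pools it reports elsewhere:

```python
# Reconstructing the ~£6m special-circumstances cost (a sketch; the
# allocation of the 11% and 1% shares to these two pools is an
# assumption, not spelled out in the review itself).
central_costs = 46   # £m, central management costs excluding impact
uoa_costs = 111      # £m, UOA-level costs
total_ex_impact = central_costs + uoa_costs  # £157m

special = 0.11 * central_costs + 0.01 * uoa_costs  # ~£6.2m
print(f"Special circumstances: ~£{special:.1f}m "
      f"({special / total_ex_impact:.0%} of £{total_ex_impact}m)")  # ~4%
```

On these assumptions the numbers close neatly: roughly £6m, or about 4% of the £157m non-impact submission cost.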

We are left, then, with an increase in HEI submission costs between RAE2008 and REF2014 of around £85m that is not attributable to changes in HEFCE’s formal submission requirements.  To explain this increase we need to look elsewhere.

The review divides submission costs between central management costs and costs at UOA level. Of £46m central costs (excluding impact), £44m (or 96%) was staff costs, including the costs of REF management teams (56%) and time spent by senior academics on steering committees (18%). UOA-level costs were substantially greater (£111m). Of these, “87% … can be attributed to UOA review groups and academic champions and to submitted and not submitted academic staff” and £8m to support staff (p. 7). Staff time was thus overwhelmingly the most significant submission cost for HEIs.

When we look at how this time was spent, the review says “the REF element on research outputs, which included time spent reviewing and negotiating the selection of staff and publications” was “the main cost driver at both central management level and UOA level” (p. 2). At central management level this “was the most time-consuming part of the REF submission” (p. 18), with outputs taking up 40% of the time devoted to the REF. At UOA level 55% of the time devoted to REF—”the largest proportion … by a very substantial margin”—was “spent on reviewing and negotiating the selection of staff and publications” (p. 19). For HEIs as a whole, “the estimated time spent on the output element as a proportion of the total time spent on REF activities is 45% (excluding impact)” (p. 17).

The conclusion is inescapable. The principal reason for the increased cost of REF2014 over RAE2008 was NOT impact, and still less special circumstances, but the added time spent on selecting staff and outputs for submission.

Why did selecting staff and outputs cost so much more in REF2014?

The review obliquely acknowledges this in its recognition that larger and/or more research-intensive HEIs devoted substantially more time to REF submission than others (p. 10) and that “several HEIs experienced submitting as particularly costly because preparing for the REF was organized as an iterative process. Some HEIs ran two or three formal mock REFs, with the final [one] leading directly into the REF submission” (p. 27). For institutions that used them (which we can assume most research-intensive universities did) mock REFs were “a significant cost” (p. 2).

But the review nowhere explains why this element should have consumed so much more staff time in REF2014 than it did in RAE2008. The most likely reason for the increase lies in a factor the review does not even mention: HEFCE’s changes to the QR funding formula (which controls how money allocated on the basis of the REF is distributed) in 2010-11, which defunded 2* outputs and increased the value of 4* outputs relative to 3* from 7:3 to 3:1. At that point, in the words of Adam Tickell, who was then pro-VC for research at the University of Birmingham, universities “had no rational reason to submit people who haven’t got at least one 3* piece of work.” More importantly, they had an incentive to eliminate every 2* (or lower) output from their submissions because 2* outputs would lower their ranking without any compensatory gain in income. Mock REF processes were designed for this purpose.
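The incentive the 2010-11 defunding created can be shown with a toy calculation (the 3:1:0 weighting of 4*, 3* and 2* outputs follows the text; the submission profiles are invented for illustration):

```python
# Post-2010-11 QR funding weights: 4*:3* = 3:1, 2* and below defunded.
WEIGHTS = {4: 3, 3: 1, 2: 0, 1: 0, 0: 0}

def qr_volume(scores):
    """Funding-relevant weighted volume of a set of output scores."""
    return sum(WEIGHTS[s] for s in scores)

def gpa(scores):
    """Grade point average, the basis of the league tables."""
    return sum(scores) / len(scores)

submission = [4, 4, 3, 3]        # four strong outputs
with_2star = submission + [2]    # same submission plus one 2* output

print(qr_volume(submission), qr_volume(with_2star))  # 8 and 8: no funding gain
print(gpa(submission), gpa(with_2star))              # 3.5 vs 3.2: GPA (and rank) falls
```

Adding the 2* output brings in no extra QR income but drags the GPA down, which is exactly the calculation the mock REFs were built to perform.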

The single most significant driver of the threefold increase in costs between RAE2008 and REF2014 was not the introduction of impact or any other change to HEFCE’s rules for submission but competition between universities driven by HEFCE’s 2010-11 changes to the QR funding formula. The key issue here is less the amount of QR money HEIs receive from HEFCE than the prestige attached to ranking in the league tables derived from REF evaluations. A relatively small difference in an institution’s GPA can make a significant difference in its ranking.

The widespread gaming that has resulted has only served to further discredit the REF. Does anybody believe Cardiff University’s boast that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise that confirms our place as a world-leading university” (my emphasis), when Cardiff actually achieved that rank by entering only 62% of eligible staff in its submission? This percentage is lower than that of all but one of the 28 British universities listed in the top 200 in the 2014-15 Times Higher Education World University Rankings (in which Cardiff is ranked a respectable but hardly “world-leading” 201st-225th, or 29= among the British institutions). As I have shown elsewhere, this is a systematic pattern that profoundly distorts REF results.

HEFCE would no doubt say that it does not produce or endorse league tables produced on the basis of the REF.  But to act as if it therefore had no responsibility for taking into account the consequences of such rankings is disingenuous in the extreme.

This competition between the research-intensive universities is only likely to intensify given HEFCE’s further “tweaking” of the QR funding formula in February 2015, which changed the relative weighting of 4* to 3* outputs from 3:1 to 4:1.

Is the REF cost efficient?

The review defends the increase in costs between RAE2008 and REF2014 on the grounds that the total expenditure remains low relative to the amount of money that is allocated on its basis. We are reassured that REF costs amount to “less than 1%” of total public expenditure on research and “roughly 2.4% of the £10.2 billion in research funds expected to be distributed by the UK’s funding bodies” over the next six years (p. 1). This is spin.

The ratio of REF costs to total expenditures on research funding is irrelevant, since Research Councils (which distribute a larger portion of the overall public research funding budget) allocate grants to individuals and teams, not HEIs, through a competitive process that has nothing to do with the REF. The ratio of REF costs to QR funding allocated through HEFCE and the other funding bodies is more relevant, but the figure given is inaccurate, because QR expenditures that are not allocated through the REF (e.g. the charity support fund and business research element) are included in the 2.4% calculation. Once these are excluded the figure rises to around 3.3%. By comparison, “the funding bodies estimated the costs of the 2008 RAE in England to be around 0.5% of the value of public research funding that was subsequently allocated with reference to its results” (p. 5). If this is supposed to be a measure of cost-efficiency, REF2014 scores very much worse than RAE2008.
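Back-calculating the funding pots implied by these percentages (a sketch; the six-year funding-body total of £10.2bn is the figure consistent with the review's 2.4%, and the ~£7.5bn REF-allocated pot is inferred from the 3.3%, not stated anywhere in the review):

```python
# Implied funding pots behind the cost-efficiency percentages (£bn).
ref_cost = 0.246          # total cost of REF2014, £246m

funds_all = 10.2          # six-year total distributed by the funding bodies
print(f"REF cost as share of all funding-body funds: "
      f"{ref_cost / funds_all:.1%}")           # ~2.4%, the review's figure

# Excluding QR elements not allocated via the REF raises the ratio to ~3.3%,
# which implies a REF-allocated pot of roughly:
funds_ref_allocated = ref_cost / 0.033
print(f"Implied REF-allocated pot: ~£{funds_ref_allocated:.1f}bn")  # ~£7.5bn
```

Against RAE2008's 0.5%, a 3.3% overhead on the funds actually allocated via the REF is the relevant like-for-like comparison.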

“The cost and burden of the REF,” says the review, “should be the minimum possible to deliver a robust and defensible process.” “Changes new to REF 2014,” it adds, “have been adopted where it is judged they can bring demonstrable improvements which outweigh the cost of implementing them” (p. 5, my emphasis).

The review does attempt to make the case that the benefits of including impact in the REF exceeded the additional costs of measuring it, noting that “it yielded tremendous insight into each institution’s wider social and economic achievements and was widely welcomed as both a platform for marketing and internal learning” (pp. 2-3).[3]  Otherwise the major benefits claimed for the REF—reputational dividend, provision of information that can be used for performance management and forward planning, etc.—are exactly the same as those previously claimed for the RAE.

It is notable that the review nowhere attempts to justify the single most important factor in the additional costs of REF2014 as compared with RAE2008, which was hugely greater time spent on staff selection driven by competition between HEIs.

Many have argued that the human costs of this competition are inordinately high (the review confesses that it “does not include an estimate of non-time related burdens on staff, such as the stress on staff arising from whether they would be selected for the REF,” p. 4). What is clear from the REF Accountability Review is that there is an exorbitant financial cost as well—both in absolute terms and in comparison to the RAE.

When considering the cost-effectiveness of the exercise, we would do well to remember that the considerable sums of money currently devoted to paying academics to sit on committees to decide which of their colleagues should be excluded from the REF, in the interest of securing their university a marginal (and in many cases misleading) advantage in the league tables, could be spent in the classroom, the library, and the lab.  The QR funding formula has set up a classic prisoner’s dilemma, in which what may appear to be “rational” behavior for ambitious research-intensive HEIs has increasingly toxic consequences for the system as a whole.

NOTES

[1] The review relies on figures for staff time spent on the REF provided by HEI research managers. It specifically asks managers to distinguish “between REF-related costs and normal (“business as usual”) quality assurance and quality management arrangements for research” (p. 11) and exclude the latter from their responses. I believe this distinction is untenable insofar as UK HEIs’ university-level “arrangements” for “quality assurance and quality management” in research only take the forms they do because they have evolved under the RAE/REF regime. Where national research assessment regimes do not exist (as in the US), central management and monitoring of research is not “normal” and “business as usual” looks very different. For example, it is far less common to find “research directors” at department level.

[2] The review’s figures of 934 academic assessors, each spending 533 hours (or 71 days) to assess 191,950 outputs, yield an average time of 2.59 hours available to read each output. From this we must deduct (1) time spent reading the 7,000 impact case studies (avg. 7.5 per assessor) and all other elements of submissions (environment templates, etc.), and (2) time spent attending REF panel meetings. I would suggest this brings the time available for assessing each output down to under two hours. If we further assume that each output is read by a minimum of two assessors, individual assessors will spend, on average, under an hour reading each output. I would argue—especially in the case of large research monographs in the humanities and social sciences—that this is not enough to deliver the informed, “robust” assessment HEFCE claims (even assuming individual outputs are read by panelists with expertise in the specific area, which will often not be the case). Anyone reviewing a journal article submission, a manuscript for a book publisher, or a research grant application would expect to spend a good deal more time than this. In this context, the figure given in the review for the higher ratio of Research Council costs relative to the funds allocated on their basis (6%) may not indicate the superior cost efficiency of the REF, as the review implies (p. 1), so much as the relative lack of rigor of its evaluations of outputs.
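The arithmetic in this note checks out directly (the 25% deduction for case studies, other submission elements, and panel meetings is an illustrative assumption, not a figure from the review):

```python
# Time available per output, from the review's headline figures.
assessors = 934
hours_each = 533
outputs = 191_950

total_hours = assessors * hours_each          # 497,822 assessor-hours
print(total_hours / outputs)                  # ~2.59 hours per output, read once

# Deduct time for impact case studies, environment templates and panel
# meetings (assumed here at 25% of the total), then assume each output
# is read by two assessors:
usable = total_hours * 0.75
print(usable / (outputs * 2))                 # < 1 hour per reader per output
```

Under any plausible overhead assumption the per-reader time falls below an hour, which is the core of the rigor objection.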

[3] I have serious doubts as to the ways in which these perceived advantages (from the point of view of university managers) may come to drive research agendas at the expense of basic research, but that is not relevant to the present argument.

HEFCE letter

And the tapestry of lies, damned lies and statistics that is REF2014 keeps on unraveling.

I learn from this morning’s Twitterfeed that Dr Dan Lockton, of the Royal College of Art, and Professor Melissa Terras, Professor of Digital Humanities and Director of the UCL Centre for Digital Humanities, have received identical letters in response to requests under the Freedom of Information Act for HEFCE to disclose information held on them in connection with the REF.

Dr Lockton had asked to see “data held by HEFCE concerning myself or my work, as part of the REF, including any comments, assessments or other material.”

HEFCE responded that they did not hold the information he was seeking and referred him to an FAQ on the REF website:

Can you provide the scores for my outputs that were submitted to the REF?

Individual outputs were assessed in order to produce the output sub-profiles for each submission. Once the sub-profiles were complete, the scores for individual outputs were no longer required and have been destroyed. In accordance with data protection principles, we no longer hold the scores for individual outputs as they constitute personal data, which should not be held for longer than required to fulfil their purpose.

When it first emerged that RAE2008 was making the same use of such Orwellian memory holes, an (anonymous) panelist explained to Times Higher Education that “It is for our own good. The process could become an absolute nightmare if departmental heads or institutions chose to challenge the panels and this information was available.”[1]

HEFCE’s letter to Dr Lockton goes on to emphasize that:

The purpose of the REF is to assess the quality of research and produce outcomes for each submission in the form of sub-profiles and an overall quality profile. These outcomes are then used to inform funding, provide accountability for public investment and provide benchmarking information. The purpose of the REF is not to provide a fine-grained assessment of each individual’s contribution to a submission and the process is not designed to deliver this.

Yes, but. At the risk of appearing obtuse, I would have thought that when 65% of the overall quality profile rests on REF subpanels’ assessment of the quality of individuals’ outputs, we would expect “fine-grained assessment” of those outputs. Is this not why we have this cumbersome, time-consuming, expensive process of panel evaluation—as distinct, for instance, from using metrics—to begin with?

If it’s not fine-grained assessment, what sort of assessment is it?  And how can we trust it to provide a reliable basis for funding decisions, accountability for public investment, or benchmarking?

In the immortal words of Amy Winehouse, what kind of fuckery is this?

[1] Zoe Corbyn, ‘Panels ordered to shred all RAE records’. Times Higher Education, 17 April 2008.

 

1.

The rankings produced by Times Higher Education and others on the basis of the UK’s Research Assessment Exercises (RAEs) have always been contentious, but accusations of universities’ gaming submissions and spinning results have been more widespread in REF2014 than any earlier RAE. Laurie Taylor’s jibe in The Poppletonian that “a grand total of 32 vice-chancellors have reportedly boasted in internal emails that their university has become a top 10 UK university based on the recent results of the REF”[1] rings true in a world in which Cardiff University can truthfully[2] claim that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise” from 22nd in RAE2008. Cardiff ranks 5th among universities in the REF2014 “Table of Excellence,” which is based on the GPA of the scores assigned by the REF’s “expert panels” to the three elements in each university’s submission (outputs 65%, impact 20%, environment 15%)—just behind Imperial, LSE, Oxford and Cambridge. Whether this “confirms [Cardiff’s] place as a world-leading university,” as its website claims, is more questionable.[3] These figures are a minefield.
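The weighting of the overall quality profile can be written out explicitly (the element scores below are invented, purely to show the calculation; the 65/20/15 weights are those given above):

```python
# Overall REF2014 quality score as a weighted sum of the three elements.
def overall_gpa(outputs, impact, environment):
    """Outputs 65%, impact 20%, environment 15%."""
    return 0.65 * outputs + 0.20 * impact + 0.15 * environment

# Hypothetical element GPAs on the 0-4 scale:
print(round(overall_gpa(3.2, 3.6, 3.4), 2))  # 3.31
```

Because outputs carry 65% of the weight, small shifts in the output sub-profile dominate movements in the league tables.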

Although HEFCE encouraged universities to be “inclusive” in entering their staff in REF2014, they were not obliged to return all eligible staff and there were good reasons for those with aspirations to climb the league tables to be more “strategic” in staff selection than in previous RAEs. Prominent among these were (1) HEFCE’s defunding of 2* outputs from 2011, which meant outputs scoring below 3* would now negatively affect a university’s rank order without any compensating gain in QR income, and (2) HEFCE’s pegging the number of impact case studies required to the number of staff members entered per unit of assessment, which created a perverse incentive to exclude research-active staff if this would avoid having to submit a weak impact case study.[4] Though the wholesale exclusions feared by some did not materialize across the sector, it is clear that some institutions were far more selective in REF2014 than in RAE2008.

Unfortunately, data that would have permitted direct comparisons with numbers of staff entered by individual universities in RAE2008 were never published, but Higher Education Statistics Agency (HESA) figures for FTE staff eligible to be submitted allow broad comparisons across universities in REF2014. It is evident from these that selectivity, rather than an improvement in research quality per se, played a large part in Cardiff’s “meteoric rise” in the rankings. The same may be true for some other schools that significantly improved their positions, among them Kings (up to 7th in 2014 from 22= in 2008), Bath (14= from 20=), Swansea (22= from 56=), Cranfield (31= from 49), Heriot-Watt (33 from 45), and Aston (35= from 52=). All of these universities except Kings entered fewer than 75% of their eligible staff members, and Kings has the lowest percentage (80%) of any university in the REF top 10 other than Cardiff itself.

Cardiff achieved its improbable rank of 5th on the basis of a submission that included only 62% of eligible staff. This is the second-lowest percentage of any of the 28 British universities that are listed in the top 200 in the 2014-15 Times Higher Education World University Rankings (of these schools only Aberdeen entered fewer staff, submitting 52%). No other university in this cohort submitted less than 70% of eligible staff, and half (14 universities) submitted over 80%. Among the top schools, Cambridge entered 95% of eligible staff, Imperial 92%, UCL 91% and Oxford 87%.

Many have suggested that “research power” (which is calculated by multiplying the institution’s overall rounded GPA by the total number of full-time equivalent staff it submitted to the REF) gives a fairer indication of a university’s place in the national research hierarchy than GPA rankings alone. By this measure, Cardiff falls to a more credible but still respectable 18th. But when measured by “research intensity” (that is, GPA multiplied by the percentage of eligible staff entered), its rank plummets from 5th to 50th. To say that this provides a more accurate indication of its true standing might be overstating the case, but it certainly underlines why Cardiff does not belong among “world-leading” universities.  Cardiff doubtless produces some excellent research, but its overall (and per capita) performance does not remotely justify comparisons with Oxford, Cambridge, or Imperial—let alone Caltech, Harvard, Stanford, Princeton, MIT, UC-Berkeley and Yale (the other universities in the THE World University Rankings top 10).  In this sense the GPA Table of Excellence can be profoundly misleading.
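The difference between the three measures is easy to demonstrate with invented numbers (the two institutions below are hypothetical, not actual REF data):

```python
# GPA vs research power vs research intensity (toy example).
def research_power(gpa, fte_submitted):
    """Rounded GPA times FTE staff actually submitted."""
    return gpa * fte_submitted

def research_intensity(gpa, share_submitted):
    """GPA times the proportion of eligible staff submitted."""
    return gpa * share_submitted

# A selective submitter vs an inclusive one of similar real strength:
selective = dict(gpa=3.3, fte=400, eligible=645)    # ~62% of staff submitted
inclusive = dict(gpa=3.1, fte=900, eligible=1000)   # 90% of staff submitted

for name, u in [("selective", selective), ("inclusive", inclusive)]:
    share = u["fte"] / u["eligible"]
    print(name,
          research_power(u["gpa"], u["fte"]),
          round(research_intensity(u["gpa"], share), 2))
```

The selective institution wins on raw GPA, yet the inclusive one comes out ahead on both research power and research intensity, which is the Cardiff pattern in miniature.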

“To their critics,” writes Paul Jump in Times Higher Education, “such institutions are in essence cheating because in reality their quality score reflects the work produced by only a small proportion of their staff.”[5] I am not sure the accusation of cheating is warranted, because nobody is doing anything here that is outside HEFCE’s rules. The problem is rather that the current REF system rewards—and thereby encourages—bad behavior, while doing nothing to penalize the most egregious offenders like Cardiff.

The VCs at Bristol (11= in the REF2014 GPA table) and Southampton (18=, down from 14= in 2008) might be forgiven for ruefully reflecting that they, too, might now be boasting that they are “a top ten research university” had they not chosen to submit 91% and 90% of their eligible faculty respectively—a submission rate that on any reasonable criteria (as distinct from HEFCE’s rules) should itself be regarded as a mark of research excellence. Measured by research intensity Bristol comes in at 5= (jointly with Oxford) and Southampton 8= (jointly with Queen’s University Belfast, which submitted 95% of its staff and is ranked 42= on GPA). Meantime the VCs at St Andrews (down from 14= to 21=, 82% of eligible staff submitted), Essex (11th to 35=, 82% submitted), Loughborough (28= to 49=, 88% submitted) and Kent (31= to 49=, 85% submitted) may by now have concluded that—assuming they hold onto their jobs—they will have no alternative other than to be much more ruthless in culling staff for any future REF.

2.

The latest Times Higher Education World University Rankings puts Cardiff just outside the top 200, in the 201-225 group—which places it 29= among UK universities, along with Dundee, Newcastle, and Reading. Taking GPA, research power and research intensity into account—as we surely should, in recognition that not only the quality of research outputs but the number and proportion of academic staff who are producing them are also necessary elements in evaluating any university’s overall contribution to the UK’s research landscape—such a ranking seems intuitively to be just about right.

I have shown elsewhere[6] that there was, in fact, a striking degree of overall agreement between the RAE2008 rankings and the Times Higher Education World University Rankings. Repeating the comparison for UK universities ranked in the top 200 in the THE World University Rankings for 2014-15 and the REF2014 GPA-based “Table of Excellence” yields similar findings. The data are summarized in Table 1.

 

Table 1: REF2014 performance of universities ranked in the top 200 in Times Higher Education World University Rankings 2014-15

University | THE World University Rankings 2014-15 | REF2014 ranking by GPA (RAE2008) | REF2014 ranking by research power | REF2014 ranking by research intensity | Percentage of eligible staff submitted
1-50
Oxford | 3 | 4 (4=) | 2 | 5= | 87
Cambridge | 5 | 5 (2) | 3 | 2 | 95
Imperial | 9 | 2 (6) | 8 | 3 | 92
UCL | 22 | 8= (7) | 1 | 4 | 91
LSE | 34 | 3 (4=) | 28 | 7 | 85
Edinburgh | 36 | 11= (12) | 4 | 12= | 83
Kings | 40 | 7 (22=) | 6 | 17 | 80
50-100
Manchester | 52 | 17 (8) | 5 | 26= | 78
Bristol | 74 | 11= (14) | 9 | 5= | 91
Durham | 83 | 20 (14=) | 20 | 24= | 79
Glasgow | 94 | 24 (33=) | 12 | 15 | 84
100-150
Warwick | 103 | 8= (9) | 15 | 11 | 83
QMUL | 107 | 11= (13) | 22 | 34= | 74
St Andrews | 111= | 21= (14=) | 22 | 16 | 82
Sussex | 111= | 40 (30) | 34 | 42= | 73
York | 113 | 14= (10) | 23 | 32 | 75
Royal Holloway | 118 | 26= (24=) | 40 | 31 | 77
Sheffield | 121 | 14= (14=) | 13 | 33 | 74
Lancaster | 131 | 18= (20=) | 26 | 29 | 77
Southampton | 132 | 18= (14=) | 11 | 8= | 90
Leeds | 146 | 21= (14=) | 10 | 34= | 75
Birmingham | 148 | 31 (26) | 14 | 23 | 81
150-200
Exeter | 154 | 30 (28=) | 21 | 19= | 82
Liverpool | 157 | 33 (40) | 19 | 46= | 70
Nottingham | 171 | 26= (24=) | 7 | 28 | 79
Aberdeen | 178 | 46= (38) | 29 | 57 | 52
UEA | 198 | 23 (35=) | 36 | 37 | 75
Leicester | 199 | 53 (51) | 24 | 39 | 78

Seven UK universities make the top 50 in the 2014-15 THE World University Rankings: Oxford, Cambridge, Imperial, UCL, LSE, Edinburgh, and Kings. Six of these are also in the REF2014 top 10, while the other (Edinburgh) is only just outside it at 11=. Four of the leading five institutions are the same in both rankings (the exception being UCL, which is 8= in REF2014), though not in the same rank order. Of the 11 UK universities in the THE top 100, only one is outside the REF top 20 (Glasgow, at 24th). Of the 22 UK universities in the THE top 150, only two (Birmingham, 31 in REF, and Sussex, 40 in REF) are outside the REF top 30. Of the 28 UK universities in the THE top 200, only two (Aberdeen at 46= and Leicester at 53) rank outside the REF top 40.

Conversely, only two universities in the REF2014 top 20, Cardiff at 6 and Bath at 14=, do not make it into the THE top 200 (their respective ranks are 201-225 and 301-350). Other universities that are ranked in the top 40 in REF2014 but remain outside the THE top 200 are Newcastle (26=), Swansea (26=), Cranfield (31), Heriot-Watt (33), Essex (35=), Aston (35=), Strathclyde (37), Dundee (38=) and Reading (38=).

Table 2 provides data on the performance of selected UK universities that submitted to REF2014 but are currently ranked outside the THE world top 200.

Table 2. REF2014 performance of selected UK universities outside top 200 in Times Higher Education World University Rankings 2014-15

University THE World University Rankings 2014-15 REF2014 ranking by GPA (RAE 2008) REF2014 ranking by research power REF2014 ranking by research intensity Percentage of eligible staff submitted
Cardiff 201-225 6 (22=) 18 50 62
Dundee 201-225 38= (40=) 39 49 68
Newcastle 201-225 26= (27) 16 26= 80
Reading 201-225 38= (42) 27 19= 83
Birkbeck 226-250 46= (33=) 48 30 81
Plymouth 276-300 66= (75=) 47 59 50
Bath 301-350 14= (20=) 35 34= 74
Bangor 301-350 42= (52=) 59 51 63
Essex 301-350 35= (11) 45 22 82
Aberystwyth 350-400 58= (45=) 51 46= 76
Aston 350-400 35= (52=) 69 60 43
Portsmouth 350-400 65 (68=) 55 80 27
Swansea – 26= (52=) 42 42= 71
Cranfield – 31= (49) 61 64 37
Heriot-Watt – 33 (45) 44 53 57

Dundee, Newcastle and Reading only just miss the THE cut (all three are in the 201-225 bracket). All three outscored Aberdeen and Leicester in the REF, even though both of those sit above them in the THE rankings (in Leicester's case, at 199, only marginally so); but only Newcastle does substantially worse in the THE rankings than in the REF. It is ranked 26= in the REF alongside Nottingham and Royal Holloway, ahead of Leicester (53), Aberdeen (46=), Sussex (40), Liverpool (33), Birmingham (31) and Exeter (30), all of which are in the top 200 of the THE World Rankings. While there was a yawning gulf between Essex's RAE2008 ranking of 11th and its THE ranking in the 301-350 group, the latter does seem to have presaged Essex's precipitous fall from grace to 35= in REF2014. Conversely, the THE's placing of Plymouth in the 276-300 group puts it considerably higher than its REF2014 rank of 66= would lead us to expect. This is not the case with most of the UK universities listed in the lower half of the THE top 400: Birkbeck, Bangor, Aberystwyth and Portsmouth all found themselves outside the top 40 in REF2014.

The greatest discrepancies between REF2014 and the THE World Rankings come with Cardiff (6 in REF, 201-225 in THE), Bath (14= in REF, 301-350 in THE), Swansea (26= in REF, not among the THE top 400), Aston (35= in REF, 350-400 in THE), and Cranfield and Heriot-Watt (31= and 33 respectively in REF, yet not among the THE top 400). On the face of it, these cases flatly contradict any claim that THE (or other similar) rankings are remotely accurate predictors of REF performance. I would argue, on the contrary, that these are the exceptions that prove the rule. All of these institutions were prominent among the universities identified above that inflated their GPA by submitting smaller percentages of their eligible staff to REF2014. Were we to adjust raw GPA figures by research intensity, we would get a much closer match, as Table 3 shows.

Table 3. Comparison of selected universities' performance in the THE World University Rankings 2014-15 and REF2014, by GPA and research intensity.

University THE 2014-15 REF2014 intensity REF2014 GPA
Cardiff 201-225 50 6
Bath 301-350 34= 14=
Swansea – 42= 26=
Aston 350-400 60 35=
Cranfield – 64 31=
Heriot-Watt – 53 33

The most important general conclusion to emerge from this discussion is that, despite some outliers, there is a remarkable degree of agreement between the top 40 in REF2014 and the top 200 in the THE 2014-15 World University Rankings, and the correlation increases the higher we go in the tables. Where there are major discrepancies, they are usually explained by selective staff submission policies.
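The intensity adjustment invoked in this comparison amounts to a simple calculation: scale an institution's raw GPA by the proportion of eligible staff it actually submitted, then re-rank. A minimal sketch in Python, using invented GPA and submission figures (not the actual REF2014 data) purely to illustrate how a selective submitter falls down the adjusted table:

```python
# Intensity-weighted GPA = raw GPA x fraction of eligible staff submitted.
# The institutions and figures below are hypothetical, for illustration only.

universities = {
    # name: (raw grade point average, fraction of eligible staff submitted)
    "Alpha": (3.20, 0.62),  # high GPA achieved by selective submission
    "Beta":  (3.05, 0.95),  # slightly lower GPA, near-complete submission
}

def intensity_weighted_gpa(gpa, fraction_submitted):
    """Scale raw GPA by the share of eligible staff actually entered."""
    return gpa * fraction_submitted

# Rank institutions by the adjusted score, highest first.
ranked = sorted(
    ((name, intensity_weighted_gpa(g, f)) for name, (g, f) in universities.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

On these made-up figures the selective submitter "Alpha" drops below "Beta" once intensity is taken into account, which is exactly the pattern Table 3 displays for Cardiff, Bath, Swansea, Aston, Cranfield and Heriot-Watt.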

One other correlation is worth noting at this point. All 11 of the British universities in the THE top 100 are members of the Russell Group, as are 10 of the 17 British universities ranked between 100 and 200. The other six universities in this latter cohort (St Andrews, Sussex, Royal Holloway, Lancaster, UEA, Leicester) were all members of the now-defunct 1994 Group. Only one British university in the THE top 200 (Aberdeen) belonged to neither the Russell Group nor the 1994 Group. Conversely, only two Russell Group universities, Newcastle and Queen's University Belfast, did not make the top 200 in the THE rankings.[7] In 2013-14 Russell Group and former 1994 Group universities between them received almost 85% of QR funding. Here, too, an enormous amount of money, time, and acrimony seems to have been expended on a laborious REF exercise that merely confirms what THE rankings have already shown.

3.

The most interesting thing about this comparative exercise is that the Times Higher Education World University Rankings not only make no use of RAE/REF data, but rely on quantitative methodologies that have repeatedly been rejected by the British academic establishment in favor of the “expert peer review” supposedly offered by REF panels. THE gives 30% of the overall score for the learning environment, 7.5% for international outlook, and 2.5% for industry income. The remaining 60% is based entirely on research-related measures, of which “the single most influential of the 13 indicators,” accounting for 30% of the overall THE score, is “the number of times a university’s published work is cited by scholars globally” as measured by the Web of Science. The rest of the research score is derived from research income (6%), ‘research output scaled against staff numbers’ (6%, also established through the Web of Science), and ‘a university’s reputation for research excellence among its peers, based on the 10,000-plus responses to our annual academic reputation survey’ (18%).
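The weighting scheme just described is, at bottom, a weighted sum of per-indicator scores. A minimal sketch, using the weights quoted above; the indicator scores for the example institution are invented for illustration, and the indicator names are my own shorthand, not THE's official labels:

```python
# THE 2014-15 overall score as a weighted sum of indicator scores (0-100 scale).
# Weights are those quoted above; the example institution's scores are invented.

WEIGHTS = {
    "teaching":        0.300,  # learning environment
    "international":   0.075,  # international outlook
    "industry":        0.025,  # industry income
    "citations":       0.300,  # citation impact (Web of Science)
    "research_income": 0.060,
    "research_output": 0.060,  # output scaled against staff numbers
    "reputation":      0.180,  # academic reputation survey (research)
}

def overall_score(indicators):
    """Weighted sum of per-indicator scores, each on a 0-100 scale."""
    return sum(WEIGHTS[key] * indicators[key] for key in WEIGHTS)

example = {  # hypothetical institution
    "teaching": 80, "international": 90, "industry": 50,
    "citations": 85, "research_income": 70, "research_output": 75,
    "reputation": 60,
}
print(round(overall_score(example), 1))  # prints 77.0
```

Note that the four research-related weights sum to the 60% research share, and citations alone carry as much weight (30%) as the entire learning-environment component.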

The comparisons undertaken here strongly suggest that such metrics-based measures have proved highly reliable predictors of performance in REF2014—just as they did in previous RAEs. To be sure, there are differences in the fine detail of how institutions are ordered in the THE and the REF, but in such cases can we be confident that it is the REF panels’ highly subjective judgments of quality that are the more accurate? To suggest there is no margin for error in tables where the difference in GPA between 11th (Edinburgh, 3.18) and 30th (Exeter, 3.08) is a mere 0.1 points would be ridiculous. I have elsewhere suggested (here and here) that there are in fact many reasons why such confidence would be totally misplaced, including lack of specialist expertise among panel members and lack of time for reading outputs in the depth required.[8] But my main point here is this.

If metrics-based measures can produce similar results to those arrived at through the REF’s infinitely more costly, laborious and time-consuming process of ‘expert review’ of individual outputs, there is a compelling reason to go with the metrics: not because they are necessarily a valid measure of anything, but because they are as reliable as the alternative (whose validity is no less dubious, for different reasons) and a good deal more cost-efficient. The benefits for collegiality and staff morale of universities not having to decide whom to enter in or exclude from the REF might be seen as an additional reason for favoring metrics. I am sure that if HEFCE put their minds to it they could come up with a more sophisticated basket of metrics than Times Higher Education’s, one capable of meeting many of the standard objections to quantification. Supplementing the Web of Science with Publish or Perish or other citation indices that capture books as well as articles might be a start. I hope James Wilsdon’s committee will come up with some useful suggestions for ways forward.

[1] Laurie Taylor, “We have bragging rights!” in The Poppletonian, Times Higher Education, 8 January 2015.

[2] Well, not quite. Cardiff is actually ranked 6th in the REF2014 “Table of Excellence,” which is constructed by Times Higher Education on the basis of the grade point average (GPA) of the marks awarded by REF panels, but the #1 spot is held not by a university but by the Institute of Cancer Research (which submitted only two UoAs). This table and others drawn upon here for “research power” and “research intensity” are all compiled by Times Higher Education.

[3] “REF 2014,” Cardiff University website at http://www.cardiff.ac.uk/research/impact-and-innovation/quality-and-performance/ref-2014

[4] Paul Jump, “Careers at risk after case studies ‘game playing’, REF study suggests.” Times Higher Education, 22 January 2015.

[5] Paul Jump, “REF 2014 rerun: who are the ‘game players’?” Times Higher Education, 1 January 2015.

[6] See Derek Sayer, Rank Hypocrisies: The Insult of the REF. London: Sage, 2014.

[7] I have discussed Newcastle already. Queen’s came in just outside the REF top 40 (42=) but with an excellent intensity rating (8=, 95% of eligible staff submitted).

[8] See, apart from Rank Hypocrisies, my articles “One scholar’s crusade against the REF,” Times Higher Education, 11 December, 34-6; “Time to abandon the gold standard? Peer Review for the REF Falls Far Short of Internationally Acceptable Standards,” LSE Impact of Social Sciences blog, 19 November (reprinted as “Problems with peer review for the REF,” CDBU blog, 21 November).


UPDATE.  A (much cheaper) Kindle edition of this book is coming soon, but it is not yet listed on Amazon or other sites.

Seems oddly appropriate that I should be celebrating my 64th birthday with a new book that attacks a centerpiece of the unholy alliance of neoliberalism and Old Corruption that has been running and ruining British universities for the last thirty years.  Published December 3, 2014.  Let’s hope it has “impact”! For more details see here.

Time to abandon the gold standard? Peer review for the REF falls far short of internationally accepted standards.

The REF2014 results are set to be published next month. Alongside ongoing reviews of research assessment, Derek Sayer points to the many contradictions of the REF. Metrics may have problems, but a process that gives such extraordinary gatekeeping power to individual panel members is far worse. Ultimately, measuring research quality is fraught with difficulty. Perhaps we should instead be asking which features of the research environment (a mere 15% of the assessment) are most conducive to a vibrant research culture and focus funding accordingly.  [LSE Impact Blog, 19 November 2014]

As a result of my posts on this blog last year relating to Britain’s Research Excellence Framework (see especially here and here), I was invited to write a short book to inaugurate the new “Sage Swifts” series.

Rank Hypocrisies: The Insult of the REF will be published on December 3, 2014: a couple of weeks before HEFCE is due to publish the REF results.

But today’s announcement that HEFCE is actively “exploring the benefits and challenges of expanding … the Research Excellence Framework (REF), on an international basis” with a view to “an extension of the assessment to incorporate submissions from universities overseas” suggests some advance publicity might not be untimely.  For the REF should come with a health warning.

What I find most chilling in today’s HEFCE announcement is the bald assertion (in the accompanying survey) that “The UK’s research assessment system has a positive international reputation, built on a methodology developed over more than 20 years.”

Rank Hypocrisies shows on the contrary that the procedures used to evaluate outputs by Britain’s REF panels make a mockery of peer review as understood within the international academic community.  Among the issues discussed are the narrow disciplinary remit of REF panels and their inability to evaluate interdisciplinary research, the risks of replication of entrenched academic hierarchies and networks inherent in HEFCE’s procedures for appointment of panel members, the utterly unrealistic volume of work expected of panelists, the perversity of excluding all external indicators of quality from many assessments, and the lack of competence of REF panels to provide sufficient diversity and depth of expertise to evaluate the outputs that fall under their remit.

The REF is a system in which overburdened assessors assign vaguely defined grades in fields that are frequently not their own while (within many panels) ignoring all external indicators of the academic influence of the publications they are appraising, then shred all records of their deliberations.  That HEFCE should now be seeking to extend such a “methodology” beyond Britain’s shores is risible.

*

Derek Sayer’s book is essential reading for all university researchers and research policy makers. It discusses the waste, biases and pointlessness of Britain’s Research Excellence Framework (REF), and its misuse by universities. The book is highly readable, astute, sharply analytical and very intelligent. It paints a devastating portrait of a scheme that is useless for advancing research and that does no better job at ranking research performance than do the global indexes but does so for a huge cost in time, money, duplication, and irritation. Anyone interested in research ranking, assessment, and the contemporary condition of the universities should read this book.

Peter Murphy, Professor of Arts and Society, James Cook University

Rank Hypocrisies offers a compellingly convincing critique of the research auditing exercise to which university institutions have become subject. Derek Sayer lays bare the contradictions involved in the REF and provides a forensic analysis of the problems and inconsistencies inherent in the exercise as it is currently constituted. A must read for all university academic staff and the fast multiplying cadre of higher education managers and, in particular, government ministers and civil servants in the Department of Business Innovation and Skills.

Barry Smart, Professor of Sociology, University of Portsmouth

Academics across the world have come to see the REF – and its RAE predecessor – as an arrogant attempt to raise national research standards that has resulted in a variety of self-inflicted wounds to UK higher education. Derek Sayer is the Thucydides of this situation. A former head of the Lancaster history department, he fell on his sword trying to deal with a university that behaved in an increasingly irrational manner as it tried to game a system that is fundamentally corrupt in both its conception and execution. Rank Hypocrisies is more than a cri de coeur. It is the best documented diagnosis of a regime that has distorted the idea of peer review beyond recognition. Only someone with the clear normative focus of a former insider could have written this work. Thucydides would be proud.

Steve Fuller, Auguste Comte Chair in Social Epistemology, Warwick University

Sayer makes a compelling argument that the Research Excellence Framework is not only expensive and divisive, but is also deeply flawed as an evaluation exercise. Rank Hypocrisies is a rigorous and scholarly evaluation of the REF, yet written in a lively and engaging style that makes it highly readable.

Dorothy Bishop, Professor of Developmental Neuropsychology and Wellcome Principal Research Fellow, University of Oxford

The REF is right out of Havel’s and Kundera’s Eastern Europe: a state-administered exercise to rank academic research like hotel chains – 2 star, 3 star – dependent on the active collaboration of the UK professoriate. In crystalline text steeped in cold rage, Sayer takes aim at the REF’s central claim, that it is a legitimate process of expert peer review. He provides a short history of the RAE/REF. He critiques university and national-level REF processes against actual practices of scholarly review as found in academic journals, university presses, and North American tenure procedures. His analysis is damning. If the REF fails as scholarly review, how can academics and universities continue to participate? And how can government use its rankings as a basis for public policy?

Tarak Barkawi, Reader in the Department of International Relations, London School of Economics

  More details of the book (which will be available in hardback and electronic formats) may be found here.

Professor Paolo Palladino, whose (so far unanswered) Open Letter to the Vice-Chancellor and management of Lancaster University following his exclusion from the 2014 REF was reported in my earlier post Kafkarna continues: REF gloves off at Lancaster University, has now written a long piece on the UCU RefWatch website on “Why the REF is bad for the very idea of the university.”  

This is how it ends:

“I have asked … for formal confirmation that I am not failing to meet any of my responsibilities as a member of the Department of History. I have also asked for confirmation that, in future years, the balance of my teaching, research and administration, as reflected in the workload allocation model, will not be outwith departmental norms, and that I will continue to benefit from the mechanisms within the Department of History and the Faculty of Arts and Social Sciences to support the engagement of individual staff in academic research and bids for external funding. No formal acknowledgment or response to the request has yet been received. I have spoken to my Head of Department about this and all that he could do was to smile knowingly about the absurdity of our predicament. In the meantime, my sense is that what will happen next, and, in some sense, this is already happening within the research councils and related charities, is that interdisciplinary research will be conflated evermore with multidisciplinary research, so that collaboration between academics in different disciplines will be regarded as delivering ‘interdisciplinary’ inquiry. There are far from insignificant costs to this semantic transformation because the individual scholar thus ceases to be the site of interdisciplinary inquiry and testing of the foundations upon which each discipline rests. Exercises such as REF are deceptive because what they reward is that which is familiar and conforms to the most widely shared expectations of what counts as knowledge, not that which challenges us to think deeply about who we are and what we do. In so doing, these exercises fail to live up to the very idea of the university and its one unflinching command to each one of us, to ‘dare to think’. 
I leave it to you to consider what might be the long-term implications of the failure to encourage such critical reflection among those students we are called upon to prepare for the challenge of creating a more just and more humane society.”

The full article can be accessed here.

The most sensible thing I’ve read on today’s UK universities strike day:

“I am a university lecturer. I teach English. I have been struggling of late to make sense of a workplace whose principles run counter to what I believe a university should be and what it should be for: the pursuit of learning, of research and scholarship into science, into society, into culture, of dissemination of knowledge that has a direct social and political function, an understanding of the world that helps people make better lives, personally and collectively: NOT a machine for making money, NOT a business, NOT a provider of services for customers, NOT a place which comes to represent the destructive and amoral principles of neo-liberal, marketised capitalism.

My own profession has been supine for far too long. It has stood by while its own members have been disciplined under RAE and REF, have been turned into entrepreneurs whose time is taken up with (increasingly futile) grant bids, who have been pacified and made grateful for a declining share in the fruits of their own productivity; who fought nowhere near hard enough against student loans, and their increase to £9000 a year; who fail to make common cause with their own student body and the administrative and support staff who enable their working lives.”

Read more here.

Thanks to Mark Jackson for bringing this excellent blog to my notice.  We need more like that.

Footnote.  Only after posting this did I realize that the blog’s author is a colleague at Lancaster University, albeit in a different department.  A curious coincidence.