Benjamin Tallis and Derek Sayer

Czech PM-designate Andrej Babiš (left) with Austrian People’s Party leader Sebastian Kurz. Wikimedia Commons.


The Iron Curtain may have been drawn back in 1989-1991, but you wouldn’t know it to read much of the commentary on the Czech parliamentary elections – and much recent commentary on ‘Eastern Europe’ more generally.

Much attention has been lavished on comparing Czech politician Andrej Babiš to Viktor Orbán, Jarosław Kaczyński, Donald Trump and, more plausibly, to Silvio Berlusconi, but this has obscured deeper problems in western analyses of the region. Many of the sins laid at the door of central and eastern Europeans are no less prevalent in western countries, but this is too often lost amidst enduring Cold War stereotypes.

In a recent Op-Ed typical of this trend, Jochen Bittner charged that across the Visegrad Group (Poland, Hungary, the Czech Republic, and Slovakia), “leading politicians agitate against the European Union, portraying it as an imposing, undemocratic force.”

This is true. But populist politicians across western Europe portray the EU in exactly the same way. Marine Le Pen in France and Geert Wilders in the Netherlands both promised their electorates referendums on EU membership in hopes of emulating Brexit, whose champion was the anti-establishment politician and Donald Trump ally Nigel Farage …


Read full article in Open Democracy.

For those interested, I have just posted the full text of my first book, Marx’s Method: Ideology, Science and Critique in Capital (1st edition, 1979), online. As with other out-of-print books I have posted on the same site (The Violence of Abstraction, and Philip Corrigan and Derek Sayer, The Great Arch), it is selling for ridiculous prices second-hand. All these texts can now be downloaded there free of charge.


Brexit is a portmanteau
word borrowed without asking
from the Greek for clusterfuck
like the Elgin Marbles

Brexit is my half-cut father
propping up the public bar
nothing against our colored cousins
so long as they stay where they are

Brexit is not hearing foreign languages on the High Street
Brexit is not hearing foreign languages on the bus
Brexit is not hearing foreign languages

again for ever and ever amen

Brexit is Morrissey
kissing Nigel Farage’s magnificent ass
Brexit is Johnny Rotten remembering he is
white working class

Brexit is the future fucked over
by a red white and blue

but mostly white

dream of the past

Keep Calm and
Carry On Up The Khyber


I’m sorry I have been off here for a year.  I have been retiring (from full-time employment at a university), traveling, writing, and moving continents (from the UK to Canada, where I am reacquainting myself with real winter in Calgary, Alberta).

If there is anybody still reading this blog, I’ve just posted a piece on Brexit and Trump, whose final version will be published next month in the Journal of Historical Sociology.  Please publicize it if you like it.

Here is the abstract:

Following the victories of the “Brexit” camp in the UK’s 2016 referendum and Donald Trump in the 2016 US presidential election, a new explanatory narrative rapidly established itself. According to this view, which has been widely accepted on both the political left and right, we are witnessing a popular revolt against “elites” spearheaded by white working-class “victims of globalization.” Drawing on extensive polling and census data, this paper debunks this new consensus as an artifact of post-factual politics driven by feeling rather than evidence. These were not instances of a “misshapen class struggle” that sometimes assumed racist or xenophobic forms, but centrally a race war on the non-native Other that has successfully managed to pass itself off as a revolt of the (white) deprived and dispossessed.




This is an Op-Ed piece I wrote for CEE New Perspectives, the companion blog of the academic journal New Perspectives which is published by the Institute of International Relations (IIR) in Prague.  I reproduce it here with permission.





Actually, my real album of the year was Butch Hancock’s The Wind’s Dominion, which was recorded back in 1979. I heard it for the first time only this year, after stumbling across an old vinyl copy in Reckless Records in Soho (London), and I couldn’t stop playing it.

I first came across Butch Hancock as one of the legendary West Texas band the Flatlanders, along with Joe Ely and Jimmie Dale Gilmore, three high school buddies from Buddy Holly’s hometown of Lubbock, TX, who headed to Austin to escape Jesus and Prohibition. Ely later toured with the Clash. We saw Joe touring with Terry Allen and Ryan Bingham at the New York City Winery a few years back, hunted down Jimmie Dale performing at Lucy’s Fried Chicken at SXSW 2014, and saw Butch hosting the annual Townes Van Zandt celebration at the Cactus Café in the Texas Union building at UT the same year.

Butch sang “The wind’s dominion” at Alejandro Escovedo’s “United Sounds of Austin” at the ACL Moody Theater on January 11, 2014. Joe Ely, Lucinda Williams, Rosie Flores, Terry Allen, Kimmie Rhodes, and “the situation we know as Roky Erickson” were among the many other contributors to an evening that showed why Austin bills itself as the world capital of live music.

“The Wind’s Dominion” album has been called “the West Texas Blonde on Blonde.” Enough said.

Unfortunately Hancock’s surreal masterpiece (check “Mario y Maria [cryin’ statues/spittin’ images]” or “Long road to Asia Minor”) can’t be included in my albums of the year because the qualification is that the album has to have been released—though not necessarily recorded—for the first time in 2015.  So here goes.  They’re all very good indeed.


The top ten

1  Courtney Barnett—Sometimes I sit and think and sometimes I just sit

2  Benjamin Clementine—At least for now

3   Jason Isbell—Something more than free

4   Pops Staples—Don’t lose this

5   Bob Dylan—Shadows in the night

6   Titus Andronicus—The most lamentable tragedy

7   Kacey Musgraves—Pageant material

8   Ashley Monroe—The blade

9   Shovels and Rope—Busted jukebox volume 1

10   Sleater-Kinney—No cities to love


Honorable mentions

Keith Richards—Crosseyed heart

Iris Dement—The trackless woods

Drive By Truckers—Great to be alive!

Neil Young—Bluenote Café

Kamasi Washington—The epic


Hors de concours

Bob Dylan—The cutting edge 1965-1966

Not by Derek Sayer.  Another excellent piece from Liz Morrish, comparing the scandalous workplace regime at Amazon exposed in today’s New York Times with what is rapidly becoming the new normal in UK universities (and likely to only get worse with the introduction of the TEF to complement the REF).

Academic Irregularities

The photo above made me start contemplating the intrusion of a repressive disciplinary culture into UK universities. Disciplinary action for tailgating? Whatever happened to having a quiet word with somebody? Just a few years ago, campus security was left in the capable hands of a few retirees from the services and the police. They knew academics and students by name, and exerted a calm authority refined through years of dealing with minor infractions. Now, a mere parking violation incurs a meeting with HR.

Many of us will be aware of new university policies on disciplinary procedures. If we have read them, we will be aware that the policies themselves are often not in the least repressive or out of kilter with professional expectations. It is when these policies intersect with over-zealous performance management procedures that things get troublesome – I have previously blogged about so-called under-performing professors.

So when I…


“REF 2014 cost almost £250 million,” Times Higher Education recently reported. This is the first official figure for the exercise, taken from the REF Accountability Review: Costs, benefits and burden published by HEFCE on July 13. While this is much lower than Robert Bowman’s “guesstimate” of £1bn (which I personally believe to be based on more realistic costings of staff time),[1] it is still over four times HEFCE’s previously announced costs for RAE2008 of £47m for Higher Education Institutions [HEIs] and £12m for HEFCE. This increase should raise eyebrows, since the REF promised to reduce the costs of the RAE for HEIs. We are “as committed to lightening the burden as we are to rigour in assessing quality,” then HEFCE Chief Executive David Eastwood assured the sector back in November 2007.

It would be nice to know why the costs of submitting to the REF have risen so astronomically. We might also ask whether this huge increase in REF costs for HEIs has delivered remotely commensurate benefits.

How much more did REF2014 cost than RAE2008?

The REF Accountability Review calculates the total cost of REF2014 at £246m, of which £14m fell on the funding bodies (HEFCE and its counterparts for Scotland, Wales, and Northern Ireland) and £232m on HEIs. Around £19m of the latter (8%) was for REF panelists’ time—a figure I suggest is either a serious underestimate or the best indication we could have that the REF is not the “rigorous” process of research evaluation it purports to be.[2]  This leaves £212m (92%) as the cost of submission. The review accepts an earlier Rand Europe estimate of the cost of preparing impact submissions as £55m, or 26% of the £212m (p. 6). “All other costs incurred by HEIs” totaled £157m (p. 1).

Believing the £47m figure for RAE 2008 to be “a conservative estimate” (p. 17), the review revises it upward to £66m. Working with this new figure and discounting the cost of impact case studies (on grounds that they were not required in RAE2008) the review concludes: “the cost of submitting to the last RAE was roughly 43% of the cost of submitting to the REF” (p. 2). Or to put it another way, submission costs for REF2014 were around 238% of those for RAE2008, without taking into account the added costs of impact submissions.

The brief of the review was to consider “the costs, benefits and burden for HEIs of submitting to the Research Excellence Framework (REF)” (p. 4). While one can see the logic of excluding the cost of impact submissions for purposes of comparison, the fact remains that HEIs did incur an additional £55m in real costs of preparing impact submissions, which were a mandatory element of the exercise. If impact is included in the calculation, as it should be, REF2014 submission costs rise to around 321% of the RAE2008 figure. In other words, the REF cost HEIs more than three times as much as the last RAE.
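The percentage comparisons above can be checked directly from the review’s figures. A quick sanity check (all amounts in £m, exactly as quoted in the text; nothing here goes beyond the cited numbers):

```python
# Sanity check of the cost figures quoted above from the REF Accountability Review
# (all amounts in £m, as cited in the text).
total = 246            # total cost of REF2014
funding_bodies = 14    # HEFCE and its counterparts
heis = 232             # cost falling on HEIs
impact = 55            # cost of preparing impact submissions
other = 157            # all other HEI submission costs
rae2008 = 66           # the review's revised estimate for RAE2008

assert funding_bodies + heis == total

# Excluding impact, REF2014 submission costs relative to RAE2008:
print(f"{other / rae2008:.0%}")            # 238% of the RAE2008 figure

# Including the mandatory impact element (£212m total submission cost):
print(f"{(other + impact) / rae2008:.0%}") # 321%, i.e. more than three times
```

(The review’s “roughly 43%” is simply the reciprocal of the first ratio, rounded.)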

Why did REF2014 cost so much more than RAE2008?

In its comparison of the costs of the REF and the RAE the review lists a number of major changes introduced for REF2014 including reduction of the number of Units of Assessment [UOAs], revisions of definitions of A and C category staff, and introduction of an environment template (p. 14, figure 3). The 20 HEIs surveyed indicated that some of these had resulted in a decrease in costs while others were cost-neutral or entailed a “moderate increase” (less than 20%). Only two changes are claimed to have incurred a “substantial increase” (more than 20%) in costs.

“Interviewees and survey respondents suggested REF was more costly than RAE,” the review reports, “mainly because of the inclusion of the strand to evaluate the non-academic impact of research” (p. 12, my emphasis). Nearly 70% of respondents reported a substantial increase and another 20% a moderate increase in costs due to impact (p. 13). However, the REF Accountability Review’s own figures show that this perception is incorrect. Impact only represented 26% of HEIs’ £212m submission costs. After subtracting £55m for impact (and discounting REF panelists’ time) the cost of preparing submissions still rose from £66m to £157m between RAE2008 and REF2014, an increase of approximately £91m, taking it to roughly 238% of the RAE2008 figure.

The review also singles out “the strengthening of equality and diversity measures, in relation to individual staff circumstances” as a factor that “increased the total cost of submission for most HEIs” (p. 12). This is the only place in the report where an item is identified as “a disproportionately costly element of the whole process” (pp. 2, 21, my emphasis). But if anything it is the attention devoted to this factor in the review that is disproportionate (and potentially worrying insofar as the review suggests “simplification” of procedures for dealing with special circumstances, p. 3).

While the work of treating disabled, sick, and pregnant employees equitably may have been “cumbersome” for HEI managers (p. 2), dealing with special circumstances “took an average 11% of the total central management time devoted to REF” and “consumed around 1% of the effort on average” at UOA level (p. 21). This amounts to £6m (or 4%) of HEIs’ £157m non-impact submission costs—a drop in the ocean.

We are left, then, with an increase in HEI submission costs between RAE2008 and REF2014 of around £85m that is not attributable to changes in HEFCE’s formal submission requirements.  To explain this increase we need to look elsewhere.

The review divides submission costs between central management costs and costs at UOA level. Of £46m central costs (excluding impact), £44m (or 96%) was staff costs, including the costs of REF management teams (56%) and time spent by senior academics on steering committees (18%). UOA-level costs were substantially greater (£111m). Of these, “87% … can be attributed to UOA review groups and academic champions and to submitted and not submitted academic staff” and £8m to support staff (p. 7). Staff time was thus overwhelmingly the most significant submission cost for HEIs.

When we look at how this time was spent, the review says “the REF element on research outputs, which included time spent reviewing and negotiating the selection of staff and publications” was “the main cost driver at both central management level and UOA level” (p. 2). At central management level this “was the most time-consuming part of the REF submission” (p. 18), with outputs taking up 40% of the time devoted to the REF. At UOA level 55% of the time devoted to REF—”the largest proportion … by a very substantial margin”—was “spent on reviewing and negotiating the selection of staff and publications” (p. 19). For HEIs as a whole, “the estimated time spent on the output element as a proportion of the total time spent on REF activities is 45% (excluding impact)” (p. 17).

The conclusion is inescapable. The principal reason for the increased cost of REF2014 over RAE2008 was NOT impact, and still less special circumstances, but the added time spent on selecting staff and outputs for submission.

Why did selecting staff and outputs cost so much more in REF2014?

The review obliquely acknowledges this in its recognition that larger and/or more research intensive HEIs devoted substantially more time to REF submission than others (p. 10) and that “several HEIs experienced submitting as particularly costly because preparing for the REF was organized as an iterative process. Some HEIs ran two or three formal mock REFs, with the final [one] leading directly into the REF submission” (p. 27). For institutions that used them (which we can assume most research intensives did) mock REFs were “a significant cost” (p. 2).

But the review nowhere explains why this element should have consumed so much more staff time in REF2014 than it did in RAE2008. The most likely reason for the increase lies in a factor the review does not even mention: HEFCE’s changes to the QR funding formula (which controls how money allocated on the basis of the REF is distributed) in 2010-11, which defunded 2* outputs and increased the value of 4* outputs relative to 3* from 7:3 to 3:1. At that point, in the words of Adam Tickell, who was then pro-VC for research at the University of Birmingham, universities “had no rational reason to submit people who haven’t got at least one 3* piece of work.” More importantly, they had an incentive to eliminate every 2* (or lower) output from their submissions because 2* outputs would lower their ranking without any compensatory gain in income. Mock REF processes were designed for this purpose.
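The incentive Tickell describes can be made concrete with a toy calculation. The 4*:3* weights (7:3 before, 3:1 after, with 2* defunded) are as stated above; the pre-2010 weight of 1 for a 2* output is an assumption for illustration only, as is the hypothetical four-output submission:

```python
# Illustrative sketch of the incentive created by the 2010-11 QR formula change
# described above. Funding weights per output by star rating; the pre-2010
# weight of 1 for a 2* output is an assumption for illustration.
old_weights = {4: 7, 3: 3, 2: 1, 1: 0}
new_weights = {4: 3, 3: 1, 2: 0, 1: 0}

def funding_score(outputs, weights):
    """Funding-relevant score of a submission: sum of per-output weights."""
    return sum(weights[stars] for stars in outputs)

def gpa(outputs):
    """Grade point average, the basis of REF-derived league tables."""
    return sum(outputs) / len(outputs)

submission = [4, 3, 3]          # a hypothetical submission with no 2* work
with_2star = submission + [2]   # the same submission plus one 2* output

# Under the new weights the extra 2* output earns nothing...
print(funding_score(submission, new_weights), funding_score(with_2star, new_weights))
# ...but it drags down the GPA on which rankings are based:
print(gpa(submission), gpa(with_2star))
```

Under the old weights the 2* output at least added something to the funding score; under the new ones it can only hurt, which is exactly the behavior mock REFs were designed to screen out.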

The single most significant driver of the threefold increase in costs between RAE2008 and REF2014 was not the introduction of impact or any other change to HEFCE’s rules for submission but competition between universities driven by HEFCE’s 2010-11 changes to the QR funding formula. The key issue here is less the amount of QR money HEIs receive from HEFCE than the prestige attached to ranking in the league tables derived from REF evaluations. A relatively small difference in an institution’s GPA can make a significant difference in its ranking.

The widespread gaming that has resulted has only served to further discredit the REF. Does anybody believe Cardiff University’s boast that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise that confirms our place as a world-leading university” (my emphasis), when Cardiff actually achieved that rank by entering only 62% of eligible staff in its submission? This percentage is lower than all but one of the 28 British universities listed in the top 200 in the 2014-15 Times Higher Education World University Rankings (in which Cardiff is ranked a respectable but hardly “world-leading” 201st-225th, or joint 29th among the British institutions). As I have shown elsewhere, this is a systematic pattern that profoundly distorts REF results.

HEFCE would no doubt say that it does not produce or endorse league tables produced on the basis of the REF.  But to act as if it therefore had no responsibility for taking into account the consequences of such rankings is disingenuous in the extreme.

This competition between the research intensive universities is only likely to intensify given HEFCE’s further “tweaking” of the QR funding formula in February 2015, which changed the weighting of 3* to 4* outputs from 3:1 to 4:1.

Is the REF cost efficient?

The review defends the increase in costs between RAE2008 and REF2014 on the grounds that the total expenditure remains low relative to the amount of money that is allocated on its basis. We are reassured that REF costs amount to “less than 1%” of total public expenditure on research and “roughly 2.4% of the £10.2 billion in research funds expected to be distributed by the UK’s funding bodies” over the next six years (p. 1). This is spin.

The ratio of REF costs to total expenditures on research funding is irrelevant since Research Councils (who distribute a larger portion of the overall public research funding budget) allocate grants to individuals and teams, not HEIs, through a competitive process that has nothing to do with the REF. The ratio of REF costs to QR funding allocated through HEFCE and the other funding bodies is more relevant but the figure given is inaccurate, because QR expenditures that are not allocated through the REF (e.g. the charity support fund and business research element) are included in the 2.4% calculation. Once these are excluded the figure rises to around 3.3%. By comparison, “the funding bodies estimated the costs of the 2008 RAE in England to be around 0.5% of the value of public research funding that was subsequently allocated with reference to its results” (p. 5). If this is supposed to be a measure of cost-efficiency, REF2014 scores very much worse than RAE2008.
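A back-of-envelope check of these ratios. The only assumption here is that the funding bodies’ six-year total is roughly £10.2bn, the base on which a £246m cost comes out at the quoted 2.4%; the implied REF-allocated base behind the 3.3% figure is then derived, not taken from the review:

```python
# Back-of-envelope check of the cost-efficiency ratios discussed above.
# Assumption: the funding bodies' six-year total is ~£10.2bn, the base on
# which a £246m cost comes out at roughly 2.4%.
ref_cost_bn = 0.246
funds_bn = 10.2

print(f"{ref_cost_bn / funds_bn:.1%}")   # 2.4% of all funding-body allocations

# The ~3.3% ratio for REF-allocated QR funding alone implies a base of:
implied_base_bn = ref_cost_bn / 0.033
print(f"£{implied_base_bn:.1f}bn")       # ~£7.5bn once non-REF streams are excluded

# Against RAE2008's ~0.5%, the REF ratio is more than six times worse:
print(0.033 / 0.005)
```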

“The cost and burden of the REF,” says the review, “should be the minimum possible to deliver a robust and defensible process.” “Changes new to REF 2014,” it adds, “have been adopted where it is judged they can bring demonstrable improvements which outweigh the cost of implementing them” (p. 5, my emphasis).

The review does attempt to make the case that the benefits of including impact in the REF exceeded the additional costs of measuring it, noting that “it yielded tremendous insight into each institution’s wider social and economic achievements and was widely welcomed as both a platform for marketing and internal learning” (pp. 2-3).[3]  Otherwise the major benefits claimed for the REF—reputational dividend, provision of information that can be used for performance management and forward planning, etc.—are exactly the same as those previously claimed for the RAE.

It is notable that the review nowhere attempts to justify the single most important factor in the additional costs of REF2014 as compared with RAE2008, which was hugely greater time spent on staff selection driven by competition between HEIs.

Many have argued that the human costs of this competition are inordinately high (the review confesses that it “does not include an estimate of non-time related burdens on staff, such as the stress on staff arising from whether they would be selected for the REF,” p. 4). What is clear from the REF Accountability Review is that there is an exorbitant financial cost as well—both in absolute terms and in comparison to the RAE.

When considering the cost-effectiveness of the exercise, we would do well to remember that the considerable sums of money currently devoted to paying academics to sit on committees to decide which of their colleagues should be excluded from the REF, in the interest of securing their university a marginal (and in many cases misleading) advantage in the league tables, could be spent in the classroom, the library, and the lab.  The QR funding formula has set up a classic prisoner’s dilemma, in which what may appear to be “rational” behavior for ambitious research-intensive HEIs has increasingly toxic consequences for the system as a whole.


[1] The review relies on figures for staff time spent on the REF provided by HEI research managers. It specifically asks managers to distinguish “between REF-related costs and normal (“business as usual”) quality assurance and quality management arrangements for research” (p. 11) and exclude the latter from their responses. I believe this distinction untenable insofar as UK HEIs’ university-level “arrangements” for “quality assurance and quality management” in research only take the forms they do because they have evolved under the RAE/REF regime. Where national research assessment regimes do not exist (as in the US), central management and monitoring of research is not “normal” and “business as usual” looks very different. For example, it is far less common to find “research directors” at department level.

[2] The review’s figures of 934 academic assessors, each spending 533 hours (or 71 days) to assess 191,950 outputs, yield an average time of 2.59 hours available to read each output. From this we must deduct (1) time spent reading the 7000 impact case studies (avg. 7.5 per assessor) and all other elements of submissions (environment templates, etc.), and (2) time spent attending REF panel meetings. I would suggest this brings the time available for assessing each output down to under two hours. If we further assume that each output is read by a minimum of two assessors, individual assessors will spend, on average, under an hour reading each output. I would argue—especially in the case of large research monographs in the humanities and social sciences—that this is not enough to deliver the informed, “robust” assessment HEFCE claims (even assuming individual outputs are read by panelists with expertise in the specific area, which will often not be the case). Anyone reviewing a journal article submission, a manuscript for a book publisher, or a research grant application would expect to spend a good deal more time than this. In this context, the figure given in the review for the higher ratio of Research Council costs relative to the funds allocated on their basis (6%) may not indicate the superior cost efficiency of the REF, as the review implies (p. 1), so much as the relative lack of rigor of its evaluations of outputs.
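The arithmetic in this note can be reproduced directly from the review’s figures:

```python
# Reproducing the assessor-time arithmetic of note [2] from the review's figures.
assessors = 934
hours_per_assessor = 533     # roughly 71 working days each
outputs = 191_950
case_studies = 7_000

total_hours = assessors * hours_per_assessor
print(round(total_hours / outputs, 2))     # 2.59 hours per output, before deductions
print(round(case_studies / assessors, 1))  # 7.5 impact case studies per assessor

# If each output is read by at least two assessors, the average reading
# time per assessor per output halves:
print(round(total_hours / outputs / 2, 2)) # under 1.3 hours, before meetings etc.
```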

[3] I have serious doubts as to the ways in which these perceived advantages (from the point of view of university managers) may come to drive research agendas at the expense of basic research, but that is not relevant to the present argument.