[Cover image: The Great Arch]

Philip Corrigan and I published The Great Arch in 1985.  It was an iconoclastic book, which met with decidedly mixed reviews.  The core of the argument, developed through a narrative of English history spanning the tenth to the early twentieth centuries, was this:

Moral regulation is coextensive with state formation, and state forms are always animated and legitimated by a particular moral ethos.   Centrally, state agencies seek to give unitary and unifying expression to what are in reality multifaceted and differential historical experiences of groups within society, denying their particularity.  The reality is that bourgeois society is systematically unequal, it is structured along lines of class, gender, ethnicity, age, religion, occupation, locality.  States act to erase the recognition and expression of these differences through what should properly be conceived of as a double disruption.

            On the one hand, state formation is a totalizing project, representing people as members of a particular community—an “illusory community,” as Marx described it.  This community is epitomized as the nation, which claims people’s primary social identification and loyalty (and to which, as is most graphically illustrated in wartime, all other ties are subordinated).  Nationality, conversely, allows categorization of “others”—within as well as without (consider the House Un-American Activities Committee during the McCarthyite era in the United States, or Margaret Thatcher’s identification in 1984 of striking miners as “the enemy within”)—as “alien.”  This is a hugely powerful repertoire and rhetoric of rule.  On the other hand, as Foucault has observed, state formation equally (and no less powerfully) individualizes people in quite definite and specific ways.  We are registered within the state community as citizens, voters, taxpayers, ratepayers, jurors, parents, consumers, homeowners—individuals.  In both aspects of this representation alternative modes of collective and individual identification (and comprehension), and the  social, political and personal practices they could sustain, are denied legitimacy.  One thing we hope to show in this book is the immense material weight given to such cultural forms by the very routines and rituals of state.   They are embodied in the former and broadcast in the latter, made to appear as—to quote Herbert Butterfield on the Whig interpretation of history—”part of the landscape of English life, like our country lanes or our November mists or our historic inns.” 

While I would now (of course) no longer endorse all of the historical specifics of the analysis Philip and I presented the best part of thirty years ago, I see little to quarrel with in this as a starting-point for demystifying “the state” and what is done to people under its auspices and in its name.  The omission of different sexualities as a key axis of structured inequality, maybe.  But anybody interested in where I would nowadays want to qualify the (overly) grand narrative of English history offered in The Great Arch might want to look here.

For good or ill, The Great Arch has had considerable influence on thinking about “the state” across humanities and social science disciplines over the last three decades.  Though Blackwell issued a (very small print-run) second edition in 1991, the book has long been out of print and copies sell for ridiculous prices secondhand (one is listed today on amazon.co.uk for £336.64).  Since the rights have now reverted to the authors, I decided to put it up on academia.edu, where it can be legally downloaded free.  The text is that of the first (1985) edition.

Update, October 16.  I have censored this post at the insistence of Professor Trevor McMillan, Pro Vice-Chancellor (research) at Lancaster University.  I indicate passages that have been altered or removed by angle brackets <>.  

Times Higher Education recently reported that at Leicester University, “The position of all staff eligible for the [2014] REF but not submitted will be reviewed. Those who cannot demonstrate extenuating circumstances will have two options. Where a vacancy exists and they can demonstrate ‘teaching excellence’, they will be able to transfer to a teaching-only contract. Alternatively, they may continue on a teaching and research contract subject to meeting ‘realistic’ performance targets within a year.  If they fail to do so, ‘the normal consequence would be dismissal on the ground of unsatisfactory performance’” (08/08/2013, see full article here).   We live in interesting times.

1.   How it works—the context and the stakes

Having worked in North America from 1986 to 2006, when I took up a Chair in Cultural History at Lancaster University, I have spent most of my academic career in blissful ignorance of the peculiarly British institution that used to be called the RAE (Research Assessment Exercise) but has recently—and in good Orwellian fashion—been renamed the REF (Research Excellence Framework).   Many of my UK colleagues have known no academic life without such state surveillance.  But in most other countries, decisions on university funding are made without going through this time-consuming, expensive, and intellectually questionable audit of every university’s “research outputs” every five or six years.   Their research excellence has not conspicuously suffered in its absence.

The United States—whose universities currently occupy 7 of the top 10 slots in the THE World University Rankings—has no equivalent of the RAE/REF. This does not mean that research quality, whether of individuals or of schools and departments, is not evaluated.  It is continually evaluated, as anybody who has been through the tenure and promotion processes at a decent North American university, which are generally far more arduous (and rigorous) than their UK counterparts, will know.  But the relevant mechanisms of evaluation are those of the profession itself, not the state.  The most important of these are integrally bound up with peer-reviewed publication in top-drawer journals or (in the case of monographs) with leading university presses.  Venue of publication can be treated as an indicator of quality because of the rigor of the peer review processes of the top academic journals and publishers, and their correspondingly high rates of rejection.  Good journals typically use at least two expert reviewers per manuscript (to counteract possible bias), and the review process is “double-blind”—i.e., the reviewer does not know the author’s identity and vice versa.  After an article or book has been published, citations and reviews provide further objective indicators of a work’s impact on an academic field.  The upshot is a virtuous (or, depending on your point of view, vicious) circle in which schools like CalTech, MIT, Princeton, or Harvard can attract the best researchers, as evidenced mainly by their publication records, who will in turn bring further prestige and research income to those schools, maintaining their pre-eminence.

What immediately strikes anyone accustomed to the North American academy about the British RAE/REF is that at least in the humanities—which are my concern here—such quasi-objective indicators of research quality have been purposely ignored, in favor of entirely subjective evaluative procedures at every level.[1]  All those entered in the 2014 REF by their universities are required to submit four published “outputs” to a disciplinary sub-panel, whose members will then read and grade these outputs on a 4-point scale.  These scores account for 60 per cent of the overall ranking given to each “unit of assessment” (UoA), which will usually, but not always, be a university department.  The other 40 per cent comes from scores for “environment” (which includes PhD completions, external research income, conferences and symposia, marks of “esteem,” etc.) and “impact” (on the world at large, whose measurement has been the subject of considerable, and more or less entertaining, debate), assigned by the same sub-panel.
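
To make the stated weighting concrete, here is a minimal sketch in Python. The 60/40 split is the one described above; the component scores are invented purely for illustration and do not correspond to any actual submission.

    # Hypothetical UoA profile; the weights follow the 60/40 split described above.
    outputs_score = 3.1            # graded quality of submitted outputs (0 to 4 scale)
    environment_and_impact = 2.8   # combined "environment" and "impact" scores (0 to 4 scale)

    overall = 0.60 * outputs_score + 0.40 * environment_and_impact
    print(round(overall, 2))       # 2.98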

Disciplinary sub-panels have some latitude in how they evaluate outputs, and in the natural sciences the standing of journals and the number of citations are likely to be seen as important indicators of quality.  The History Sub-panel has made it clear that it will not take into account venue of publication, citations, or reviews—so an article published in the American Historical Review will be treated exactly the same as something posted on a personal website, as long as it is a published work.  The panel has also indicated that as a rule—and unlike with a typical book or journal article in the “real-world” peer review process that the panel has chosen to ignore—only one member of the panel will read each output.  While attempts will be made to find the best fit between the submitted outputs and the panel members’ personal scholarly expertise, in view of the size of the panel and the volume of material to be read this can by no means always be guaranteed.

In sum, in History at least—and I suspect across the humanities more generally—60 per cent of every UoA’s ranking will be dependent on the subjective opinion of just one panel member, who may very well not be an expert in the relevant field.   The criteria the panel intends to use for scoring outputs’ quality are (1) originality, (2) significance, and (3) rigor.  It is beyond me to see how competent judgments on these can be made by anyone who is not an expert in the field.   It is easy, on the other hand, to see why every university in the land was so desperate to get its people on REF committees.

For the stakes are high.  The aggregate REF score for each UoA determines (1) the ranking of that UoA relative to others in the same discipline across the country, and (2) the amount of research funding the university will receive for that UoA from the Higher Education Funding Councils until the next REF.   Both are critical for any school that has aspirations to be a research university, since—as all good sociologists know—what is defined as real is real in its consequences.

2.   What’s new in REF 2014—University-level assessment of outputs

In this context, it is important to note that—unlike in earlier RAEs—outputs scored below 3* now attract no financial reward.  In previous RAEs it was in the interests of all universities to submit as high a proportion as possible of eligible staff with the intention of benefiting from a multiplier effect, and university websites boasted of the high proportions of their faculty that were “research active” as indexed by submission in successive RAEs.  In the REF, outputs scored below 3* (3* being defined as “Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence”) now dilute ranking relative to competitor institutions without any compensating gain in income.  The last thing any UoA wants is for the GPA that would otherwise be achieved by its 3* and 4* outputs to be reduced by a long “tail” of 2* and 1* valuations.
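
To see the arithmetic behind this, here is a minimal sketch in Python. The scores are invented for illustration, and 3* and 4* outputs are simply counted as “funded,” which abstracts from the actual funding weights applied by the funding councils.

    # Illustrative figures only, not real REF data.
    def gpa(scores):
        """Grade point average of a set of output scores on the 0 to 4 star scale."""
        return sum(scores) / len(scores)

    def funded_outputs(scores):
        """Only outputs scored at 3* or above attract any funding under REF 2014."""
        return sum(1 for s in scores if s >= 3)

    selective = [4, 4, 3, 3, 3, 3]        # submit only the strongest work
    inclusive = selective + [2, 2, 1]     # add a "tail" of 2* and 1* outputs

    print(gpa(selective), funded_outputs(selective))   # about 3.33, 6
    print(gpa(inclusive), funded_outputs(inclusive))   # about 2.78, 6

The tail lowers the grade point average from roughly 3.33 to 2.78 while adding nothing to the funded total, which is precisely the incentive to be selective described below.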

From the point of view of maximizing financial and reputational returns from the REF, the rational strategy for every research university is to exclude outputs ranked at 2* (“Quality that is recognised internationally in terms of originality, significance and rigour”) or below from entering a UoA’s submission, even if this means that fewer faculty are entered in the REF.  Sub-panels have also urged universities to be more selective in their submissions than in earlier RAEs, to ease their own workload.  All that material to read, often out of their fields of personal interest and expertise.  Why not sub-contract out the routine stuff?

Individual universities have responded differently to these pressures (and many, including Lancaster, have been understandably cagey about their plans).  But it now seems pretty clear that most, if not all, schools with ambitions to be regarded as research universities are going to be far more selective than in previous RAEs in who they submit.  Lancaster warns on its internal staff website:

Lancaster University is aiming to maximise its ranking in each UoA submission so final decisions [on who is submitted] will be based on the most advantageous overall profile for the University. It is anticipated that the number of staff submitted to REF2014 will be below the 92% submitted for RAE2008 and not all contractually REF-eligible staff will be submitted.

Rumor has it that the target figure for submission may in fact be as low as 65%, but that is, of course, only rumor.

Ironically, one of the reasons first advanced by the funding councils for shifting from the RAE to the REF format was “to reduce significantly the administrative burden on institutions in comparison to the RAE.”  The exact opposite has now happened.  Universities have had little alternative but to devise elaborate internal procedures for judging the quality of outputs before they are submitted for the REF, in order to screen out likely low-scoring items.  Many schools have had full-scale “mock-REF” exercises, in Lancaster’s case a full two years before the real thing, on the basis of which decisions about who will or will not be submitted in 2014 are being made.  The time and resources—which might otherwise have been spent actually doing research—that have been devoted to these perpetual internal evaluation exercises, needless to say, have been huge.

But more important, perhaps, is the fact that the whole character of the periodic UK research audit has significantly changed with the shift from the RAE to the REF, in ways that both jeopardize its (already dubious) claims to rigor and objectivity and could seriously threaten the career prospects of individuals.  For the nationally uniform (at least within disciplines) and relatively transparent processes for evaluating outputs by REF Sub-panels are now supplemented by the highly divergent, frequently ad hoc, and generally anything but transparent internal procedures for prior vetting of outputs at the level of the individual university.   A second major irony of the REF is that if people at Leicester—or elsewhere—are fired or put on teaching-only contracts because they were not entered in the university’s submission, the assessments of quality upon which that decision was taken will have been arrived at entirely outside the REF system.

In previous RAEs all decisions on the quality of outputs were taken by national panels composed of established scholars in the relevant discipline and constituted with some regard to considerations of representativeness and diversity, even if (as I have argued above) their evaluative procedures still left much to be desired.  In the REF, key decisions on quality of outputs are now taken within the individual university, with no significant external scrutiny, before these outputs can enter the national evaluative process at all.

3.  Down on the ground—enter Kafka

As I said earlier, most universities are playing their cards very close to their chests on what proportion of faculty they intend to enter into the REF and how they will be chosen.  I therefore cannot say how typical my own university’s procedures are.  I have no reason to think they are especially egregious in comparison to others’, but they still, in my view, give considerable cause for concern.

The funding councils require every university to have an approved Code of Practice that ensures “transparency and fairness in the decision making process within the University over the selection of eligible staff for submission into the REF.”   Lancaster’s Code of Practice is published on the university website.  It is long on promises and short on detail, with all the wriggle-room one expects of such documents.  It says: “Decisions regarding the University submission to the REF will lie with the Vice-Chancellor on the advice of the REF Steering Group.  No other group will be formally involved in the selection of staff to be returned.”[2]  The Steering Group (whose composition is specified in an appendix) is tasked inter alia with adopting “open and transparent selection criteria,” ensuring “that selection for REF submissions do not discriminate on the grounds of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation,” and detailing “an appeal process that can be used by all members of eligible staff in order to seek further consideration for submission.”  All good stuff, though it is not discrimination on these grounds that most of my colleagues are worried about so much as the university’s ability to come up with procedures that will deliver informed and fair evaluations of their diverse work.  But what actually happens?

History department members were asked to identify four outputs for potential submission to the REF together with one or more “reserves.”  These items were then all read by a single external reader, who was originally hired by the Department in an advisory capacity as a “critical friend”—<removed>—but who has since been appointed by the university as an external assessor for REF submissions.   This assessor ranks every potential History UoA output on the four-point REF scale.  As far as I understand the procedure,[3] his ranking will be accepted by a Faculty-level Steering Group as definitive, unless he himself indicates that he does not feel competent to evaluate a specific output.  In that case—and in that case only—the output will be sent for a second reading by an independent specialist.   We are not told who chooses second readers or on what criteria; and colleagues have not been consulted on who might appropriately be approached for informed and objective assessments of their work.  All specialist readers remain anonymous.[4]

History is an enormously diverse discipline, in terms of chronology (at Lancaster we have historians of all periods from classical antiquity to the 20th century), geography (we have historians of Britain, several parts of Europe, the Middle East, India, South-East Asia, the Caribbean, and the United States), subject matter (economic, political, social, cultural, etc.), and methodology.  To expect any one historian, no matter how eminent, to be able to judge the quality of research across all these fields is absurd.   At least when outputs are submitted to REF Sub-panels there is a reasonable chance that the person who winds up grading them might have some expertise in or at least knowledge of the field.  The History Sub-panel has 24 members and a further 14 assessors.  Lancaster’s procedure, by contrast, guarantees that decisions as to whether individuals are submitted in the REF at all depend, in most cases, on the recommendation of the same individual, who is very likely not to be technically qualified to evaluate the quality of the work in question.  This is not a criticism of this particular assessor; no one individual could possibly be qualified to assess more than a minority of outputs, given the diversity of the historical research produced within the UoA.  Again, the relevant contrast is with peer reviewers for journals, who are chosen by the editors precisely for their specialist competence to assess a specific manuscript—not for their general eminence in the profession, or their experience of sitting on REF panels.[5]

I find it poignant that so cavalier an attitude toward evaluating the research of colleagues should be adopted in a university that requires external examiners for PhDs to be “an experienced member of another university qualified … to assess the thesis within its own field” and—unlike in North America—also requires all undergraduate work to be both second-marked internally and open to inspection by an external examiner before it can count toward a degree.   Why are those whose very livelihood depends on their research—and its reputation for quality—not given at least the same consideration as the students they teach?

A particular casualty of this approach is likely to be research that crosses disciplines—something, in other contexts, both Lancaster University and the Research Councils have been keen to pay lip-service to—or that is otherwise not mainstream (and as such may be cutting-edge).  Given the stakes in the REF, it always pays universities to be risk-averse.   Outputs may be graded below 3* not because their quality is in doubt but because they are thought to be marginal or unconventional with regard to the disciplinary norms of the REF Sub-panel, and what may be the most adventurous researchers excluded from the submission.

Though there is a right of appeal for individuals who feel they have been unfairly treated in terms of application of the procedure, Lancaster’s Code of Practice is crystal clear that: “The decision on the inclusion of staff to the REF is a strategic and qualitative process in which judgements are made about the quality of research of individual members of staff.  The judgements are subjective, based on factual information. Hence, disagreement with the decision alone would not be appropriate grounds for an appeal” (my emphasis).

This is the ultimate Kafkan twist.  The subjectivity of the process of evaluation is admitted, but only as a reason for denying any right of appeal against its decisions on substantive grounds.  These are exactly the grounds, of course, on which most people would want to appeal—not those of sexual or racial discrimination, though it might be argued that the secrecy (aka “confidentiality”) attached to these evaluations—we are not allowed to see external assessors’ comments—would make it impossible to prove discrimination in individual cases anyway.

4.   What comes next?

For whatever reasons, the funding councils for British universities have chosen to allocate that portion of their budget earmarked for research in ways that (at least in the humanities) systematically ignore the normal and well-established international benchmarks for judging the quality of research and publications.  Instead, they have chosen to set up their own panopticon of hand-picked disciplinary experts, whose eminence is not in doubt, but whose ability to provide informed assessments of the vast range of outputs submitted to them may well be questioned.  The vagaries of this approach have been exacerbated in the 2014 REF by putting in place a financial reward regime that incentivizes universities to exclude potential 2* and 1* outputs from submission altogether.  The resulting internal university REF procedures are not only enormously wasteful of time and money that could otherwise be spent on research.  More importantly, they compound the elements of subjectivity and arbitrariness already inherent in the RAE/REF system, ensuring that evaluations of quality on whose basis individuals are excluded from the REF are often not made by subject-matter experts.   Research that crosses disciplinary boundaries or challenges norms may be especially vulnerable in this context, because it is seen as especially “risky.”

Whatever this charade is, it is not a framework for research excellence.  If anything, it is likely to encourage conventional, “safe” research, while actively penalizing risk-taking innovation—above all, where that innovation crosses the disciplinary boundaries entrenched in the REF Sub-panels.  The REF is no longer even a research assessment exercise, in any meaningful definition of that term, because so much of the assessment is now done within individual universities, in anything but rigorous ways, before it enters the REF proper at all.

Having made clear its intention to exclude from submission in the 2014 REF some of those faculty who would uncontentiously have qualified as “research-active” in previous RAEs, Lancaster University informs us that: “Career progression of staff will not be affected and there will not be any contractual changes or instigation of formal performance management procedures solely on the basis of not being submitted for REF2014.”  I hope the university means what it says.  But it is difficult to ignore the fact that Leicester University’s Pro-VC (as reported in THE) also reiterated that: “the university stands by its previously agreed ‘general principle’ that non-submission to the REF ‘will not, of itself, mean that there will be negative career repercussions for that person,’” even while spelling out his university’s intention to review the contractual positions of all non-submitted staff with a view to putting some on teaching-only contracts and firing others.

The emphasis on the weasel words is mine in both cases.  If universities truly intend that non-submission in the 2014 REF should not in any way negatively affect individuals’ career prospects, then what is to stop them saying so categorically and unambiguously?


[1] This is in part the result of a successful campaign by leading figures and organizations in the humanities in the UK against “bibliometrics.”  While I accept that the expectations of journal ranking and citation patterns applicable to the natural sciences cannot simply be transferred wholesale to the humanities, I would also argue that an evaluative procedure that ignores all consideration of whether an output has gone through a prior process of peer review (and if so, how rigorous that process was), where it has been published, how it has been received, and how often and by whom it has been cited, throws the baby out with the bathwater.  It also gives an extraordinary intellectual gatekeeping power to those who constitute the REF disciplinary—in all senses—sub-panels, but that is an issue beyond the scope of this post.

[2] REF 2014 Code of Practice Lancaster V4 24 July 2013.  Quoted from LU website, accessed 9 August 2013.  My emphasis on formally.  So far as I can see, what this does is limit legal liability to the VC and a senior advisory committee, while immunizing those who are heavily involved in making the actual assessments of outputs, including, notably, departmental research directors and paid external assessors, against any potential litigation.

[3] I may be wrong on this point, but Faculty-level procedures have never been published—presumably because only the VC and university REF Steering Committee are formally involved in making decisions on who is submitted.

[4] This is the procedure for the History UoA.  Other UoAs at Lancaster may vary in points of detail, for instance in using internal as well as external readers.  I cannot discuss these differences, because these procedures have not been published either.  I doubt the variance would be such as to escape the general criticisms I am advancing here.

[5] Indeed, Lancaster’s Associate Dean (Research) for the Faculty of Science and Technology—where evaluating outputs is arguably easier than in the humanities anyway—has admitted that “some weaknesses in the mock REF exercise are apparent, for example in many cases there was only one external reviewer per department, no doubt with expert knowledge but not in all the relevant areas, who was engaged for a limited time only.”  Michael Koch, SciTech Bulletin #125.

I received this unsolicited email today, the latest of several of the same ilk.  The transformation of British academia into a used-car lot proceeds apace.

Dear Colleague,
Horizon Research Publishing,USA (HRPUB) is a worldwide open access publisher serving the academic research and scientific communities by launching peer reviewed journals covering a wide range of academic disciplines.
We invite you to submit your papers to our journals.


1. Free of Charge – Sept 30 2013 
All papers submitted before Sept 30, 2013 will be published free of charge.

2. High Quality, Rapid Production 
The time between initial screening and online publication is normally around 2 months.

3. Peer Review 
All new articles submitted must undergo a rigorous review. The review results will be notified within 30 days.

4. How to Submit
Please submit your manuscript through our Online Manuscript Tracking System. Also, you can send your manuscript to submission@hrpub.org.

5. Register
Customer support is free for registered users.

6. Join Us as Reviewers
Please complete the membership application form and send it to joinus@hrpub.org.


Horizon Research Publishing, USA