Update, October 16.  I have censored this post at the insistence of Professor Trevor McMillan, Pro Vice-Chancellor (research) at Lancaster University.  I indicate passages that have been altered or removed by angle brackets <>.  

Times Higher Education recently reported that at Leicester University, “The position of all staff eligible for the [2014] REF but not submitted will be reviewed. Those who cannot demonstrate extenuating circumstances will have two options. Where a vacancy exists and they can demonstrate ‘teaching excellence’, they will be able to transfer to a teaching-only contract. Alternatively, they may continue on a teaching and research contract subject to meeting ‘realistic’ performance targets within a year.  If they fail to do so, ‘the normal consequence would be dismissal on the ground of unsatisfactory performance’” (08/08/2013, see full article here).   We live in interesting times.

1.   How it works—the context and the stakes

Having worked in North America from 1986 to 2006, when I took up a Chair in Cultural History at Lancaster University, I have spent most of my academic career in blissful ignorance of the peculiarly British institution that used to be called the RAE (Research Assessment Exercise) but has recently—and in good Orwellian fashion—been renamed the REF (Research Excellence Framework).   Many of my UK colleagues have known no academic life without such state surveillance.  But in most other countries, decisions on university funding are made without going through this time-consuming, expensive, and intellectually questionable audit of every university’s “research outputs” every five or six years.   Their research excellence has not conspicuously suffered in its absence.

The United States—whose universities currently occupy 7 of the top 10 slots in the THE World University Rankings—has no equivalent of the RAE/REF. This does not mean that research quality, whether of individuals or of schools and departments, is not evaluated.   It is continually evaluated, as anybody who has been through the tenure and promotion processes at a decent North American university, which are generally far more arduous (and rigorous) than their UK counterparts, will know.  But the relevant mechanisms of evaluation are those of the profession itself, not the state.  The most important of these are integrally bound up with peer-reviewed publication in top-drawer journals or (in the case of monographs) with leading university presses.  Venue of publication can be treated as an indicator of quality because of the rigor of the peer review processes of the top academic journals and publishers, and their correspondingly high rates of rejection.  Good journals typically use at least two reviewers (to counteract possible bias) per manuscript, who are experts in their fields, and the process of review is “double-blind”—i.e., the reviewer does not know the author’s identity and vice versa.  After an article or book has been published, citations and reviews provide further objective indicators of a work’s impact on an academic field.   The upshot is a virtuous (or, depending on your point of view, vicious) circle in which schools like CalTech, MIT, Princeton, or Harvard can attract the best researchers as evidenced mainly by their publication records, who will in turn bring further prestige and research income to those schools, maintaining their pre-eminence.

What immediately strikes anyone accustomed to the North American academy about the British RAE/REF is that at least in the humanities—which are my concern here—such quasi-objective indicators of research quality have been purposely ignored, in favor of entirely subjective evaluative procedures at every level.[1]  All those entered in the 2014 REF by their universities are required to submit four published “outputs” to a disciplinary sub-panel, whose members will then read and grade these outputs on a 4-point scale.   These scores account for 60 per cent of the overall ranking given to each “unit of assessment” (UoA), which will usually, but not always, be a university department.   The other 40 per cent comes from scores for “environment” (which includes PhD completions, external research income, conferences and symposia, marks of “esteem,” etc.) and “impact” (on the world at large, whose measurement has been the subject of considerable, and more or less entertaining, debate), assigned by the same sub-panel.

Disciplinary sub-panels have some latitude in how they evaluate outputs, and in the natural sciences the standing of journals and numbers of citations are likely to be seen as important indicators of quality.  The History Sub-panel has made it clear that it will not take into account venue of publication, citations, or reviews—so an article published in American Historical Review will be treated exactly the same as something posted on a personal website, as long as it is a published work.  The panel has also indicated that as a rule—and unlike with a typical book or journal article in the “real-world” peer review process that the panel has chosen to ignore—only one member of the panel will read each output.  While attempts will be made to find the best fit between the submitted outputs and the panel members’ personal scholarly expertise, in view of the size of the panel and the volume of material to be read this can by no means always be guaranteed.

In sum, in History at least—and I suspect across the humanities more generally—60 per cent of every UoA’s ranking will be dependent on the subjective opinion of just one panel member, who may very well not be an expert in the relevant field.   The criteria the panel intends to use for scoring outputs’ quality are (1) originality, (2) significance, and (3) rigor.  It is beyond me to see how competent judgments on these can be made by anyone who is not an expert in a field.   It is easy, on the other hand, to see why every university in the land was so desperate to get their people on REF committees.

For the stakes are high.  The aggregate REF score for each UoA determines (1) the ranking of that UoA relative to others in the same discipline across the country, and (2) the amount of research funding the university will receive for that UoA from the Higher Education Funding Councils until the next REF.   Both are critical for any school that has aspirations to be a research university, since—as all good sociologists know—what is defined as real is real in its consequences.

2.   What’s new in REF 2014—University-level assessment of outputs

In this context, it is important to note that—unlike in earlier RAEs—outputs scored below 3* now attract no financial reward.  In previous RAEs it was in the interests of all universities to submit as high a proportion as possible of eligible staff with the intention of benefiting from a multiplier effect, and university websites boasted of the high proportions of their faculty that were “research active” as indexed by submission in successive RAEs.  In the REF, outputs scored below 3* (3* itself being defined as “Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence”) now dilute a UoA’s ranking relative to competitor institutions without any compensating gain in income.  The last thing any UoA wants is for the GPA that would otherwise be achieved by its 3* and 4* outputs to be reduced by a long “tail” of 2* and 1* valuations.
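The arithmetic behind this incentive is easy to see. Here is a toy calculation (the scores are invented purely for illustration, not drawn from any actual submission): adding a 2* "tail" lowers a submission's GPA while leaving the number of income-earning outputs unchanged.

```python
# Toy illustration of REF "GPA dilution"; the scores are invented for the example.
def gpa(scores):
    """Average star rating across a submission's outputs."""
    return sum(scores) / len(scores)

def funded_outputs(scores):
    """Under the REF 2014 funding regime, only 3* and 4* outputs attract income."""
    return [s for s in scores if s >= 3]

core = [4, 4, 3, 3]         # a submission containing only 3*/4* outputs
with_tail = core + [2, 2]   # the same submission plus a 2* "tail"

print(gpa(core))                       # 3.5
print(gpa(with_tail))                  # 3.0 -- a lower ranking
print(len(funded_outputs(core)))       # 4
print(len(funded_outputs(with_tail)))  # 4  -- but no extra income
```

The two extra 2* outputs drag the GPA from 3.5 down to 3.0, yet the count of funded outputs stays at four: dilution of rank with no compensating gain, which is exactly why selectivity becomes the rational strategy.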

From the point of view of maximizing financial and reputational returns from the REF, the rational strategy for every research university is to exclude outputs ranked at 2* (“Quality that is recognised internationally in terms of originality, significance and rigour”) or below from entering a UoA’s submission, even if this means that fewer faculty are entered in the REF.  Sub-panels have also urged universities to be more selective in their submissions than in earlier RAEs, to ease their own workload.  All that material to read, often out of their fields of personal interest and expertise.  Why not sub-contract out the routine stuff?

Individual universities have responded differently to these pressures (and many, including Lancaster, have been understandably cagey about their plans).  But it now seems pretty clear that most, if not all, schools with ambitions to be regarded as research universities are going to be far more selective than in previous RAEs in who they submit.  Lancaster warns on its internal staff website:

Lancaster University is aiming to maximise its ranking in each UoA submission so final decisions [on who is submitted] will be based on the most advantageous overall profile for the University. It is anticipated that the number of staff submitted to REF2014 will be below the 92% submitted for RAE2008 and not all contractually REF-eligible staff will be submitted.

Rumor has it that the target figure for submission may in fact be as low as 65%, but that is, of course, only rumor.

Ironically, one of the reasons first advanced by the funding councils for shifting from the RAE to the REF format was “to reduce significantly the administrative burden on institutions in comparison to the RAE.”  The exact opposite has now happened.  Universities have had little alternative but to devise elaborate internal procedures for judging the quality of outputs before they are submitted for the REF, in order to screen out likely low-scoring items.  Many schools have had full-scale “mock-REF” exercises, in Lancaster’s case a full two years before the real thing, on the basis of which decisions about who will or will not be submitted in 2014 are being made.  The time and resources—which might otherwise have been spent actually doing research—that have been devoted to these perpetual internal evaluation exercises, needless to say, have been huge.

But more important, perhaps, is the fact that the whole character of the periodic UK research audit has significantly changed with the shift from the RAE to the REF, in ways that both jeopardize its (already dubious) claims to rigor and objectivity and could seriously threaten the career prospects of individuals.  For the nationally uniform (at least within disciplines) and relatively transparent processes for evaluating outputs by REF Sub-panels are now supplemented by the highly divergent, frequently ad hoc, and generally anything but transparent internal procedures for prior vetting of outputs at the level of the individual university.   A second major irony of the REF is that if people at Leicester—or elsewhere—are fired or put on teaching-only contracts because they were not entered in the university’s submission, the assessments of quality upon which that decision was taken will have been arrived at entirely outside the REF system.

In previous RAEs all decisions on the quality of outputs were taken by national panels comprised of established scholars in the relevant discipline and constituted with some regard to considerations of representativeness and diversity, even if (as I have argued above) their evaluative procedures still left much to be desired.  In the REF, key decisions on quality of outputs are now taken within the individual university, with no significant external scrutiny, before these outputs can enter the national evaluative process at all.

3.  Down on the ground—enter Kafka

As I said earlier, most universities are playing their cards very close to their chests on what proportion of faculty they intend to enter into the REF and how they will be chosen.  I therefore cannot say how typical my own university’s procedures are.  I have no reason to think they are especially egregious in comparison to others’, but they still, in my view, leave considerable cause for concern.

The funding councils require every university to have an approved Code of Practice that ensures “transparency and fairness in the decision making process within the University over the selection of eligible staff for submission into the REF.”   Lancaster’s Code of Practice is published on the university website.  It is long on promises and short on detail, with all the wriggle-room one expects of such documents.  It says: “Decisions regarding the University submission to the REF will lie with the Vice-Chancellor on the advice of the REF Steering Group.  No other group will be formally involved in the selection of staff to be returned.”[2]  The Steering Group (whose composition is specified in an appendix) is tasked inter alia with adopting “open and transparent selection criteria,” ensuring “that selection for REF submissions do not discriminate on the grounds of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation,” and detailing “an appeal process that can be used by all members of eligible staff in order to seek further consideration for submission.”  All good stuff, though it is not discrimination on these grounds that most of my colleagues are worried about so much as the university’s ability to come up with procedures that will deliver informed and fair evaluations of their diverse work.  But what actually happens?

History department members were asked to identify four outputs for potential submission to the REF together with one or more “reserves.”  These items were then all read by a single external reader, who was originally hired by the Department in an advisory capacity as a “critical friend”—<removed>—but who has since been appointed by the university as an external assessor for REF submissions.   This assessor ranks every potential History UoA output on the four-point REF scale.  As far as I understand the procedure,[3] his ranking will be accepted by a Faculty-level Steering Group as definitive, unless he himself indicates that he does not feel competent to evaluate a specific output.  In that case—and in that case only—the output will be sent for a second reading by an independent specialist.   We are not told who chooses second readers or on what criteria; and colleagues have not been consulted on who might appropriately be approached for informed and objective assessments of their work.  All specialist readers remain anonymous.[4]

History is an enormously diverse discipline, in terms of chronology (at Lancaster we have historians of all periods from classical antiquity to the 20th century), geography (we have historians of Britain, several parts of Europe, the Middle East, India, South-East Asia, the Caribbean, and the United States), subject matter (economic, political, social, cultural, etc.), and methodology.  To expect any one historian, no matter how eminent, to be able to judge the quality of research across all these fields is absurd.   At least when outputs are submitted to REF Sub-panels there is a reasonable chance that the person who winds up grading them might have some expertise in or at least knowledge of the field.  The History Sub-panel has 24 members and a further 14 assessors.  Lancaster’s procedure, by contrast, guarantees that decisions as to whether individuals are submitted in the REF at all depend, in most cases, on the recommendation of the same individual, who is very likely not to be technically qualified to evaluate the quality of the work in question.  This is not a criticism of this particular assessor; no one individual could possibly be qualified to assess more than a minority of outputs, given the diversity of the historical research produced within the UoA.  Again, the relevant contrast is with peer reviewers for journals, who are chosen by the editors precisely for their specialist competence to assess a specific manuscript—not for their general eminence in the profession, or their experience of sitting on REF panels.[5]

I find it poignant that so cavalier an attitude toward evaluating the research of colleagues should be adopted in a university that requires external examiners for PhDs to be “an experienced member of another university qualified … to assess the thesis within its own field” and—unlike in North America—also requires all undergraduate work to be both second-marked internally and open to inspection by an external examiner before it can count toward a degree.   Why are those whose very livelihood depends on their research—and its reputation for quality—not given consideration at least equivalent to that accorded the students they teach?

A particular casualty of this approach is likely to be research that crosses disciplines—something, in other contexts, both Lancaster University and the Research Councils have been keen to pay lip-service to—or that is otherwise not mainstream (and as such may be cutting-edge).  Given the stakes in the REF, it always pays universities to be risk-averse.   Outputs may be graded below 3* not because their quality is in doubt but because they are thought to be marginal or unconventional with regard to the disciplinary norms of the REF Sub-panel, with the result that what may be the most adventurous researchers are excluded from the submission.

Though there is a right of appeal for individuals who feel they have been unfairly treated in terms of application of the procedure, Lancaster’s Code of Practice is crystal clear that: “The decision on the inclusion of staff to the REF is a strategic and qualitative process in which judgements are made about the quality of research of individual members of staff.  The judgements are subjective, based on factual information. Hence, disagreement with the decision alone would not be appropriate grounds for an appeal” (my emphasis).

This is the ultimate Kafkan twist.  The subjectivity of the process of evaluation is admitted, but only as a reason for denying any right of appeal against its decisions on substantive grounds.  These are exactly the grounds, of course, on which most people would want to appeal—not those of sexual or racial discrimination, though it might be argued that the secrecy (aka “confidentiality”) attached to these evaluations—we are not allowed to see external assessors’ comments—would make it impossible to prove discrimination in individual cases anyway.

4.   What comes next?

For whatever reasons, the funding councils for British universities have chosen to allocate that portion of their budget earmarked for research in ways that (at least in the humanities) systematically ignore the normal and well-established international benchmarks for judging the quality of research and publications.  Instead, they have chosen to set up their own panopticon of hand-picked disciplinary experts, whose eminence is not in doubt, but whose ability to provide informed assessments of the vast range of outputs submitted to them may well be questioned.  The vagaries of this approach have been exacerbated in the 2014 REF by putting in place a financial reward regime that incentivizes universities to exclude potential 2* and 1* outputs from submission altogether.  The resulting internal university REF procedures are not only enormously wasteful of time and money that could otherwise be spent on research.  More importantly, they compound the elements of subjectivity and arbitrariness already inherent in the RAE/REF system, ensuring that evaluations of quality on whose basis individuals are excluded from the REF are often not made by subject-matter experts.   Research that crosses disciplinary boundaries or challenges norms may be especially vulnerable in this context, because it is seen as especially “risky.”

Whatever this charade is, it is not a framework for research excellence.  If anything, it is likely to encourage conventional, “safe” research, while actively penalizing risk-taking innovation—above all, where that innovation crosses the disciplinary boundaries entrenched in the REF Sub-panels.  The REF is no longer even a research assessment exercise, in any meaningful definition of that term, because so much of the assessment is now done within individual universities, in anything but rigorous ways, before it enters the REF proper at all.

Having made clear its intention to exclude from submission in the 2014 REF some of those faculty who would uncontentiously have qualified as “research-active” in previous RAEs, Lancaster University informs us that: “Career progression of staff will not be affected and there will not be any contractual changes or instigation of formal performance management procedures solely on the basis of not being submitted for REF2014.”  I hope the university means what it says.  But it is difficult to ignore the fact that Leicester University’s Pro-VC (as reported in THE) also reiterated that: “the university stands by its previously agreed ‘general principle’ that non-submission to the REF ‘will not, of itself, mean that there will be negative career repercussions for that person,’” even while spelling out his university’s intention to review the contractual positions of all non-submitted staff with a view to putting some on teaching-only contracts and firing others.

The emphasis on the weasel words is mine in both cases.  If universities truly intend that non-submission in the 2014 REF should not in any way negatively affect individuals’ career prospects, then what is to stop them saying so categorically and unambiguously?


[1] This is in part the result of a successful campaign by leading figures and organizations in humanities in the UK against “bibliometrics.”  While I accept that the expectations of journal ranking and citation patterns applicable to the natural sciences cannot simply be transferred wholesale to the humanities, I would also argue that an evaluative procedure that ignores all consideration of whether an output has gone through a prior process of peer review (and if so how rigorous), where it has been published, how it has been received, and how often and by whom it has been cited, throws the baby out with the bathwater.  It also gives an extraordinary intellectual gatekeeping power to those who constitute the REF disciplinary—in all senses—sub-panels, but that is an issue beyond the scope of this post.

[2] REF 2014 Code of Practice Lancaster V4 24 July 2013.  Quoted from LU website, accessed 9 August 2013.  My emphasis on formally.  So far as I can see, what this does is limit legal liability to the VC and a senior advisory committee, while immunizing those who are heavily involved in making the actual assessments of outputs, including, notably, departmental research directors and paid external assessors, against any potential litigation.

[3] I may be wrong on this point, but Faculty-level procedures have never been published—presumably because only the VC and university REF Steering Committee are formally involved in making decisions on who is submitted.

[4] This is the procedure for the History UoA.  Other UoAs at Lancaster may vary in points of detail, for instance in using internal as well as external readers.  I cannot discuss these differences, because these procedures have not been published either.  I doubt the variance would be such as to escape the general criticisms I am advancing here.

[5] Indeed, Lancaster’s Associate Dean (Research) for the Faculty of Science and Technology—where evaluating outputs is arguably easier than in the humanities anyway—has admitted that “some weaknesses in the mock REF exercise are apparent, for example in many cases there was only one external reviewer per department, no doubt with expert knowledge but not in all the relevant areas, who was engaged for a limited time only.”  Michael Koch, SciTech Bulletin #125.

On March 6 Research Councils UK (RCUK), the umbrella organization representing all the major UK public research-funding bodies, published its latest policy document regarding Open Access (OA) publication.[i]   Despite the far-reaching nature of the proposed changes to the academic publication landscape and the many objections that have come from learned societies and other stakeholders in the university sector, the document gives no time for consultation. The policy will come into effect in less than a month.  All “peer-reviewed research papers, which acknowledge Research Council funding, that are submitted for publication after 1 April 2013 and which are published in journals or conference proceedings” must be “OA compliant.”

A journal is considered to be OA compliant either if it “provides, via its own website, immediate and unrestricted access to the final published version of the paper … using the Creative Commons Attribution (CC BY) licence”  (Gold OA), or if it “consents to deposit of the final Accepted Manuscript in any repository, without restriction on non-commercial re-use and within a defined period” (Green OA).  In the case of Gold OA, publishers can charge the author an Article Processing Charge (APC).  With Green OA no APC is paid, but “RCUK will accept a delay of no more than six months between on-line publication and the final Accepted Manuscript becoming Open Access.”  For the humanities and social sciences (HSS), this embargo period will be extended—for the time being—to twelve months.

Lest there be any room for doubt, the document is clear that “Journals which are not compliant with RCUK policy must not be used to publish research papers arising from Research Council funded work” (para. 3.1[iv]). If the top journal in your field does not offer OA options—which it may not do if it is not UK-based—tough luck.

I have commented at length on many of the issues surrounding OA in my response to HEFCE’s call for advice on Open Access and the REF, which I posted on this blog on March 4, 2013.  As I said there, I am not opposed to OA as such, but I regard the way it is being railroaded through in the UK as a serious threat to both the quality of British universities and the academic freedom of researchers.  I shall not repeat those arguments here.  But some additional points might usefully be made.

1.  At present there is no restriction on where RCUK-funded authors may publish, but researchers can build the costs of APCs into grant applications.  Under the new regime not only will RCUK-funded researchers be banned from publishing in non-OA compliant journals; in a major change of policy, “Research grant and fellowship applications with start dates on or after 1st April 2013 are no longer permitted to include provision for Open Access publication or other publication charges in respect of peer-reviewed journal articles and peer-reviewed conference papers.” RCUK will now provide each university or eligible research institution with a “block grant,” from which APCs will be paid.  Each institution is required to establish “institutional publication funds, and the processes to manage and allocate the funds provided.” The document gives no guarantee that levels of funding available will be sufficient to meet demand for APCs, and provides no criteria for rationing publication funds should demand exceed supply.   “Institutions,” the document says, “have the flexibility to use the block grant in the manner they consider will best deliver the RCUK Policy on Open Access in a transparent way that allocates funds fairly across the disciplines.”

Thus all funds to support payments of APCs will be channeled through universities, which can determine how to distribute those funds and where necessary—as it almost certainly always will be—to ration them.  This may lead both to discrepancies of policy across universities and to consequent inequalities of opportunity to publish even among RCUK grant-holders.  It also provides an institutional framework within which criteria other than the quality of papers as judged by peer review will inevitably play an important role in determining whether or not research gets published.  By definition, any University Publications Committee is going to consist largely of people who are not experts in the relevant field, or even drawn from the same or a cognate discipline.  What criteria are they supposed to use to guide their choices?

2.  RCUK now explicitly recommend that “institutions should work with their authors to ensure that a proper market in APCs develops, with price becoming one of the factors that is taken into consideration when deciding where to publish.  HEFCE’s policy on the REF, which puts no weight on the impact value of journals in which papers are published, should be helpful in this respect” (para. 3.5[ii]).  In other words, where funds are tight universities may “encourage” researchers to publish not in the best journals in their field but the cheapest—and the “flexibility” given to universities to manage RCUK publication funds will allow them to reinforce this by withholding APCs from any authors who refuse to comply.  Not only does this risk harming individuals’ careers and the international standing of UK research, in ways that are too obvious to need spelling out here.  It is also an open invitation to cowboy “OA” publishers with no academic standing whatsoever to raid the UK market by offering cut-price outlets.  My mailbox has been full of invitations to publish in such dubious “peer-reviewed” venues already.

3.  “Monographs, books, critical editions, volumes and catalogues” remain exempt from the new RCUK policy, although we are told that: “RCUK encourages authors of such material to consider making them Open Access where possible.”  Before researchers in the arts, humanities, and social sciences heave a collective sigh of relief we might remind ourselves that every wedge has a thin end.  When RCUK first flirted with OA, back in 2005, it was also all about “encouragement.”  Humanists might also note the caveat in the fine print (para. 3.6 [ii]) on embargos within Green OA: the 12-month embargo period for HSS papers, it says, “is only an interim arrangement, and RCUK is working towards enabling a maximum embargo period of six months for all research papers.”

4.  Across the sector, the RCUK “aim is for 75 per cent of Open Access papers from the research we fund to be delivered through immediate, unrestricted, on-line access with maximum opportunities for re-use” (i.e. Gold OA) by the end of a 5-year “transition period.”  It is notable that no rationale is given for why this period should be five years—a target set despite the document’s recognition that much in the OA landscape remains uncertain, especially at the international level.  Should the UK turn out to be out of step with developments elsewhere, especially in continental Europe and North America, such targets for Gold OA may entail soaring costs for APCs in a context in which there has as yet been no compensating fall in journal subscription costs, compounding the financial problems that have underpinned the push toward OA in the first place.  As others have said before, the Gold OA model will only work economically if it is brought in globally.

5.  As it happens, there are already clear indications that the UK is significantly out of step with the United States—by far the most important player in the global academic game.  The Obama Administration’s recently announced OA policy differs from that espoused by RCUK (and HEFCE) in at least two major respects.  First, the form of OA adopted is Green OA, NOT Gold (which is discussed nowhere in the relevant US document!);[iii] second, the standard embargo period suggested is 12 months (as opposed to RCUK’s 6).  The document is explicit that this is a “guideline” that may be varied according to the “timeframe that is appropriate for each type of research” (p.3).

Nature comments: “it is now clear that US public-access policy is taking a different direction from that in the United Kingdom, where government-funded science agencies want authors to pay publishers up front to make their work free to read immediately. This immediate open-access policy involves extra money taken from science budgets to pay publishers.  NSF director Subra Suresh explained to Nature that he could not justify taking money out of basic research to pay for open access at a time when demand for the agency’s funding was high. With both the United States and Europe supporting delayed access to publications, the UK government looks increasingly isolated in its preference for immediate open access.”[iv]

6.  Finally, the US statement is also far more concerned with protecting the intellectual property rights of authors against the risks of abuse that some have argued are inherent in the CC-BY License, and explicitly charges research funding agencies to come up with plans “to help prevent the unauthorized mass redistribution of scholarly publications” (p. 3). Notwithstanding its acceptance that CC-BY may “more easily enabl[e] misattribution, misquoting, misrepresentation, plagiarism, or otherwise referencing materials out of context, which may be damaging to the interests of authors” (para. 3.7[iii]), RCUK remains committed to its introduction for (eventually) 75% of the papers resulting from RCUK support.

The White House has made clear that “The Obama Administration is committed to the proposition that citizens deserve easy access to the results of scientific research their tax dollars have paid for.”[v]   I have no quarrel with that proposition.  But to argue that just because university research is publicly funded it should therefore be made immediately and freely available for anybody to use more or less as they wish is a non sequitur.  It is rather like arguing that because government subsidizes the arts, all operas, concerts, and exhibitions should be free—or because the BBC is entirely funded by taxpayers’ money, anybody should be free to duplicate and use its TV and radio programs for whatever purpose they want.  Were we talking about films or music, of course, RCUK’s “Open Access” would be regarded as a charter for piracy.


[i] RCUK Policy on Open Access and Supporting Guidance, available at: http://www.rcuk.ac.uk/documents/documents/RCUKOpenAccessPolicyandRevisedguidance.pdf.  All quotations from this source unless otherwise noted.

[ii] More on Open Access: HEFCE brings out the big REF stick, available at: https://coastsofbohemia.com/2013/03/04/more-on-open-access-hefce-brings-out-the-big-ref-stick/

[iii] Executive Office of the President, MEMORANDUM FOR THE HEADS OF EXECUTIVE DEPARTMENTS AND AGENCIES, 22 February 2013, available at http://www.whitehouse.gov/sites/default/files/microsites/ostp/ostp_public_access_memo_2013.pdf

[iv] White House announces new Open Access Policy, available at: http://blogs.nature.com/news/2013/02/us-white-house-announces-open-access-policy.html

[v] Ibid.

HEFCE (the Higher Education Funding Council for England), the body that funds and oversees English universities, has asked for responses to its proposals to allow only papers that meet its criteria for “Open Access” to be submitted to the next Research Excellence Framework (REF), the periodic review that determines how much research funding each university receives.  Here is my response:

Open Access and post-2014 REF

1.  I am not opposed to Open Access (OA) in principle, and I can see the long-term benefits of its universal adoption within academic publishing—at least for scientific articles and papers.  The argument is less convincing when it comes to books.  But it is very far from clear to me how the UK can benefit from unilaterally moving to OA outside the framework of an international agreement, when the market for academic research is a global one.  Against this background, I believe the proposed routes and timetables for OA adopted by the UK government, RCUK, and HEFCE are dangerous for British academia, especially in the humanities and social sciences.  In particular, I believe that the proposal to use the REF as a disciplinary tool for achieving OA aims is a huge mistake that could have appalling long-term consequences.

2.  Following the Finch Report, you argue that “in the long term, the gold rather than green route may be the most sustainable way to deliver open access.”  In the present funding context, the gold route (in which the author pays an article processing charge, or APC, and the output is immediately available to the public) has serious disadvantages.  At an estimated £1500 per article, few academics will be able to afford to pay APCs themselves.  They will therefore depend upon their institutions to do so for them.  HEFCE has made clear that universities will receive no additional funds to cover these costs.  It seems extremely unlikely that funds will be available to cover APCs for all articles produced in British universities and accepted for publication.  Some will therefore not now be published, or at least not published in venues admissible for the REF (unless they are published as green OA).  Far from the products of research in British universities being more easily available to the public, therefore, some proportion of those products may not now be available to the public at all—not for reasons of quality, but for reasons of cost.  I do not see how this benefits anybody: funders, researchers, or consumers.

3.  In some ways even more disturbingly, so long as funds are not available within all universities fully to support the costs of APCs for all researchers, some rationing mechanism will have to be developed for the use of such funds as there are.  You do not have to be an Einstein to imagine the viciousness of the dogfights over these funds—between universities, between disciplines, between colleagues—that are likely to result.  Nor does it require much imagination to identify the likely losers: early career scholars, especially those on sessional contracts; retired faculty; individuals working within lower-ranked and worse-funded universities.  More generally, what gets published will now be susceptible to all the factors that determine the allocation of budgets between and within universities, including disciplinary hierarchies, university managers’ strategic priorities, institutional politics, and personal rancor.  It might be worth mentioning here that the four papers Albert Einstein published in 1905, which by common consent laid the foundations of modern physics, would likely never have seen print under gold OA: he did not have a university post at the time, but worked as a (not very well paid) clerk in the patent office in Berne.

4.  Notwithstanding the current inequalities between institutions, hitherto in the UK a researcher’s chance of having his or her paper published has depended entirely upon journals’ processes of peer review.  Under these proposals, not only will the range of publication venues be narrowed by HEFCE and RCUK—in ways that could impact very negatively on individuals’ careers if leading international journals published outside the UK do not go down the OA route—but universities will also be the gatekeepers to the funds a faculty member needs in order to be able to afford to publish his or her work at all in venues approved by HEFCE and RCUK.  It is here that tying the REF to OA is most dangerous, because not being submitted in the REF—whether because of having published in “the wrong place,” or because a university was unwilling or unable to fund the APC—may cost researchers promotion or even, in the extreme case, their jobs.  Given the importance of publication at every stage of an academic career, it is difficult to conceive of a more serious threat to academic freedom.

5.  For all the reasons given above, I believe that green OA (materials deposited in an institutional repository and made freely available after an embargo period) is much preferable to gold.  However, I am not sure, in the long run, that the embargo period central to green OA is workable.  If it is too long, funders won’t accept it as true OA.  If it is too short, the risk is that libraries will not continue to pay subscriptions for journals whose contents will become freely available online within a year or two anyway.  Here differences between disciplines become crucial.  In the sciences a two-year embargo will usually be well into, if not well past, an article’s “half-life.”  In the humanities, where the typical wait for a journal article to be published is two years, it will only just be beginning to be cited at the point it comes off embargo.  Green OA avoids the patent inequities and threats to academic freedom that accompany gold in the UK funding context.  It runs the risk, however, that many journals, especially in the humanities, may be driven out of business if embargo periods are too short, with a consequent further restriction of opportunities for academic publication.  This is most likely to affect smaller, independent journals published by learned societies (thereby jeopardizing funding for those societies’ other activities).  It is also likely to inhibit the emergence of new journals, to the academic community’s detriment.

6.  The advantages of OA are most obvious for the natural sciences, where the paper (often short, often multi-authored) is the most common vehicle of publication, the half-life of papers is relatively short, and journal subscription costs are high.  But none of these conditions obtain in large areas of the humanities and some areas of the social sciences, where books are equally common vehicles of publication, the half-life of publications is much longer, and subscriptions are generally cheaper.   In History, the monograph—generally single authored—remains the “gold standard” of research publication, while chapters in edited collections are as common as journal articles.  I do not accept the argument that “research in all subjects has equal importance and therefore equally merits receiving the benefits of open-access publication.”  This would be true if and only if the form of OA adopted takes into account the requirements of the relevant disciplines.  The proposals set out in Open Access and Submissions to the Research Excellence Framework post-2014 fail to meet this test.  In mandating a model tailored to the publication requirements of the natural sciences, you risk seriously jeopardizing publication opportunities in other disciplines.

7.  Where this refusal to take sufficient account of disciplinary differences is clearest is in the paper’s woefully inadequate discussion of monographs.  You recognize that “there may be some exceptions during this transitional period” (para. 17), of which the monograph is one.  You express hopes that OA will proceed more gradually with regard to monographs (para. 22), while recognizing that “we are at present some way from a robust and generally applicable approach to open-access publication for monographs.”  You ask for advice on whether this anomaly is best handled by treating the monograph as an exemption or “specifying that a given percentage (for example, 80 per cent) of all outputs submitted by an institution meet the requirement [of OA compliance].”  I would emphatically reject the latter: any such quantification is wholly arbitrary, and ignores variations in disciplinary mix across institutions.  But what I find more disturbing is the presumption that in monographs as in papers OA is the inevitable and desirable future.  My own book The Coasts of Bohemia, published in 1998, has now sold over 14,000 copies.  I very much doubt it would have done so had it not had the distribution and publicity machinery of Princeton University Press behind it.  Simply to dump a text in a repository is not, in and of itself, to widen public access to the products of academic research.  In the humanities, at any rate, if OA drives such presses out of business, it will be the public that is the loser because our writings will be languishing in repositories, to be read only by specialists, instead of being actively marketed by knowledgeable and committed publishing houses.

8.  Finally, I believe there is disingenuousness right at the heart of this proposal.  The REF purports to be an exercise that assesses the quality of research being produced in UK universities, and as such determines QR funding.  It has been a mantra of the REF in all its previous iterations that quality is evaluated independently of venue of publication—were it not, panels would not have to read outputs, and evaluation could be done on the basis of bibliometrics alone.  HEFCE is now proposing to use the REF for another purpose entirely, that of furthering the cause of Open Access.  Irrespective of the intrinsic merits of the latter, to exclude from the REF all research that is not published in OA-compliant journals, no matter how excellent it is, or how internationally eminent the venue in which it appears, is in flagrant contradiction to the stated aims of the REF itself.  Instead of a framework intended to facilitate research excellence, it threatens to become a disciplinary tool designed to force academics to publish their work not in the best or most appropriate international venues for their discipline, but in venues that advance an unrelated political agenda.  This is a bullying and shortsighted travesty of everything HEFCE stands for.

Derek Sayer, FRHistS, FRSC

Professor of Cultural History, Lancaster University

Professor Emeritus (Canada Research Chair), University of Alberta

March 3, 2013

The HEFCE document to which I am responding may be accessed at: http://www.hefce.ac.uk/news/newsarchive/2013/name,78750,en.html

“After 1905, Einstein’s miraculous year, physics would never be the same again. In those twelve months, Einstein shattered many cherished scientific beliefs with five extraordinary papers that would establish him as the world’s leading physicist … The best-known papers are the two that founded special relativity: On the Electrodynamics of Moving Bodies and Does the Inertia of a Body Depend on Its Energy Content? In the former, Einstein showed that absolute time had to be replaced by a new absolute: the speed of light. In the second, he asserted the equivalence of mass and energy, which would lead to the famous formula E = mc2.

On a Heuristic Point of View Concerning the Production and Transformation of Light … challenged the wave theory of light, suggesting that light could also be regarded as a collection of particles. This helped to open the door to a whole new world—that of quantum physics. For ideas in this paper, [Einstein] won the Nobel Prize in 1921.

The fourth paper also led to a Nobel Prize, although for another scientist, Jean Perrin. On the Movement of Small Particles Suspended in Stationary Liquids Required by the Molecular-Kinetic Theory of Heat concerns the Brownian motion of such particles … which Perrin [later] confirmed experimentally. The fifth paper, A New Determination of Molecular Dimensions, was Einstein’s doctoral dissertation, and remains among his most cited articles. It shows how to calculate Avogadro’s number and the size of molecules.

These papers … [are] among the high points of human achievement and [mark] a watershed in the history of science.”

From Einstein’s Miraculous Year: Five Papers That Changed the Face of Physics (Edited and introduced by John Stachel), Princeton University Press 2005.

http://press.princeton.edu/titles/6272.html

*

Recent events in Britain threaten to change the landscape of academic publishing–fundamentally.  Flying the flag of “Open Access,” the UK government has accepted the 2012 Finch commission’s recommendation that all papers arising from publicly funded research should henceforth be available to the taxpaying public free of charge.  To achieve this the so-called “gold” model of open access will be used.

Currently most scientific and scholarly journals charge their authors nothing to publish, and make their money wholly through subscriptions, mostly to university and other institutional libraries.  Paying to publish is viewed with suspicion, as the hallmark of a “vanity press.”  But readers have to be members of a subscribing library to access these journals (even online) free of charge.  In this sense the publications arising out of most academic research, even where that research is publicly funded, remain behind a “paywall.”  On the proposed gold model of open access, journals will make their contents freely and immediately available online to all, and recoup their costs by charging authors an “article processing charge” (APC).  Both Finch and the government rejected an alternative “green” open-access model, in which there is no APC but papers are made freely available to the public after an embargo period of two to three years.  This green model is favored by most British learned societies in the humanities and social sciences, whose vigorous protestations have so far been ignored.[i]

While it is accepted that there will be a period of transition before all UK-based journals have become “open-access compliant”—to say nothing of journals (the majority) published in other countries—the government has made it clear that it wishes to see Britain taking the lead in the move to open access.  To that end, RCUK—the most important funding body for academic research in the UK—has already instituted (effective April 2013) “a requirement that results arising from their funding are published only in journals that are compliant with Research Council policy on Open Access. Authors will therefore be expected to select from among such journals when choosing where to publish their research.”[ii]   HEFCE—the Higher Education Funding Council for England, which provides so-called QR funding to individual English universities on the basis of their performance in the periodic assessment exercises known as the Research Excellence Framework (REF)—has also announced that it intends to implement “a requirement that research outputs submitted to any future Research Excellence Framework (REF) should be as widely accessible as possible at the time.”[iii]  This is an extremely powerful mechanism for ensuring that individual faculty members publish in (gold) open-access venues, because exclusion from a university’s REF submission will blight their prospects for career advancement and could even cost them their job.

The average APC, according to the Finch report, will work out at around £1500 (US$2367) per paper—far more than the average faculty member can afford.  “Publication funds” will be set up within universities to cover the costs of APCs, though it has been made clear that these will come out of existing research budgets.  Given the state of public finances there is no chance that the funds available will remotely cover the range of papers that come out of UK universities.  You do not have to be an Einstein to imagine the viciousness of the inevitable coming dogfight over these funds—between universities, between disciplines, between colleagues.  Nor does it require much imagination to identify the probable losers: early career scholars, especially those on sessional contracts; retired faculty; individuals working within the lower-ranked and worse-funded schools.  The humanities and social sciences will likely lose out, to the benefit of science and engineering.  And, quite possibly, so will anyone who fails to ingratiate themselves with their colleagues or makes a habit of getting up the noses of their superiors.

Notwithstanding the great inequalities between institutions—including huge differences in teaching loads (and therefore time available for research) and in the quality of library provision—a researcher’s chance of having his or her paper published has up till now depended entirely upon the journal’s processes of peer review.  If it meets a journal’s standards of quality it will get into print.  Under this gold model of open access, not only will the range of publication venues be controlled (and, if other countries do not follow the UK’s lead, severely restricted) by HEFCE and RCUK, disadvantaging UK researchers against colleagues elsewhere; the universities, our employers, will now also be the gatekeepers to the funds a faculty member needs in order to be able to afford to publish his or her work in HEFCE- and RCUK-approved venues at all.  Given the importance of publication at every stage of an academic career, it is difficult to conceive of a more potent disciplinary mechanism—or a more serious threat to academic freedom.

One can see the attractions of enserfing academics, both for governments and university administrators.  It has only one drawback for any country that values research excellence:

ALBERT EINSTEIN DID NOT HAVE A UNIVERSITY POSITION DURING HIS “MIRACULOUS YEAR.”  IN 1905 HE WAS WORKING AS A CLERK IN THE PATENT OFFICE IN BERN.