Ever wondered how newspapers find enough time to read all the reports published each day by think tanks, lobbyists, and unions before writing articles about them? It turns out — and you will be shocked to discover this — that most of those reports don’t get read. Instead, the journalists dutifully copy and paste the media release sent out with them…
… some of this might be better in a different post that I’ve got cooking up as a result of Monday night’s MediaWatch. I think MediaWatch has again unquestioningly helped itself to a bunch of assumptions, and that the most interesting parts of the discussion are buried within those assumptions. Even so, you would imagine that, if a journalist is sent a report attached to a media release, the journalist would read the report before plagiarising from the media release and adding their byline.
Let’s have a look at the National Tertiary Education Union’s new report regarding the Excellence in Research for Australia scheme.
In 2012, the National Tertiary Education Union (NTEU) undertook an exploratory, multi-method study about the implications of the Excellence in Research Australia (ERA) exercise and measures of research performance for Australian university staff. This study relied upon research conducted between April and September 2012, involving:
• A national survey of 39 senior research administrators about the use of ERA and the use of indicators of research performance at universities around Australia;
• Eight recorded and de-identified focus groups at four institutions, and eleven recorded and non-recorded in-depth, semi-structured interviews with participants from five institutions, the recorded sample totalling 50 participants; and
• An NTEU workshop in Melbourne that focused upon the experiences of 35 Early Career Researchers (ECR) and Academics (ECA). [page 3]
For me, it sort of feels like the method (which they will later call their ‘methodology’) doesn’t quite match the proposed task (studying ‘the implications of the ERA exercise and measures of research performance for Australian university staff’). At best, this approach will show the extent to which people understand and agree with the ERA.
Lo and behold, that’s what they discovered:
Amongst participants there was a wide divergence of opinions about what the ERA was meant to do, whether it was fulfilling those purposes, and the extent to which it generated public benefit. In contrast to DIISRTE’s claims that there is a ‘broad acceptance of ERA as a rigorous method’ by tertiary education stakeholders, many participants had fundamental concerns with the ERA methodology. In any case, it is important to highlight that the ERA was not universally considered a poor assessment instrument.
Many academics and researchers expressed deep concerns about whether the ERA process was inclusive of different kinds of research output, and the implications for publishing in journals not included in the 2012 ERA Journal List (such as foreign language journals). Some expressed concerns about whether the ERA process was inclusive of non-traditional outputs, such as creative outputs, and other esteem indicators (such as editing non A* ranked journals). In nearly every focus group, an assumption was made that the ERA process had a disproportionate journal emphasis, and disadvantaged particular disciplines and kinds of research.
In part, this can be contextualised by the fact many academics and researchers had poor access to information about the ERA. However, barriers to information also led many researchers to conflate the ERA with practices and behaviours at various institutions that have either sought to manipulate the ERA outcomes or drawn upon ERA indicators in the performance management of staff. This includes the formal use of the ERA journal rankings over a year after they were abandoned and in spite of some efforts by the ARC to ensure they are no longer used. In limited instances the ERA and the ARC were blamed for institutional research performance practices in fact unrelated to the ERA. [page 17]
This finding, by the way, completely slipped the author’s mind when it came time to write the executive summary.
The study found that whatever concerns university staff have had about the robustness and probity of the ERA instrument, a pressing concern is what the ERA will be ultimately used for, as the ERA has been integrated into more intense modes of university managerialism. Furthermore, greater investment through the Sustainable Research Excellence (SRE) program will have important implications for Australian universities as workplaces integral to the creation of new knowledge, as was perceived by participants in this study. The policy settings underpinning the ERA and its use directly impact upon the confidence of university researchers in the ERA, as well as their very capacity to undertake broad and diverse types of research. [page 3]
If you make the effort to parse that paragraph, you’ll discover that the statement is in no way supported by the findings in the rest of the report. The report equivocates between ‘People being interviewed told us that X’ and ‘X is the case’. Indeed, the earlier point about there being broad confusion about the ERA shows that you can’t draw any firm conclusions about the ERA as a tool.
This problem leaks throughout the report, and the lack of an analytical approach means that the error snowballs.
For example, the report complains that the ERA process does not require universities to consult with the researcher involved about the classification of their research:
It is notable then that the 2010 and 2012 ERA Submission Guidelines exclude any stated obligation for institutions to engage or inform university researchers, even though it is they who are apportioned to FoR codes in institutional submissions and it is their research that is being assessed. [page 22]
The (correct, by the way) claim is made again later on:
As already stated, there is no formal expectation that staff engagement will occur in relation to submission of research output, apportionment or assignment of FoR codes. [page 27]
In this latter instance, it’s insinuated that this lack of engagement is for dodgy reasons:
As already stated, there is no formal expectation that staff engagement will occur in relation to submission of research output, apportionment or assignment of FoR codes. This may provide some protection for staff against allegations of criminal conduct but this would also mean that the capacity to ensure probity about an institution’s submission is significantly diminished. [ibid]
The problem is that the NTEU starts to add fuel to the phony fire between academics and administrators.
It is apparent that there is in fact a fine line between an administrator providing counsel to ensure researchers maximise the university’s ERA score and researchers being improperly directed to assign inaccurate FoR codes. One Level B researcher would have preferred access to her record of research output to ensure that her school was benefiting from her publication record. [page 28]
The Level B researcher never explains for what purpose she should access the ERA submission. The report strongly suggests that these poor researchers are hard done by due to these nasty administrators compiling the submission, but it never provides any argument as to why.
Let’s go through what’s being discussed:
The ERA is like a massive game of Dungeons & Dragons. The ARC releases a stack of rules by which all universities must abide. The end goal is to work out whether or not Australian universities are any good at conducting research. Research itself has a very specific definition. Furthermore, the goal is to work out where our research strengths are. We can — and should — debate what we mean by these measures. How will we know if Australian research is any good?
It’s not, mind, about whether the universities are any good at teaching. Or pumping out artworks. Or training first-rate lawyers. Or putting on orchestral pieces for the local cognoscenti. None of that counts. We’re talking about research:
The creation of new knowledge and/or the use of existing knowledge in a new and creative way so as to generate new concepts, methodologies and understandings. This could include synthesis and analysis of previous research to the extent that it leads to new and creative outcomes. [Source]
But that’s the puzzle. A whole lot of Australian ‘researchers’ aren’t pumping out ‘research’ consistent with that definition.
And they get really upset by that.
Some expressed concerns about whether the ERA process was inclusive of non-traditional outputs, such as creative outputs, and other esteem indicators (such as editing non A* ranked journals). In nearly every focus group, an assumption was made that the ERA process had a disproportionate journal emphasis, and disadvantaged particular disciplines and kinds of research. [page 17]
So some academics are concerned that a process to assess the quality of Australian research doesn’t take into consideration things which aren’t research. Crazily enough, the ERA already has a wide range of concessions to non-research activities (esteem measures).
There are two approaches that the ARC could take. Every university in the country could submit all of its published works to the ARC and they could employ an army of public servants to sift through the publications, assess their eligibility, code them, and then put together batches for independent assessment. Or — hold on to your hats, people, this is going to get controversial — they could get universities to manage all that crap instead.
I do see it as a problem that the government is increasingly externalising its processes. Instead of paying for more public servants, they force universities to hire more administrators, which in turn means that universities can’t hire more researchers, which means the government has to provide more money to universities, &c., &c., &c.
On the other hand, getting universities to manage the process does have its benefits. They have all the systems and knowledge to put together the submission more efficiently than the government can.
But the moment you externalise the process, you allow a problem to enter the system: universities have an interest in presenting the best possible submission that they can, and so they will polish the brightest bits and try to hide the darker bits. Thus, you need a Dungeon Master’s Compendium of Rules to make sure there’s a level playing field.
The NTEU report intimates that this is a terrible thing. How dare universities attempt to provide the best possible submission? Here were the NTEU’s three complaints:
‘Appointing research stars’ – efforts by universities to lift the overall institution or a school’s ERA submission by employing highly productive ‘research stars’, sometimes on a part-time or adjunct basis.
‘Disappearing research’ – where a research manager or senior administrator would ensure certain research outputs would not be evaluated, by apportioning them to FoR codes that did not meet the low volume threshold, and without input from the relevant eligible researchers. Sometimes, this would occur through ‘directed manipulation’ with senior administrators directing or advising managers to not enter research, or researchers not to submit output.
‘Horse-trading’ research outputs which involved discussions between university administrators and decision makers to ‘hive-off’ research, i.e. discussing how certain research outputs would be assigned or apportioned multidisciplinary FoR codes to maximise ERA scores in particular institution-preferred FoR codes. [page 26]
The first one is ridiculous. How the NTEU could be horrified that universities were employing people who could achieve higher levels of research output, I’m not sure. Perhaps it’s a union thing that they don’t like people who make members look lazy?
The second and third claims are actually the same point. But let’s put it a different way. I write the most amazing book on the intersection of legal theory and Batman. I code it 180122 (Legal theory) and 200213 (Batman studies). I’m employed by Miskatonic University and their administrators look at my most amazing book and think: ‘We don’t have anybody publishing in Batmanology except for Mark, and we really need more publications in legal theory. We’ll assign it to be 100% legal theory.’
Nothing dishonest has happened here, but the NTEU is suggesting that it has. What’s more interesting is that the NTEU suggests that the university should be negotiating with me personally about the classification of my work, but the classification only makes sense if I know what everybody else is publishing. I’m sure the NTEU would have a fit if everybody’s publication activity were handed out to everybody regardless of need to know.
This is because the NTEU only pays lip service to the ERA’s purpose as an institution-level metric. It is trying to argue that individual academics have an interest in the ERA submission, but that’s not true.
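To make the bookkeeping concrete, here’s a minimal sketch of what that apportionment decision amounts to. The low volume threshold of 50, the existing output tallies, and the helper function are all illustrative assumptions of mine (the ARC’s actual ERA rules are more involved), but they show why the ‘disappearing research’ complaint and the re-coding of my Batman book are the same arithmetic: coded 50/50, half the book falls into a code that never clears the threshold and is never assessed; coded 100% legal theory, all of it is.

```python
# Illustrative sketch only: the threshold, the tallies, and the helper are
# assumptions for the sake of the example, not the ARC's actual ERA rules.

LOW_VOLUME_THRESHOLD = 50  # assumed minimum weighted volume for an FoR code to be assessed

# The institution's existing weighted output, before my book is added.
institution_totals = {"180122 Legal theory": 60.0, "200213 Batman studies": 2.0}

def assessed_fraction(apportionment, totals, threshold=LOW_VOLUME_THRESHOLD):
    """Return the share of an output that lands in FoR codes which clear
    the low-volume threshold once the output itself is counted."""
    combined = dict(totals)
    for code, fraction in apportionment.items():
        combined[code] = combined.get(code, 0.0) + fraction
    return sum(fraction for code, fraction in apportionment.items()
               if combined[code] >= threshold)

# As I coded it: half legal theory, half Batman studies.  The Batman half
# sits in a code far below the threshold, so it is never evaluated.
print(assessed_fraction({"180122 Legal theory": 0.5,
                         "200213 Batman studies": 0.5},
                        institution_totals))  # -> 0.5

# As the administrators re-coded it: all legal theory, all assessed.
print(assessed_fraction({"180122 Legal theory": 1.0},
                        institution_totals))  # -> 1.0
```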
To sum up:
- The NTEU draws conclusions not supported by the evidence. Interviewing people about their perceptions won’t tell you what impacts the ERA is having on university management (especially when you bury the fact that the people being interviewed had a poor understanding of the ERA process);
- The NTEU’s conclusions (if valid) only make sense within the framework of particular assumptions about the purpose of the ERA. Should the ERA assess a broader range of activities than just research? If so, why?
- The NTEU complains that universities are hiring people who are capable of greater levels of research output. Universities should be hiring people capable of higher levels of research output;
- The NTEU complains that universities are ‘manipulating’ results but does not provide an argument for why this ‘manipulation’ is in any way a problem.
Frankly, it looks like the NTEU is trying to argue against any sort of metric for universities. There are suggestions that applications for research grants provide their own measure of research performance (page 36), but the NTEU stops short of endorsing an ‘invisible hand’ approach because, well, a lot more academics would lose their jobs if performance were measured solely by success in grant applications.
The best piece of advice I received regarding the ERA was from a very senior academic who had passed through several different assessment schemes: treat academia like a real job and do your best. Neither the ERA nor the ERA journal rankings showed anything that people didn’t already expect. We already knew which journals were the good ones. We already knew which law schools, philosophy departments, &c., &c., were the good ones and which were less so. We just needed some ‘objective’ way of demonstrating it. If an academic is doing a great job and publishing as they should be, the institution’s ERA process should take care of the rest. The whinging tends to come disproportionately from people who are worried about demonstrating their research performance to others.
That’s not to say that the ERA process is perfect. Classics doesn’t have an FoR code, so we should expect to see classics schools wiped out by the ERA (because it will be hard for them to show how they generate research income). That’s just one example of many systemic problems.
If only the NTEU had been analytical, its report would have been able to expose these systemic problems. Instead, it went down the easy path of having a sook.