Daily Bulletin

  • Written by The Conversation Contributor
[Image: ERA 2015, ARC]

The Excellence in Research for Australia (ERA) exercise conducted by the Australian Research Council (ARC) purports to benchmark Australian university research against the research produced by the rest of the world. It is supposed to give everyone an objective measure of the quality and quantity of research in the Australian higher education system and, of more direct relevance, it determines part of the Australian Government’s research block grant allocation.

Taking place every three years, the collection and analysis of research data by the universities and the ARC is a massive undertaking. Each university has teams of staff organising the collection over the space of 6 to 12 months. Internally, academic staff vet and sort through the submission; 1,300 academics are involved in peer reviewing submissions; and, finally, 155 academics are co-opted onto panels to evaluate and rate the submissions alongside the ARC.

The financial cost to the universities of this collection process is in the tens of millions of dollars. So the fundamental question has to be: was all of this activity and money worth the effort?

Answering this question depends firstly on whether you can actually believe the results, and secondly on what those results mean in practice. The second point is critically important because, as we will see, the data has already been wildly misinterpreted by the media to create a very false picture of the research landscape in Australia.

According to the ERA data, 29% of all Australian research has improved its rating since the evaluation in 2012. It is worth remembering that ERA 2015 includes three years of the same data as were submitted in 2012, so this increase in quality is all the more impressive given that it mainly came from research done in 2011–2013.

An alternative explanation for the improvement, however, is that universities simply got better at presenting their submissions to the ARC.

The ARC claims that the purpose of the ERA is not to create a ranking of institutions or a league table, because it is very difficult to make comparisons between very different submission profiles. However, the ERA is used, in part, to determine the amount of research funding that universities receive, so clearly at some point universities are being put into a list, with those at the top receiving more than those at the bottom.

The dangers of doing this ranking were highlighted by the “analysis” of university performance carried out and published by The Australian. Its method of ranking the universities relied on giving each university a score by taking the average of its scores in each Field of Research (FoR) and then taking an average of those averages. Leaving aside the fact that an average of averages is not the same as an average of the underlying data (which is arguably what they should have used), the situation was further compounded by the fact that they did not include the ratings at the broad two-digit FoR category level unless a university had received no rating within that category at the four-digit level.

Complicated? Yes, but also completely flawed, because this process left out the scores given to a raft of publications. It skewed smaller universities up and larger universities down, and it led to outlandish claims that universities with smaller research activities were outperforming the larger ones.
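To see why this matters, consider a toy example. The sketch below uses invented ratings and field names (not ERA data) to show how an average of FoR averages diverges from an average over all rated units, and how it flatters a small university with one strong field:

# Toy illustration with invented numbers (not ERA data) of why an average
# of FoR averages differs from an average over all rated units.

small_uni = {"Chemistry": [5.0]}                # one strong field
large_uni = {"Chemistry": [5.0, 5.0, 5.0],      # several strong rated units
             "History": [3.0]}                  # plus one weaker field

def average_of_averages(uni):
    # Average each FoR first, then average those field averages
    # (the method attributed to The Australian).
    field_means = [sum(s) / len(s) for s in uni.values()]
    return sum(field_means) / len(field_means)

def overall_average(uni):
    # Average across every rated unit directly.
    scores = [x for s in uni.values() for x in s]
    return sum(scores) / len(scores)

print(average_of_averages(large_uni))  # 4.0: each field counts equally
print(overall_average(large_uni))      # 4.5: each rated unit counts equally
print(average_of_averages(small_uni))  # 5.0: the small university "wins"

The large university’s strong chemistry units are diluted by a single weaker field, while the small university’s lone rating stands alone: exactly the skew described above.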

The ERA analysis is particularly complicated, with many nuances where mistakes can be made in submissions and where submissions can be optimised. ERA 2015 introduced a new assessment outcome, ‘n/r’, which the ARC explained as follows: “In some cases UoEs [units of evaluation] were not rated (‘n/r’) by the Research Evaluation Committee due to coding issues.” Twenty-nine submissions were not rated this time around. Although we cannot know exactly why there were “coding” problems that made these FoRs unrateable, that they existed at all highlights that data decisions are made by the submitting institution as well as by the ARC. These n/r scores were simply ignored by The Australian.

What The Australian’s analysis highlighted was that unless you have a real understanding of how the data are collected and analysed, there is really very little you can say in comparing one university to another.

There are better ways of assessing university research quality that would not involve the universities themselves in the process, and many international rankings of universities purport to do this via a variety of methods. At least for the disciplines evaluated by metrics, the ARC could conduct a citation analysis without involving the universities in deciding which categories the papers fit into. Until then, the data should be interpreted with caution and any direct comparisons between universities largely avoided.
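As a rough illustration of what such a citation analysis might look like, the sketch below computes a field-normalised citation indicator from papers drawn straight from a bibliographic database, with no university input. The world baselines, paper data and field names are all invented for illustration; this is not the ARC’s actual methodology.

# Hypothetical sketch of a field-normalised citation indicator.
# All baselines and paper data are invented; this is not the ARC's method.

# Assumed world-average citations per paper, by field and publication year.
WORLD_BASELINE = {("Chemistry", 2012): 10.0, ("Chemistry", 2013): 8.0}

# Papers as (field, year, citations), taken from a bibliographic database
# rather than from a university's own submission choices.
papers = [("Chemistry", 2012, 15), ("Chemistry", 2013, 4), ("Chemistry", 2013, 12)]

def relative_citation_impact(papers):
    # Mean of each paper's citations divided by the world average for
    # comparable papers; a value above 1.0 means above-world-average impact.
    ratios = [c / WORLD_BASELINE[(field, year)] for field, year, c in papers]
    return sum(ratios) / len(ratios)

print(round(relative_citation_impact(papers), 2))  # 1.17 for this toy data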

Disclosure

David Glance works for the University of Western Australia, which took part in the ERA and was mentioned in the article in The Australian.

Authors: The Conversation Contributor

Read more http://theconversation.com/the-era-assessed-cost-not-rated-and-league-tables-is-there-a-better-way-to-do-it-51865
