Starting next year, universities have to prove their research has real-world impact
- Written by Pauline Zardo, Data & Policy Research Fellow, Queensland University of Technology
Starting in 2018, Australian universities will be required to prove their research provides concrete benefits for taxpayers and the government, who fund it.
Education Minister Simon Birmingham recently announced the Australian Research Council (ARC) will introduce an Engagement and Impact Assessment. It will run alongside the existing Excellence in Research for Australia (ERA) assessment exercise. This follows a pilot of the Engagement and Impact Assessment run in 2017.
Read more: Pilot study on why academics should engage with others in the community
Until now, research performance assessment has mostly focused on the number of publications, citations and competitive grants won. This new assessment shifts the focus from inputs and outputs to outcomes. It is part of a continuing shift from quantity to quality, which began in earlier iterations of the ERA. The Engagement and Impact Assessment reflects a significant change in thinking about the types of research impact we value and why.
For research to have an impact, it needs to be used or applied in some way. For example, health research aims to have an impact on health outcomes. For that to happen, doctors, nurses and people working in health policy would need to use that research evidence in their practice or policy decision-making.
Despite the initial focus on commercial outcomes, the Engagement and Impact Assessment has evolved to include a range of impact types. It provides an important incentive for researchers in all fields to think about how to engage those outside of academia who can translate their research into real-world impacts. It also enables researchers who were already engaging with research end-users and delivering positive impact to have these outcomes formally recognised for the first time at a national level.
Community input
Including an engagement component recognises researchers are not in direct control of whether their research will actually be used. Industry, government and the community also have an important role in making sure the potential benefits of research are achieved.
The engagement metrics allow universities to demonstrate and be rewarded for engaging industry, government and others in research, even if it doesn’t directly or immediately lead to impact. Case studies were chosen to demonstrate impact because they let researchers describe the important impacts they are achieving that metrics can’t capture.
The case studies will need to include the impact achieved, the beneficiaries and timeframe of the research impact and countries where the impact occurred. They’ll also include what strategies were employed to enable translation of research into real world benefits.
The results will be assessed by a panel of experts for each field of research who will provide a rating of engagement and impact as low, medium or high.
Cultural impacts
The ARC has defined engagement as:
the interaction between researchers and research end-users outside of academia, for the mutually beneficial transfer of knowledge, technologies, methods or resources.
Impact has been defined as:
the contribution that research makes to economy, society and environment and culture beyond the contribution to academic research.
Read more: When measuring research, we must remember that ‘engagement’ and ‘impact’ are not the same thing
The definition of impact has been amended to include “culture”, which was not part of the definition applied in the pilot. This amendment speaks to concerns raised by the academic community around quantifying and qualifying impacts that vary significantly across different academic fields. It’s hard to compare, for example, the impact of an historic exhibition to the impact of astrophysics research on gravitational waves.
It’s also difficult to compare more basic or experimental research with applied research, such as health and well-being programs that can be directly applied in the community. Basic or experimental research can take a long time to lead to a measurable impact.
Classic examples of experimental research that delivered significant economic, health and social impacts it did not specifically set out to achieve include the discovery of penicillin and the development of WiFi.
An addition, not a replacement
The traditional research metrics of grants, publications and citations, which work for basic, experimental and longer-time-to-impact research, are still in play. The Engagement and Impact Assessment has not been tied to funding decisions at this stage.
Read more: Explainer: how and why is research assessed?
A study of the impact case studies submitted to the UK's Research Excellence Framework found high impact scores were correlated with high quality scores. The authors concluded "impact was not being achieved at the expense of research excellence". Previous research has shown research quality is an important enabler of the use of research.
Engagement and impact outcomes for a specific field of research at one university will be assessed against the same field at another university. This is also the case with traditional metrics and grants assessment.
Engagement will be assessed on four key metrics and an engagement narrative. These metrics are focused on funding provided by end-users of research, such as businesses or individuals outside the world of academia who directly use or benefit from the research.

The four metrics are: cash support from end-users (against Higher Education Research Data Collection categories), sponsored grants from end-users, research commercialisation income, and how much income is made per researcher.
The engagement narrative will enable universities to provide detail about how they are engaging with end-users. There is also a list of other engagement indicators universities can draw on to describe their engagement activity.
At times, the value of research has been publicly questioned. The Engagement and Impact Assessment will help the general public better understand the value of the research they fund.