Image: Funding panels have to sift through reams of high-quality applications, and ultimately reject most. Shutterstock

“Why wasn’t my grant funded?”

Given that most research funding agencies have success rates of 20% or less, this is a question asked by the majority of applicants every year. Often the only answer members of assessment panels are allowed to give is unsatisfyingly circular: because the application wasn’t ranked highly enough.

But how do such panels make their decisions? Here’s how it’s supposed to work.

Most panels consist of ten or more experienced researchers with expertise related to the applications they will consider. They are usually given about 100 applications to read, each of 50 pages or more.

A primary spokesperson will lead the discussion of each application at the meeting. Sometimes secondary and tertiary spokespeople are also appointed, to balance the comments of the primary spokesperson and explore issues that may have been missed. Consequently, each panel member will be paying special attention to between 20 and 30 grants (roughly 100 applications, each with two or three spokespeople, divided among ten or more panellists).

Grants may also be sent out to two or more discipline-specific reviewers, who will send back reports on the merits of the projects and views on the quality of the researchers.

These reports are discussed together with the grants when the panel comes together at the week-long assessment meeting.

The primary spokesperson summarises the project’s strong and weak points, its importance, and its feasibility in light of the research team’s achievements. After the secondary and perhaps tertiary spokespeople have also provided comments, there is a general discussion.

Then the Chair calls for scores. The spokespeople will often declare their scores, but the rest of the panel may vote in secret. There is sometimes a proviso that anyone who wishes to score away from the consensus should declare their score; this stops extreme scores from slipping through without discussion.

The scores are then tallied and grants are ranked.

At the end, all the scores are reviewed and the panel considers whether it has scored consistently throughout the week, making adjustments if necessary. This is important because scoring tends to be tougher at the beginning, when people give moderate scores because no one wants to appear too radical. Sometimes a few grants from previous years are considered first to calibrate the panel.
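To make the tallying concrete, here is a minimal sketch in Python of how scores might be aggregated and ranked, with a rough end-of-week drift check. The averaging rule, the 1–7 scale, and all grant names and numbers are assumptions for illustration; the article does not specify how any particular agency tallies its scores.

```python
# A minimal sketch of the tally-and-rank step, assuming scores are
# simply averaged. Real panels may trim outliers or use medians, and
# scales vary by agency; all names and numbers here are hypothetical.
from statistics import mean

# Hypothetical panel scores for each grant (say, on a 1-7 scale),
# listed in the order the grants were discussed during the week.
scores = {
    "grant_A": [6, 6, 5, 7, 6],
    "grant_B": [4, 5, 5, 4, 5],
    "grant_C": [7, 6, 7, 6, 7],
    "grant_D": [5, 5, 6, 5, 4],
    "grant_E": [5, 6, 5, 6, 5],
}

# Tally: average each grant's scores, then rank from highest to lowest.
ranked = sorted(scores, key=lambda g: mean(scores[g]), reverse=True)

# With a 20% success rate, only the top fifth of the list is funded.
cutoff = max(1, round(0.20 * len(ranked)))
print("Ranking:", ranked)
print("Funded:", ranked[:cutoff])

# End-of-week consistency check: compare the average score given to
# grants discussed early in the meeting with those discussed late,
# to flag the early-week toughness described above.
order = list(scores)
half = len(order) // 2
early = mean(s for g in order[:half] for s in scores[g])
late = mean(s for g in order[half:] for s in scores[g])
print(f"Early-half mean {early:.2f} vs late-half mean {late:.2f}")
```

In practice the adjustment step is a judgment call by the panel, not a formula; a drift check like the one above would only flag that a conversation is needed.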

There is always a concern that personal connections may have, or will be perceived to have, influenced the process, so most panels are scrupulous in excluding from the room anyone who has any connection with researchers whose work is being ranked. One can never totally eliminate “who-you-know” biases, but panels try very hard to do just that.

What actually happens

In my experience sitting on various panels, I read all the grants and I am amazed at the quality. I shouldn’t be. These are grants from people who have been successful at every stage of their careers, and the documents have usually been honed further by advice from senior mentors or even internal grant polishing teams made up of academic colleagues.

When I look at the ten grants for which I am primary spokesperson, my heart sinks. I realise that only two or perhaps three of these grants will get funded. I need to find reasons to “not fund” at least some of the grants, so I hunt for some that are hopeless.

Sometimes I can’t find any. Sometimes I find one or two teams who have not really demonstrated significant expertise yet. They may have fewer publications, or ones that are not directly related to the topic in hand, in which case I may regard that as an objective reason to place them near the bottom.

I also look for really stellar ideas. But given I don’t know the disciplines as well as the researchers themselves, and the grants are typically very well presented, all the ideas look great to me!

But at the panel meeting, I find that opinions vary about whether ideas are brilliant or not. Sometimes the most brilliant ideas are the most divisive. So inevitably track record and recent publications tend to count for more.

Working through my pile, I nearly always find two applications where the researchers have recently struck gold and now want to follow the seam and keep harvesting exciting results. None of us thinks that papers in Nature and Science, or other high-impact-factor journals, are everything, but we all recognise that such papers have cleared a high bar of interest and tough reviewing. So if I do see applications with top recent publications, I often rank them near the top.

Now my top two spaces are filled.

And the rest

Suddenly I realise that, with a 20% success rate, at most one more grant in my pile will be funded. With two grants at the top, and perhaps two at the bottom, I now have six left. But they all look good.

Nevertheless, my job is to rank them. I will be inclined to put at the top the grant I understand best, or the one that would be most exciting if the project works, or the one that comes from a group whose record is really impressive. Panels tend to agree on track records, so past performance does count disproportionately.

Interestingly, while the reviewers’ reports can be very helpful, like most researchers I prefer to make my own decisions; the reports are generally only influential when everyone on the panel feels out of their depth on the topic.

Applicants should not be too worried by preliminary referee reports, because panels may well overrule both extremely negative and extremely positive reports. When you see your referee reports each year, be careful not to over-react. All referees feel they have to say some good things and some bad.

The panel meetings always go smoothly, with some lively discussions but few serious disagreements, and there are seldom major surprises.

So what are the problems?

Dealing with rejection

There are several challenges, and all get much worse as the “success rate” falls below 20%.

Lots of people miss out, feel bad and get no real feedback. When the process allows feedback, my comments are usually: “There was nothing wrong with your grant, it just was ranked lower than some unbeatable grants.”

Where I have seen detailed feedback being provided, I often feel it is counter-productive. Applicants can take it too seriously, not realising that a different panel may assess their grant in the next round and may value different things. Researchers should take advice from experienced colleagues, but ultimately most people are best placed to direct their own projects.

Image: Most grant applications are rejected, which can be a painful experience, particularly for early-career researchers. Sean MacEntee/Flickr, CC BY

Another problem is that when someone’s grant is rejected, they feel the system is unfair. There are some biases: high-profile, established groups may have an advantage over early-career newcomers. But people try to be fair, scoring achievements relative to opportunity and taking disadvantage into account.

Sometimes I feel we should have a system like they have at school sport: the Under 15s should compete only against other Under 15s, etc. Such systems are used for assigning fellowships, but seldom project grants.

There is also the problem that the public and our politicians may feel the wrong research is being funded: that panels are prioritising papers over industrial applications and are not valuing the work that society wants.

All I can say is that, in my experience, the funding bodies do their very best to get the smartest and most experienced people into the panel rooms and these people do their very best to pick the best research, taking pretty much everything they can into account. I would also say the achievements of modern science in the past 50 years or so suggest that overall, things are working.

Others say that granting panels tend to be conservative and don’t recognise or reward new, “out-of-the-box” ideas. There will always be an element of truth in this. But top researchers tend to be smart enough to write grants that will appeal to broad panels and then to try their riskiest research on the side and develop it into a proposal only when evidence to support it has accumulated.

Could we save time?

The processes are a huge amount of work. If the success rate is 20%, then 80% of the effort may appear wasted. It is not entirely wasted: writing grants does exercise the mind and having a demonstrably in-depth process is important for academics and for the tax-paying public. But we should all work to streamline grant ranking.

Simple things can reduce the workload: cutting the number of pages required, or “just-in-time” approaches, where information (like a detailed budget) is collected only from the 20% of applicants who are actually awarded a grant.

Other systems, such as early culls where the bottom half of grants do not progress, also reduce the workload but have the obvious drawback that not all grants get a detailed hearing at the panel meeting.

The idea of having less burdensome “expressions of interest” stages, where applicants put in a short preliminary application before investing significantly in their applications, can work. But it can also backfire by drawing out applications from people who might not otherwise apply.

Interestingly, having random deadlines for applications very close to Christmas, or having no deadline at all, can reduce the number of applications. In both cases, those who are self-motivated apply, while those who are entering the game under institutional or professional pressure may not get around to putting grants in. My experience is that there are few such people these days; more importantly, random deadlines count against people with fixed commitments, such as carers.

Overall, we would be better served by strict timelines that did not vary from year to year. Fixed annual submission and outcome-announcement dates would be the optimal system: perhaps all grants in the day before Valentine’s Day and results out on Grand Final day, each and every year (Melbourne Cup Day is too late).

Advice for new players

Most institutions are working harder on mentoring and now provide advice to junior academics. But the best advice is that it will always be tough and that will never change. Even if everyone receives significant mentoring, and grants are elaborately polished, the success rates are likely to remain around 20%.

If more funding becomes available, the number of applications will climb. The opposite is also true: when funding shrinks, applications fall away, which is damaging and wasteful for the sector and for society.

Other important advice is to ask experienced colleagues to read draft applications and take their advice, and get involved in grant reviewing and contributing to panels if you can. This helps people to understand how things are viewed from the other side.

And finally, do get out there on the conference circuit so that you can explain your work in person to potential panel members and reviewers. If someone reads an application having heard your talk they are much more likely to understand it, appreciate its significance and recognise your energy and momentum.

Merlin Crossley works for the University of New South Wales and receives funding from the Australian Research Council and National Health and Medical Research Council. He is a Trustee of the Australian Museum, a board member of the Australian Science Media Centre, the Sydney Institute of Marine Science, UNSW Innovations, UNSW Press, and a Council Member of EMBO Australia.


Read more http://theconversation.com/the-ins-and-outs-of-research-grant-funding-committees-49900
