Daily Bulletin

When to trust (and not to trust) peer reviewed science

Written by Merlin Crossley, Deputy Vice-Chancellor Academic and Professor of Molecular Biology, UNSW

This article is part of our occasional long read series Zoom Out, where authors explore key ideas in science and technology in the broader context of society.

The words “published in a peer reviewed journal” are sometimes considered the gold standard in science. But any professional scientist will tell you that the fact an article has undergone peer review is a long way from an ironclad guarantee of quality.

To know which science you should really trust, you need to weigh the subtle indicators that scientists themselves consider.

Journal reputation

The standing of the journal in which a paper is published is the first thing scientists consider.

Every scientific field is served by broad journals (like Nature, Science and Proceedings of the National Academy of Sciences) and many more specialist journals (like the Journal of Biological Chemistry). But it is important to recognise that hierarchies exist.

Some journals are considered more prestigious, or frankly, better than others. The “impact factor” (which reflects how many citations papers in the journal attract) is one simple, if controversial, measure of the importance of a journal.
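As a rough illustration of how that figure is usually derived (the standard two-year calculation, with invented numbers rather than any real journal’s data): if a journal published 100 papers over the previous two years, and those papers were cited 500 times this year, its impact factor for this year would be 500 ÷ 100 = 5.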

In practice, every researcher carries a mental list of the top relevant journals in their field. When choosing where to publish, each scientist makes their own judgement on how interesting and how reliable their new results are.

If authors aim too high with their target journal, then the editor will probably reject the paper at once on the basis of “interest” (before even considering scientific quality).

If an author aims too low, then they could be selling themselves short – this could represent a missed opportunity for a trophy paper in a top journal that everyone would recognise as significant (if only because of where it was published).

Researchers sometimes talk their paper up in a cover letter to the editor, and aim for a journal one rank above where they expect the manuscript will eventually end up. If their paper is accepted they are happy. If not, they resubmit to a lower ranked, or in the standard euphemism, a “more specialised journal”. This wastes time and effort, but is the reality of life in science.

Neither editors nor authors like to get things wrong. They are weighing up the pressure to break a story with a big headline against the fear of making a mistake. A mistake in this context means publishing a result that becomes quickly embroiled in controversy.

To safeguard against that, three or four peer reviewers (experienced experts in the field) are appointed by the editor to help.

The peer review process

At the time of submitting a paper, the authors may suggest reviewers they believe are appropriately qualified. But the editor will make the final choice, based on their understanding of the field and also on how well and how quickly reviewers respond to the task.

The identity of peer reviewers is usually kept secret so that they can comment freely (but sometimes this means they are quite harsh). The peer reviewers will repeat the job of the editor, and advise on whether the paper is of sufficient interest for the journal. Importantly, they will also evaluate the robustness of the science and whether the conclusions are supported by the evidence.

This is the critical “peer review” step. In practice, though, the level of scrutiny remains connected to the standing of the journal. If the work is being considered for a top journal, the scrutiny will be intense. The top journals seldom accept papers unless they consider them to be not only interesting but also watertight and bulletproof – that is, they believe the result is something that will stand the test of time.

If, on the other hand, the work is going into a little-read journal with a low impact factor, then sometimes reviewers will be more forgiving. They will still expect scientific rigour but are likely to accept some data as inconclusive, provided the researchers point out the limitations of their work.

Knowing this is how the process works, researchers make a mental note of where the work was published whenever they read a paper.

Journal impact factor

Most journals are reliable. But at the bottom of the list in terms of impact lie two types of journals:

  1. respectable journals that publish peer reviewed results that are solid but of limited interest – since they may represent dead ends or very specialist local topics

  2. so-called “predatory” journals, which are more sinister – in these journals the peer review process is either superficial or non-existent, and editors essentially charge authors for the privilege of publishing.

Professional scientists will distinguish between the two partly based on the publishing house, and even the name of the journal.

The Public Library of Science (PLOS) is a reputable publisher, and offers PLOS ONE for solid science – even if it may only appeal to a limited audience.

Springer Nature has launched a similar journal called Scientific Reports. Other good quality journals with lower impact factors include journals of specialist academic societies in countries with smaller populations – they will never reach a large audience but the work may be rock solid.

Predatory journals on the other hand are often broad in scale, published by online publishers managing many titles, and sometimes have the word “international” in the title. They are seeking to harvest large numbers of papers to maximise profits. So names like “The International Journal of Science” should be treated with caution, whereas the “Journal of the Australian Bee Society” may well be reliable (note, I invented these names just to illustrate the point).

The value of a journal vs a single paper

Impact factors have become controversial because they have been overused as a proxy for the quality of single papers. However, strictly applied, they reflect only the interest a journal attracts, and may depend on a few “jackpot” papers that “go viral” in terms of accumulating citations.

Additionally, while papers in higher impact journals may have undergone more scrutiny, there is also more pressure on the editors of these top journals and on the authors who publish in them. This means shortcuts may be taken more often, the last, crucial control experiment may never be done, and the journals end up being less reliable than their reputations imply. This disconnect sometimes generates sniping about how certain journals aren’t as good as they claim to be – which actually keeps everyone on their toes.

While all the controversies surrounding impact factors are real, every researcher knows and thinks about them or other journal ranking systems (SNIP – Source Normalised Impact per Paper, SJR – SCImago Journal Rank, and others) when they are choosing which journal to publish in, which papers to read, and which papers to trust.

Nothing is perfect

Even if everything is done properly, peer review is not infallible. If authors fake their data very cleverly, for example, the fraud may be difficult to detect.

Deliberately faking data is, however, relatively rare. Not because scientists are saints but because it is foolish to fake data. If the results are important, others will quickly try to reproduce and build upon them. If a fake result is published in a top journal it is almost certain to be discovered. This does happen from time to time, and it is always a scandal.

Errors and sloppiness are much more common. This may be related to increasing urgency, the pressure to publish, and the prevalence of large teams where no one person may understand all the science. Again, however, only inconsequential mistakes will survive – errors that matter will quickly be picked up.

Can you trust the edifice that is modern science?

Usually, one can get a feel for how likely it is that a piece of peer reviewed science is solid. This comes from weighing the combined pride and reputations of the authors, the journal editors and the peer reviewers.

So I do trust the combination of the peer review system and the fact that science is built on previous foundations. If those foundations are shaky, the cracks will appear quickly and things will be set straight.

I am also heartened by new opportunities for even better and faster systems that are arising as a result of advances in information technology. These include models for post-publication (rather than pre-publication) peer review. Perhaps this creates a way to formalise discussions that would otherwise happen on Twitter, and that can raise doubts about the validity of published results.

The journal eLife is turning peer review on its head. It’s offering to publish everything it deems to be of sufficient interest, and then letting authors choose whether or not to answer points that are raised in peer review after acceptance of the manuscript. Authors can even choose to refrain from going ahead if they think the peer reviewers’ points expose the work as flawed.

eLife also has a system where reviewers get together and provide a single moderated review, to which their names are appended and which is published. This prevents the problem of anonymity enabling overly harsh treatment.

All in all, we should feel confident that important science is solid (and peripheral science unvalidated) thanks to the peer review, transparency, scrutiny and reproduction of results that science publication involves. Nevertheless, in some fields where reproduction is rare or impossible – long-term studies depending on complex statistical data, for example – it is likely that scientific debate will continue.

But even in these fields, the endless scrutiny by other researchers, together with the proudly guarded reputations of authors and journals, means that even if it will never be perfect, the scientific method remains more reliable than all the others.

Read more: http://theconversation.com/when-to-trust-and-not-to-trust-peer-reviewed-science-99365
