Criteria for funding and promotion lead to bad science

Press release issued: 10 November 2016

Scientists are trained to carefully assess theories by designing good experiments and building on existing knowledge. But there is growing concern that too many research findings may be wrong.

New research conducted by psychologists at the universities of Bristol and Exeter suggests that this may happen because of the criteria that seem to be used in funding science and promoting scientists, which place too much weight on eye-catching findings.

In their paper, published on 10 November in the open-access journal PLOS Biology, the researchers note growing concern among scientists that published results are inaccurate – a recent attempt by 270 scientists to reproduce the findings reported in 100 psychology studies (the Reproducibility Project: Psychology) found that only about 40 per cent could be reproduced.

Professor Marcus Munafò from Bristol's School of Experimental Psychology and Dr Andrew Higginson from Exeter concluded that we shouldn't be surprised by this, because researchers are incentivised to work in a certain way if they want to further their careers.

Their study showed that scientists aiming to advance their careers do best to carry out lots of small, exploratory studies, because these are more likely to produce surprising results. The most prestigious journals publish only highly novel findings, and scientists often win grants and promotions on the strength of a single paper in these journals, which means that these small (but unreliable) studies may be disproportionately rewarded under the current system.

The authors used a mathematical model to predict how an optimal researcher, one trying to maximise the impact of their publications, should spend their research time and effort. Scientific researchers have to decide what proportion of their time to invest in looking for exciting new results rather than in confirming previous findings, and how much resource to invest in each experiment. The model assumed that novel findings are valued much more highly than confirmatory work that checks previous findings.
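The published model is more detailed than can be shown here, but a toy calculation conveys the shape of the trade-off. The sketch below is illustrative only, not the authors' model: it assumes a fixed total participant budget split into equal-sized two-group studies, counts every significant result as one unit of career payoff, and uses made-up values for the effect size, the significance threshold and the proportion of tested hypotheses that are true.

```python
# Toy version of the career-payoff trade-off -- NOT the authors' model.
# Assumptions (all invented for illustration): a fixed participant budget,
# equal-sized two-group studies, a 10% chance any tested hypothesis is true,
# a true effect size of d = 0.3, and a 5% significance threshold.
import numpy as np
from scipy.stats import norm

def power(n_per_group, effect_size=0.3, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test."""
    z_crit = norm.ppf(1 - alpha / 2)
    z_effect = effect_size * np.sqrt(n_per_group / 2)
    return 1 - norm.cdf(z_crit - z_effect)

def expected_payoff(n_per_study, budget=1000, prob_true=0.1, alpha=0.05):
    """Expected number of significant (publishable) results from the budget."""
    n_studies = budget // n_per_study          # smaller studies -> more of them
    p_sig = prob_true * power(n_per_study // 2) + (1 - prob_true) * alpha
    return n_studies * p_sig

for n in (20, 50, 100, 250, 500):
    print(f"n = {n:3d} per study -> expected significant results: "
          f"{expected_payoff(n):.2f}")
```

Under these assumptions the expected number of significant, and hence publishable, results rises as studies get smaller, even though a growing share of those results comes from chance rather than from real effects.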

The model shows that the best thing for career progression is to carry out lots of smaller exploratory studies rather than larger confirmatory ones. Even though each small experiment is less likely to detect a real effect when one exists, running many of them will produce some false positives, which unfortunately are often published too.
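A rough Monte-Carlo illustration of that point (all parameter values are hypothetical, not taken from the paper): given the same total participant budget, many small studies turn up more significant findings overall, but a larger share of them are false positives.

```python
# Monte-Carlo sketch: same participant budget, different study sizes.
# All parameter values are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def run_strategy(n_per_group, budget=20_000, prob_true=0.1,
                 effect=0.3, alpha=0.05):
    n_studies = budget // (2 * n_per_group)
    true_pos = false_pos = 0
    for _ in range(n_studies):
        real = rng.random() < prob_true           # is there an effect to find?
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect if real else 0.0, 1.0, n_per_group)
        if ttest_ind(a, b).pvalue < alpha:
            if real:
                true_pos += 1
            else:
                false_pos += 1
    return n_studies, true_pos, false_pos

for n in (10, 50, 200):
    studies, tp, fp = run_strategy(n)
    print(f"n = {n:3d} per group: {studies:4d} studies, "
          f"{tp} true positives, {fp} false positives")
```

With the smallest studies, false positives typically outnumber true ones in this simulation, which is exactly the concern Dr Higginson raises below.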

Dr Higginson said: “This is an important issue because so much money is wasted doing research from which the results can't be trusted; a significant finding might be just as likely to be a false positive as to actually be measuring a real phenomenon.”

He continued: “While our model doesn't represent how things are done in all the sciences, it is likely to apply across a broad range of science and social science disciplines that involve data collection and hypothesis testing.”

So is there any way to overcome this problem of bad scientific practice? There could be immediate solutions, as Professor Munafò explained: “Journal editors and reviewers could be much stricter about good statistical procedures, such as insisting on large sample sizes and tougher statistical criteria for deciding whether an effect has been found.”
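In practice, that kind of stricter review could lean on standard power-analysis tools. A minimal sketch, with an assumed effect size and power target (neither comes from the paper), showing how a tougher significance threshold pushes the required sample size up:

```python
# Required sample size per group for a two-sample t-test, using statsmodels.
# Effect size (d = 0.3) and the 80% power target are assumed for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.005):       # conventional vs. stricter threshold
    n = analysis.solve_power(effect_size=0.3, alpha=alpha, power=0.8)
    print(f"alpha = {alpha}: about {n:.0f} participants per group")
```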

There are already some encouraging signs – for example, a number of journals are introducing reporting checklists which require authors to state, among other things, how they decided on the sample size they used. Funders are also making similar changes to grant application procedures.

However, the researchers suggest that funding and promotion criteria should be rethought. “The best thing for scientific progress would be a mixture of medium-sized exploratory studies with large confirmatory studies,” said Dr Higginson. “Our work suggests that researchers would be more likely to do this if funding agencies and promotion committees rewarded asking important questions and good methodology, rather than surprising findings and exciting interpretations.”
