Building Blogs of Science

Failure to replicate, spoiled milk and unspilled beans

Posted in Science by kubke on September 6, 2013

Try entering “failure to replicate” in a Google search (or better still, let me do that for you) and you will find no shortage of hits. You can even find a reproducibility initiative. Nature has a whole set of articles on the topic. If you live in New Zealand, you have probably not escaped the news coverage about the botulism bacteria that never was, and you might be among those puzzled about how a lab test could be so “wrong”.

Yet, for scientists working in labs, this issue is commonplace.

Most scientists will acknowledge that reproducing someone else’s published results isn’t always easy. Most will also acknowledge that they would receive little recognition for replicating someone else’s results. They may even add that the barriers to publishing negative results are too high. The bottom line is that there is little incentive to encourage replication, all the more so in a narrowing and highly competitive funding ecosystem.

However, some kind of replication happens almost daily in our labs as we adopt techniques described by others and adapt them to our own studies. A lot of time and money can be wasted when the original article does not provide enough detail on the materials and methods. Sometimes authors (consciously or unconsciously) fail to articulate the domain-specific tacit knowledge behind their procedures, and that may not be easy to resolve. But in other cases, articles simply lack enough detail about the specific reagents used in an experiment, like a catalog number, and this is something we may be able to fix more easily.

Making the experiment’s reagents explicit should be quite straightforward, but apparently it is not, at least according to a new study published in PeerJ*. Vasilevsky and her colleagues surveyed articles from a number of journals and different disciplines and recorded how well the raw materials used in the experiments were documented. In other words, could anyone, relying solely on the information provided in the article, be sure they would be buying the exact same chemical?

Simple enough? Yeah, right.

What their data exposed was a rather sad state of affairs. Based on their sample, they concluded that the reporting of “unique identifiers” for laboratory materials is rather poor: they could unambiguously identify only 56% of the resources. Overall, a little over half of the articles do not give enough information for proper replication. Look:

[Figure: Replicability 1]

But not all research papers are created equal. A breakdown by research discipline and by type of resource shows that some areas, and some types of reagents, do better than others. Papers in immunology, for example, tend to report better than papers in neuroscience.

So, could the journals for immunology be of better quality, or have higher standards, than the journals for neuroscience?

The authors probably knew we would ask that, and they beat us to the punch.

(Note: the impact factor does not seem to matter when it comes to the quality of reporting on materials**.)

What I found particularly interesting was that whether a journal had good reporting guidelines didn’t seem to make much of a difference. It appears the problem is more deeply rooted, seeping through the submission, peer-review and editorial processes. How come neither authors, nor reviewers, nor editors are making sure that the reporting guidelines are followed? (Which, in my opinion, defeats the purpose of having them there in the first place!)

[Figure: Replicability 2]

I am not sure I perform much above average myself (I must confess I am too scared to look!). As authors we may be somewhat blind to how well (or not) we articulate our findings because we are too embedded in the work, missing things that may be obvious to others. Peer reviewers and editors tend to pick up on our blind spots much better than we do. Yet apparently a lot still does not get picked up. Peer reviewers don’t seem to be catching these reporting issues, perhaps because they make assumptions based on what is standard in their particular field of work. Editors may not detect what is missing because they rely on the peer-review process to identify reporting shortcomings, especially when the work is outside their field of expertise. But while I can see how not getting it right can happen, I also see the need to get it right.

While I think all journals should have clear guidelines for reporting materials (the authors developed a set of guidelines that can be found here), Vasilevsky and her colleagues showed that having them in place was not necessarily enough. Checklists similar to those put out by Nature [pdf] to help authors, reviewers and editors might help to minimise the problem.

I would, of course, love to see this study replicated. In the meantime I might have a go at playing with the data myself; a rough sketch of what I have in mind is below.
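For the record, here is the sort of tallying I mean: a minimal sketch in Python that assumes the paper’s resource annotations were exported to a hypothetical CSV (the file name and column names below are my own, not anything taken from the article).

    # Minimal sketch, not the authors' analysis: tally identifiable resources,
    # assuming a hypothetical CSV export ("resources.csv") with columns
    # 'discipline', 'resource_type' and a True/False 'identifiable' column.
    import pandas as pd

    df = pd.read_csv("resources.csv")

    # Overall fraction of resources that could be uniquely identified
    overall = df["identifiable"].mean()
    print(f"Uniquely identifiable overall: {overall:.0%}")

    # Breakdown by discipline and by type of resource
    print(df.groupby("discipline")["identifiable"].mean().sort_values())
    print(df.groupby("resource_type")["identifiable"].mean().sort_values())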

*Disclosure: I am an academic editor, author and reviewer for PeerJ and obtained early access to this article.

** no, I will not go down this rabbit hole

Vasilevsky et al. (2013), On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1:e148; DOI 10.7717/peerj.148
