Building Blogs of Science

Predatoromics of science communication

Posted in Science, Science and Society by kubke on October 4, 2013

CC-BY mjtmail (tiggy) on Flickr

The week ends with a series of articles in Science that make you roll your eyes. These articles explore different aspects of the landscape of science communication, exposing how broken the system can be at times. The increased pressure to publish scientific results to satisfy some assessors’ need to count beans has not come without a heavy demand on the scientific community, which inevitably becomes involved through free editorial and peer review services. For every paper that is published, a number of other scientists take time off their daily work to contribute to the decision of whether the article should be published, in principle by assessing its scientific rigour and quality. In many cases, unless the article is accepted by the first journal it is submitted to, this cycle is repeated. Over. And over. Again. The manuscript is submitted to a new journal, handled by a new editor and most probably reviewed by a new set of peers, and this is iterated as many times as needed until a journal takes the paper in. And then comes the back and forth of the revision process, with modifications to the original article suggested or required through peer review, until eventually the manuscript is published. Somewhere. Number of beans = n+1. Good on’ya!

But what is the cost?

CC-BY Jessica M Cross on Flickr

There just doesn’t seem to be enough time to go through this process with the level of rigour it promises to deliver. The rise of multidisciplinary research means it is unlikely that a single reviewer can assess the entirety of a manuscript. The feedback we receive as editors (or provide as reviewers) can often be incomplete and miss fundamental scientific flaws. There is pressure to publish, and to publish a lot, and to do that (and still have something to publish about) we are tempted to minimise the amount of time we spend in the publication cycle. Marcia McNutt puts it in a nutshell [1]:

For science professionals, time is a very precious commodity.

It is then not surprising that the exhaustion of the scientific community would be exploited by the ‘fast food’ equivalent of scientific communication.

The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment [2]

The same applies to some so-called scientific journals. These “predatory” practices as they have come to be known are exhausting.

Science today published the description of a carefully planned sting. John Bohannon created a spoof paper that he sent to a long list of Open Access journals [3]. The paper should have been rejected had anyone cared enough to assess the quality of the science and base their decision on that. Instead, the manuscript made it through and was accepted by a number of journals (98 journals rejected it, 157 accepted it). That the paper got accepted by more than one journal did not come as a surprise, but where it got interesting for me was when he compared the accepting journals against Beall’s predatory journal list. Jeff Beall helps collate a list of predatory Open Access journals, which at least saves us from having to do even more research when trying to decide where to publish our results or which conferences we might want to attend.

Like Batman, Beall is mistrusted by many of those he aims to protect. “What he’s doing is extremely valuable,” says Paul Ginsparg, a physicist at Cornell University who founded arXiv, the preprint server that has become a key publishing platform for many areas of physics. “But he’s a little bit too trigger-happy.” [3]

What Bohannon’s experiment showed was that 82% of the publishers from Beall’s list that received the spoof paper accepted it for publication. There is no excuse for falling prey to these journals and conferences. “I didn’t know” just won’t cut it for much longer.

As Michael Eisen discusses, even though Bohannon targeted open access journals in his experiment, this lack of rigour is not exclusive to them: it ignores paywalls, impact factors and journal prestige. Which raises the following question:

If the system is so broken, costs so much money in subscriptions and publication fees, and sucks so much out of our productive time – then why on earth should we bother?

Don’t get me wrong – sharing our findings is important. But does it all really have to be peer reviewed from the start? Take Mat Todd’s approach, for example, in the Open Source Malaria project. All the science is out there as soon as it comes out of the pipette tip. When I asked him how this changed the way his research cycle worked, this is what he said:

We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial but it is not the be-all and end-all. The process has been reversed: we first share the data and all the details of the project as it’s going, then when we have finished the project we move to publishing.

Right. Isn’t this what we should all be doing? I didn’t see Mat Todd’s world collapse. There is plenty of opportunity to provide peer review on the project as it moves forward. There is no pressure to write the paper immediately, because the information is already out there. There is no need to take up the time of journal editors and reviewers, because the format of the project lends itself to peer review from anyone who is interested in helping get the science right.

PeerJ offers a preprint publication service:

“By using this service, authors establish precedent; they can solicit feedback, and they can work on revisions of their manuscript. Once they are ready, they can submit their PrePrint manuscript into the peer reviewed PeerJ journal (although it is not a requirement to do so)”

F1000 Research does something similar:

“F1000Research publishes all submitted research articles rapidly […] making the new research findings open for scrutiny by all who want to read them. This publication then triggers a structured process of post-publication peer review […]”

So yes, you can put your manuscript out there and let peers review it at their leisure, when they actually care and when they have the time and focus to do a good job. There is really no hurry to move the manuscript to a peer-reviewed journal (PeerJ or any other) because you have already communicated your results, so you might as well go get an experiment done. And if, as a reviewer, you want credit for your contribution, you can go to Publons and write your review there; if the community thinks you are providing valuable feedback, you will be properly rewarded in the form of a DOI. Try getting that kind of recognition from most journals.

But let’s say you are too busy actually getting science done to write a manuscript at all; then you always have FigShare,

“…a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.”

Because, let’s be honest, other than the bean counters, who else really cares enough about what we publish to justify the amount of nonsense that goes with it?

According to ImpactStory, 20% of the items indexed by Web of Science in 2010 received 4 or fewer PubMed Central citations. So, 4 citations in almost 3 years puts you at the top 20%.

So my question is: Is this nonsense really worth our time?

CC-BY aussiegall on Flickr

[1] McNutt, M. (2013). Improving Scientific Communication. Science, 342(6154), 13–13. doi:10.1126/science.1246449

[2] Stone, R., & Jasny, B. (2013). Scientific Discourse: Buckling at the Seams. Science, 342(6154), 56–57. doi:10.1126/science.342.6154.56

[3] Bohannon, J. (2013). Who’s Afraid of Peer Review? Science, 342(6154), 60–65. doi:10.1126/science.342.6154.60

11 Responses

  1. […] had enough time to write a blog post, and was lucky enough to be able to link to Michael Eisens’ take on the issue before I posted, so […]

  2. Neha Jain said, on October 22, 2013 at 15:18

A great and thoughtful piece! I agree with your points; ongoing peer review will not only let researchers focus on the science, but will also greatly improve the quality of the published research – especially if reviewers are rewarded too.

    • kubke said, on October 22, 2013 at 15:43

Thanks, Neha! Hopefully we are moving in that direction.

  3. Links 10/9/13 | Mike the Mad Biologist said, on October 10, 2013 at 08:41

    […] who share data publicly receive more citations Science gone bad or the day after the sting Predatoromics of science communication (or just put it in ArXiv and move […]

  4. Nachdenken über Open Access | Hapke-Weblog said, on October 10, 2013 at 07:56

[…] the quality of peer review, particularly at open access journals, is open to question (cf. the note and commentary on the current special issue of Science on science communication…), it becomes clear that something well conceived or well intentioned can also have its dark side. […]

  6. Curt Rice (@curtrice) said, on October 5, 2013 at 05:14

    Wonderful piece! And it’s even worse: Is this nonsense seriously what politicians and the general public want us spending our time on?! My thoughts on the current “sting”: What Science — and the Gonzo Scientist — got wrong: open access will make research better http://bit.ly/1f5JAzi

    • kubke said, on October 5, 2013 at 07:59

      Thanks for the link – it is nice to see so many people not letting Science get away with it :). What a great warmup for Open Access Week!

  7. Wikispecies editor (@stho002) said, on October 4, 2013 at 13:00

    Traditional peer review was only ever designed to ensure that the manuscript passed a very minimal standard. It was not designed to check details. Just because something is published in a peer reviewed journal doesn’t mean that it is correct, or even that it is any better than a minimal standard judged without recourse to details.

    • kubke said, on October 4, 2013 at 13:44

      I agree that such is true for [?] cases – I can’t but wonder whether we can get a reliable proportional measure (most, many, some?). I think this is something that I like about the opportunity offered by Publons – we might get an idea of how reliable peer review is overall and perhaps get some indication as to whether specific journals outperform others in the process. I know that I tend to chuck about half of what I read.

      • Wikispecies editor (@stho002) said, on October 4, 2013 at 14:57

But that’s my point – you can’t hope to purge journals of rubbish, and peer review was only really meant to do that in the most obvious and extreme cases. Just as with commercial advertising and the popular media, etc., we are stuck with lots of rubbish. One just has to not uncritically accept anything one reads or is told. There is no magic filter for falsehoods/crap.

