The week ends with a series of articles in Science that make you roll your eyes. These articles explore different aspects of the landscape of science communication, exposing how broken the system can be at times. The increased pressure to publish scientific results to satisfy some assessors’ need to count beans has not come without a heavy demand on the scientific community, which inevitably becomes involved through free editorial and peer review services. For every paper that is published, there are a number of other scientists who take time out of their daily work to contribute to the decision of whether the article should be published or not, in principle by assessing its scientific rigour and quality. In many cases, unless the article is accepted by the first journal it is submitted to, this cycle is repeated. Over. And over. Again. The manuscript is submitted to a new journal, handled by a new editor and most probably reviewed by a new set of peers, iterated as many times as needed until a journal takes the paper in. And then comes the back and forth of the revision process, with modifications to the original article suggested or required through peer review, until eventually the manuscript is published. Somewhere. Number of beans = n+1. Good on’ya!
But what is the cost?
There just doesn’t seem to be enough time to go through this process with the level of rigour it promises to deliver. The rise in multidisciplinary research means it is unlikely that a single reviewer can assess the entirety of a manuscript. The feedback we get as editors (or provide as reviewers) can often be incomplete and miss fundamental scientific flaws. There are pressures to publish, and to publish a lot, and to do that (and still have something to publish about) we are tempted to minimise the amount of time we spend in the publication cycle. Marcia McNutt says it in a nutshell:
For science professionals, time is a very precious commodity.
It is then not surprising that the exhaustion of the scientific community would be exploited with the ‘fast food’ equivalent of scientific communication.
The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment 
The same applies to some so-called scientific journals. These “predatory” practices as they have come to be known are exhausting.
Science published today the description of a carefully planned sting. John Bohannon created a spoof paper that he sent to a long list of Open Access journals. The paper should have been rejected had anyone cared enough to assess the quality of the science and base their decision on that. Instead, the manuscript made it through and was accepted by a number of journals (98 journals rejected it, 157 accepted it). That the paper got accepted by more than one journal did not come as a surprise, but where it got interesting for me was when he compared the accepting journals against Beall’s predatory journal list. Jeff Beall helps collate a list of predatory Open Access journals, which at least saves us from having to do even more research when trying to decide where to publish our results or what conferences we might want to attend.
Like Batman, Beall is mistrusted by many of those he aims to protect. “What he’s doing is extremely valuable,” says Paul Ginsparg, a physicist at Cornell University who founded arXiv, the preprint server that has become a key publishing platform for many areas of physics. “But he’s a little bit too trigger-happy.”
What Bohannon’s experiment showed was that 82% of the publishers from Beall’s list that received the spoof paper accepted it for publication. There is no excuse for falling prey to these journals and conferences. “I didn’t know” just won’t cut it for much longer.
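To put those counts in perspective, the journal tallies from Bohannon’s article imply an overall acceptance rate of roughly 62%; a quick back-of-the-envelope sketch (the 98/157 figures come from the article, the script itself is just illustration):

```python
# Acceptance figures reported in Bohannon's sting: 98 journals rejected
# the spoof paper, 157 accepted it. Note the 82% figure quoted in the
# text applies only to the subset of publishers on Beall's list.
rejected, accepted = 98, 157
total = rejected + accepted
overall_rate = accepted / total  # fraction of all responding journals

print(f"{accepted}/{total} journals accepted the spoof ({overall_rate:.0%})")
```

Which is to say: even before singling out the predatory publishers, well over half of the journals that reached a decision waved the paper through.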
As Michael Eisen discusses, even though Bohannon used open access journals for his experiment, this lack of rigour seems to ignore paywalls, impact factors and journal prestige. Which raises the following question:
If the system is so broken, costs so much money in subscriptions and publication fees, and sucks so much out of our productive time – then why on earth should we bother?
Don’t get me wrong – sharing our findings is important. But does it all really have to be peer reviewed from the start? Take Mat Todd’s approach, for example, from the Open Source Malaria project. All the science is out there as soon as it comes out of the pipette tip. When I asked him how this changed the way his research cycle worked this is what he said:
We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial but it is not the all and all. The process has been reversed, we first share the data and all the details of the project as it’s going, then when we have finished the project we move to publishing.
Right. Isn’t this what we should all be doing? I didn’t see Mat Todd’s world collapse. There is plenty of opportunity to provide peer review on the project as it is moving forward. There is no incentive to write the paper immediately, because the information is out there. There is no need to take up time from journal editors and reviewers because the format of the project lends itself to peer review from anyone who is interested in helping get this right.
PeerJ offers a preprint publication service:
“By using this service, authors establish precedent; they can solicit feedback, and they can work on revisions of their manuscript. Once they are ready, they can submit their PrePrint manuscript into the peer reviewed PeerJ journal (although it is not a requirement to do so)”
F1000 Research does something similar:
“F1000Research publishes all submitted research articles rapidly […] making the new research findings open for scrutiny by all who want to read them. This publication then triggers a structured process of post-publication peer review […]”
So yes, you can put your manuscript out there, let peers review it at their leisure, when they actually care and when they have time and focus to actually do a good job. There is really no hurry to move the manuscript to the peer-reviewed journal (PeerJ or any other) because you have already communicated your results, so you might as well go get an experiment done. And if, as a reviewer, you want any credit for your contribution, then you can go to Publons where you can write your review, and if the community thinks you are providing valuable feedback you will be properly rewarded in the form of a DOI. Try to get that kind of recognition from most journals.
But let’s say you are too busy actually getting science done; then you always have FigShare.
“…a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.”
Because, let’s be honest, other than the bean counters, who else really cares enough about what we publish to justify the amount of nonsense that goes with it?
According to ImpactStory, 20% of the items indexed by Web of Science in 2010 received 4 or fewer PubMed Central citations. So, 4 citations in almost 3 years puts you in the top 20%.
So my question is: Is this nonsense really worth our time?
 McNutt, M. (2013). Improving Scientific Communication. Science, 342(6154), 13–13. doi:10.1126/science.1246449
 Stone, R., & Jasny, B. (2013). Scientific Discourse: Buckling at the Seams. Science, 342(6154), 56–57. doi:10.1126/science.342.6154.56
 Bohannon, J. (2013). Who’s Afraid of Peer Review? Science, 342(6154), 60–65. doi:10.1126/science.342.6154.60
Try entering “failure to replicate” in a Google search (or better still, let me do that for you) and you will find no shortage of hits. You can even find a reproducibility initiative. Nature has a whole set of articles on the topic. If you live in New Zealand you have probably not escaped the coverage in the news about the botulism bacteria that never was, and you might be among those puzzled about how a lab test could be so “wrong”.
Yet, for scientists working in labs, this issue is commonplace.
Most scientists will acknowledge that reproducing someone else’s published results isn’t always easy. Most will also acknowledge that they would receive little recognition for replicating someone else’s results. They may even add that the barriers to publishing negative results are too high. The bottom line is that there is little incentive to encourage replication, even more so in a narrowing and highly competitive funding ecosystem.
However, some kind of replication happens almost daily in our labs as we adopt techniques described by others and try to adapt them to our own studies. A lot of time and money can be wasted when the original article does not provide enough detail on the materials and methods. Sometimes authors (consciously or unconsciously) do not explicitly articulate domain-specific tacit knowledge about their procedures, something which may not be easy to resolve. But in other cases, articles simply lack enough detail about which specific reagents were used in an experiment, like a catalog number, and this is something we may be able to fix more easily.
Making the experiment’s reagents explicit should be quite straightforward, but apparently it is not, at least according to a new study published in PeerJ*. Vasilevsky and her colleagues surveyed articles from a number of journals across different disciplines and recorded how well the raw materials used in the experiments were documented. In other words, could anyone, relying solely on the information provided in the article, be sure they would be buying the exact same chemical?
Simple enough? Yeah, right.
What their data exposed was a rather sad state of affairs. Based on their sample, they concluded that the reporting of “unique identifiers” for laboratory materials is rather poor: they could unambiguously identify only 56% of the resources. Overall, just a little over half of the articles don’t give enough information for proper replication.
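As a toy illustration of the kind of audit involved (the authors’ actual curation was manual and far more nuanced; the regex, supplier name, catalog number and example sentences below are entirely my own hypothetical sketch), one could imagine scanning methods text for catalog-number-style identifiers:

```python
import re

# Hypothetical sketch: flag reagent mentions in methods text that lack
# a catalog-number-style identifier. This is NOT the method used by
# Vasilevsky et al.; the pattern and sentences are made up for
# illustration only.
CATALOG = re.compile(r"cat(?:alog)?\.?\s*(?:no\.?|#)\s*[A-Za-z0-9-]+", re.I)

methods_sentences = [
    "Cells were stained with anti-CD4 antibody (BD Biosciences, cat. no. 550280).",
    "Cells were stained with a commercially available anti-CD4 antibody.",
]

for sentence in methods_sentences:
    status = "identifiable" if CATALOG.search(sentence) else "ambiguous"
    print(f"{status}: {sentence}")
```

The second sentence is exactly the kind of reporting the study flags: a reader could not order the same antibody from that description alone.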
But not all research papers are created equal. A breakdown by research discipline and by type of resource shows that some areas or types of reagent do better than others. Papers in immunology, for example, tend to report better than papers in neuroscience.
So, could the journals for immunology be of better quality, or have higher standards, than the journals for neuroscience?
The authors probably knew we would ask that, and they beat us to the punch.
(Note: Apparently, the IF does not seem to matter when it comes to the quality of reporting on materials**. )
What I found particularly interesting was that whether a journal had good guidelines on reporting didn’t seem to make much of a difference. It appears the problem is more deeply rooted, seeping through the submission, peer review and editorial process. How come neither authors, reviewers nor editors are making sure that the reporting guidelines are followed? (Which in my opinion defeats the purpose of having them there in the first place!)
I am not sure I myself perform much above average (I must confess I am too scared to look!). As authors we may be somewhat blind to how well (or not) we articulate our findings because we are too embedded in the work, missing things that may be obvious to others. Peer reviewers and editors tend to pick up on our blind spots much better than we do, yet apparently a lot still does not get picked up. Peer reviewers don’t seem to be catching these reporting issues; perhaps they make assumptions based on what is standard in their particular field of work. Editors may not detect what is missing because they rely on the peer review process to identify reporting shortcomings, especially when the work is outside their field of expertise. But while I can see how not getting it right can happen, I also see the need to get it right.
While I think all journals should have clear guidelines for reporting materials (the authors developed a set of guidelines that can be found here), Vasilevsky and her colleagues showed that having them in place was not necessarily enough. Checklists similar to those put out by Nature [pdf] to help authors, reviewers and editors might help to minimise the problem.
I would, of course, love to see this study replicated. In the meantime I might give a go at playing with the data.
*Disclosure: I am an academic editor, author and reviewer for PeerJ and obtained early access to this article.
** no, I will not go down this rabbit hole
Vasilevsky et al. (2013), On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1:e148; DOI 10.7717/peerj.148
2012 was a really interesting year for Open Research.
The year started with a boycott of Elsevier (The Cost of Knowledge), soon followed in May by a petition at We The People in the US, asking the US government to “Require free access over the Internet to scientific journal articles arising from taxpayer-funded research.” By June we had The Royal Society publishing a paper on “science as an open enterprise” [pdf], saying:
The opportunities of intelligently open research data are exemplified in a number of areas of science. With these experiences as a guide, this report argues that it is timely to accelerate and coordinate change, but in ways that are adapted to the diversity of the scientific enterprise and the interests of: scientists, their institutions, those that fund, publish and use their work and the public.
The Finch report had a large share of media coverage [pdf]:
Our key conclusion, therefore, is that a clear policy direction should be set to support the publication of research results in open access or hybrid journals funded by APCs. A clear policy direction of that kind from Government, the Funding Councils and the Research Councils would have a major effect in stimulating, guiding and accelerating the shift to open access.
By July the UK government announced the support for the Open Access recommendations from the Finch Report to ensure:
Walk-in rights for the general public, so they can have free access to global research publications owned by members of the UK Publishers’ Association, via public libraries. [and] Extending the licensing of access enjoyed by universities to high technology businesses for a modest charge.
The Research Councils UK joined in by publishing a policy on OA (recently updated) that required [pdf]:
Where the RCUK OA block grant is used to pay Article Processing Charges for a paper, the paper must be made Open Access immediately at the time of online publication, using the Creative Commons Attribution (CC BY) licence.
By the time that Open Access Week came around, there was plenty to discuss. The discussion of Open Access emphasised more strongly the re-use licences under which the work was published. The discussion also included some previous analysis showing that there are benefits from publishing in Open Access that affect economies:
adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in The Netherlands and EUR 480 million in the UK.
And in November, the New Zealand Open Source Awards recognised Open Science for the first time too.
2013 promises not to fall behind
This year offers good opportunities to celebrate local and international advocates of Open Science.
The Obama administration not only responded to last year’s petition by issuing a memorandum geared towards making federally funded research adopt open access policies, but is now also seeking “Outstanding Open Science Champions of Change”. Nominations for this close on May 14, 2013. Simultaneously, the Public Library of Science, Google and the Wellcome Trust, together with a number of allies, are sponsoring the “Accelerating Science Award Program”, which seeks to recognise and reward individuals, groups or projects that have used Open Access scientific works in innovative ways. The deadline for this award is June 15.
Last year Peter Griffin wrote:
The policy shift in the UK will open up access to the work of New Zealand scientists by default as New Zealanders are regularly co-authors on papers paid for by UK Research Councils funds. But hopefully it will also lead to some introspection about our own open access policies here.
There was some reflection at the NZAU Open Research Conference, which led to the Tasman Declaration (which I encourage you to sign), and those of us who were involved are hoping good things will come out of it. While that work continues, I will be revisiting the nominations from last year’s Open Science category of the NZ Open Source Awards to make my nominations for the two awards mentioned above.
I certainly look forward to this year – I will continue to work closely with Creative Commons Aotearoa New Zealand and with NZ AU Open Research to make things happen, and continue to put in my 2 cents as an Academic Editor for PLOS ONE and PeerJ.
There is no question that the voice of Open Access is now loud and clear – and over the last year it has also become a voice that is not only being heard, but is also generating the kinds of responses that will lead to real change.
It is December 3.
It is also the day that PeerJ starts receiving manuscript submissions. I talked about PeerJ before and why I was so enthusiastic about its launch. Over the last while I have been experiencing PeerJ as a user.
Some of us academic editors were able to do some website testing for the article submission site, and I have to say I am impressed. Truth be told, the most painful part of submitting a paper has been, in my experience, being confronted with those horrid manuscript submission sites. When I started working in science there were no computers. We typed (yes, remember the typewriter?) our manuscripts, printed our pictures in the dark room, drew our graphs by hand with Rotring pens and Letraset, and put the lot in an envelope. With a stamp. And walked the envelope to the Post Office.
Then came electronic submission, and it seems that those who designed those sites knew that our high motivation to submit would enable us to endure their sites’, well, unfriendliness (oh, and those dreadful pop-up windows!). They were right. Our motivation to submit a paper is high enough that we overlook the nuisance of the submission system – it is not a factor in the decision of where to submit. I sometimes find myself putting an entire afternoon aside just to upload the files to their system, and I have become accustomed to this; I have been doing it for years. And I know that any submission or editorial task will have to wait until I am at my desktop computer, because navigating those sites on my netbook or my tablet is, well, not worth the effort.
So needless to say, opening up the PeerJ system was nothing more than a yay moment. Finally someone thought about me, me, me.
The first thing I loved was that I just need to log in to my account at PeerJ.com and from there I have the links to whatever I need: my profile, my manuscripts, my reviewer dashboard and my editor dashboard. None of that looking for the email that has the web address for the editorial manager system; even my tired old brain can remember that URL. Even better, I can do that from my netbook, my tablet, my mobile phone, because the site loads really nicely on all my devices. The plus side of this is that when I think about checking something I can just go ahead and do it. Easily.
Submitting the manuscript was a completely new experience. In my opinion they have done a few things right: a good visual (and intuitive) toolbar (text comes up on mouse over) and a hint box at the right of the screen.
As I moved from one page to another, the hint box was always there to answer most of my questions, or send me to the instructions to authors – again, with a really nice and intuitive layout.
I never found myself second guessing what it is what I needed to do, or how to do it. And for that PeerJ deserves a hat tip.
But one of the things that impressed me the most were the requirements under the “Declarations” section. Firstly, the detailed description of the animal ethics approval (not just that your university committee approved it), the request for agreement from people to be acknowledged, the declaration of conflicts of interest and any type of funding, etc. I found it tedious at first, but the more I thought about it, the more it looked like a great step towards better scientific standards. I hope they keep these requirements, and I hope more journals follow suit. And a second hat tip for contacting all of the listed authors to inform them that someone has submitted a manuscript with their name on it. I am shocked that some journals still do not do this!
I am now acting as an academic editor for another manuscript, and the experience from that end is no different. The system is simple and intuitive, which makes my job easier. From an editor’s point of view, what I liked most was the page where I choose and load reviewers. On that page I had the list of reviewers suggested by the authors and those the authors opposed, so there was no need to navigate different windows to get that information. Made a mistake and want to get rid of a reviewer? Just click on the trash can. Also nicely visible on that page are the links to tools to help me find reviewers (JANE, PubMed and Google Scholar). And what was a really nice touch (like the links weren’t enough!) was that clicking on any of those links automatically ran a query for me based on the title and keywords of the article – one less thing for me to do (unless I need to for some reason). So another hat tip for that – and I think that rounds out the hat trick.
Now, what a bright idea – make the system user friendly! You’d think those in the Science Publishing system would have already figured that out, eh?
Every now and then something happens that gets me all excited about what comes next.
Today, it is the launch of PeerJ
Over 10 years ago I was approached by someone at a scientific conference who told me they were launching something that was to be called the Public Library of Science (PLoS), where people could publish their results and make it freely available to anyone, anywhere. The catch: authors paid for the publication cost. I wasn’t sure what to think of it. Yes, I would be totally behind it, and thought the ethos rocked but was not sure how they would get authors to pay for things they would otherwise be able to publish for ‘free’*.
Soon after that I moved to New Zealand and PLoS fell off my radar. Until 2006, when we decided to submit a paper to PLoS Biology. We got a letter back saying that we should instead submit to a new journal they were launching, PLoS ONE, and that is where the paper got published. I immediately fell in love with PLoS ONE. But I had to wait over 3 years to become an Academic Editor, after meeting (I think) Steve Koch at Science Online 2010. Another decision I am proud of.
In 2009 I was visiting family in Minnesota, and decided to delay my return to New Zealand to attend SciBarCamp in Palo Alto. I had just been to my first unconference (KiwiFoo) and decided to give SciBarCamp a go. Best decision I ever made. It was there I first met Peter Binfield (of PLoS ONE fame) and Jason Hoyt (who are responsible for PeerJ). There were many things that were said at that unconference, but I vividly recall Jason’s session on Mendeley and Peter’s session on the future of publishing.
Well, it has been 3 years since then and now is the time for PeerJ.
What is special about it? It does not seem to be ‘another Open Access journal’ but rather a completely different way of thinking about how authors and journals work together to put scientific results out there. It appears to me, from what information I have access to, as a partnership. Scientists pay a membership fee and that allows them to publish there. For free**. In return they commit to providing at least one review a year. Seems like a fair deal. I still find it amazing that in this day and age the majority of published science is ‘read only’ (shocking, I know!), so I am keen to see what the post-publication interaction with the article (and the pre-publication record) will look like.
It is the sense of ‘partnership’ that I am also attracted to (and that got me all excited). I have for some time been wondering whether there should be an ‘Open Science Society’ with its own journals, similar to other societies: a membership fee would subsidise the journal, and everything would be open access. Well, PeerJ is not exactly that, but it comes quite close. I actually like the idea of membership (with its perks) because it makes me, the scientist, care about the journal in a slightly different way. I am not sure whether Peter and Jason had this ‘partnership’ in mind, but it might just end up becoming that. And that might be a huge game-changer.
Well, we’ve come a long way since the first scientific journal was published back in the 1600s, and not much has changed since then, other than the font. PLoS changed the game, and they did it so well that they are now one of the biggest scientific publishers. And now it is the turn of PeerJ.
I have a lot of respect for both Peter Binfield and Jason Hoyt (since I first met them in 2009). And I also see that they have Tim O’Reilly on their governing board (someone who deserves an uninterrupted series of hat tips as well).
So, paraphrasing a SciBarCamp question…
What would scientific publishing look like if it was invented today?
We might just be about to find out.
*Well, we still pay to see the article. And in many cases we pay publishing costs like colour figures, etc. But we tend not to think too much about that. Oh, yes, and of course we transfer our copyright – lest Wikipedia make something interesting with it.
**Different membership levels have different publishing privileges. But you can visit the site to get that nitty gritty.