Building Blogs of Science

PeerJ pulls off a hat trick

Posted in Science by kubke on December 3, 2012

It is December 3.

It is the birthday* of John Backus, Richard Kuhn, Anna Freud, Carlos Juan Finlay, and, why not, Ozzy Osbourne.

It is also the day that PeerJ starts receiving manuscript submissions. I talked about PeerJ before and why I was so enthusiastic about its launch. Over the last while I have been experiencing PeerJ as a user.

Some of us academic editors were able to do some website testing for the article submission site, and I have to say I am impressed. Truth be told, the most painful part of submitting a paper has been, in my experience, being confronted with those horrid manuscript submission sites. When I started working in science there were no computers. We typed (yes, remember the typewriter?) our manuscripts, printed our pictures in the darkroom, drew our graphs by hand with Rotring pens and Letraset, and put the lot in an envelope. With a stamp. And walked the envelope to the Post Office.

Then came electronic submission, and it seems that those who designed those sites knew that our motivation to submit was high enough to make us endure their sites’ unfriendliness (oh, and those dreadful pop-up windows!). They were right. Our motivation to submit a paper is high enough that we overlook the nuisance of the submission system – it is not a factor in the decision of where to submit. I sometimes find myself putting an entire afternoon aside just to upload the files onto their system, and I have become accustomed to this; I have been doing it for years. And I know that any submission or editorial task will have to wait until I am at my desktop computer, because navigating those sites on my netbook or my tablet is, well, not worth the effort.

So needless to say, opening up the PeerJ system was nothing more than a yay moment. Finally someone thought about me, me, me.

The first thing I loved was that I just need to log in to my account at PeerJ.com, and from there I have links to whatever I need: my profile, my manuscripts, my reviewer dashboard and my editor dashboard. None of that looking for the email that has the web address for the editorial manager system; even my tired old brain can remember that URL. Even better, I can do that from my netbook, my tablet or my mobile phone, because the site loads really nicely on all my devices. The plus side of this is that when I think about checking something I can just go ahead and do it. Easily.


Submitting the manuscript was a completely new experience. In my opinion they have done a few things right: a good visual (and intuitive) toolbar (text comes up on mouseover) and a hint box at the right of the screen.


As I moved from one page to another, the hint box was always there to answer most of my questions, or to send me to the instructions to authors – again, with a really nice and intuitive layout.


I never found myself second-guessing what it was I needed to do, or how to do it. And for that PeerJ deserves a hat tip.

But what impressed me the most were the requirements under the “Declarations” section. There are a lot of things there worth noting: the detailed description of the animal ethics approval (not just that your university committee approved it), the request for agreement from people to be acknowledged, the declaration of conflicts of interest and any type of funding, etc. I found it tedious at first, but the more I thought about it, the more it seemed a great step towards better scientific standards. I hope they keep those requirements, and I hope more journals follow suit. And a second hat tip for contacting all of the listed authors to inform them that someone has submitted a manuscript with their name on it. I am still shocked that some journals do not do this!


I am now acting as an academic editor for another manuscript, and the experience from that end is no different. The system is simple and intuitive, which makes my job easier. From an editor’s point of view, what I liked the most was the page where I had to choose and load reviewers. On that page I had the list of reviewers suggested by the authors and those the authors opposed, so there was no need to navigate different windows to get that information. Made a mistake and want to get rid of a reviewer? Just click on the trash can. Also nicely visible on that page are links to tools that help me find reviewers (JANE, PubMed and Google Scholar). And what was a really nice touch (like the links weren’t enough!) was that clicking on any of those links automatically ran a query based on the title and keywords of the article – one less thing for me to do (unless I need to for some reason). So another hat tip for that – and I think that rounds out the hat trick.

Now, what a bright idea – make the system user friendly! You’d think those in the Science Publishing system would have already figured that out, eh?

__________________________________

*http://todayinsci.com/12/12_03.htm


Science gone bad

Posted in Science, Science and Society by kubke on October 5, 2013

or the day after the sting

I got the embargoed copy of the Science magazine article on peer review in Open Access earlier this week, which gave me a chance to read it with tranquility. I have to say I really liked it. It was a clever sting, and it exposed many of the flaws in the peer review system quite well. There was a high rate of acceptance of a piece of work that did not deserve to see the light of day. I also immediately reacted to the fact that the sting had only used Open Access journals, cognizant of how that could be misconstrued as a failure of Open Access, detracting from the real issue, which is peer review.

I had enough time to write a blog post, and was lucky enough to be able to link to Michael Eisen’s take on the issue before I posted, so I did not need to get into the nitty-gritty of why the sting had to be taken for nothing more than what it was: an anecdotal set of events. Because what it was not is a scientific study.

One of the things I found valuable about the sting (or at least my take-home message) was that there is enough information out there to help researchers navigate the Open Access publishing landscape they are so scared of, and it provided some guidance on how to choose good journals. The excuse that there are too many predatory journals to justify not publishing in Open Access is now weaker. It also provided all of us with an opportunity to reflect on the failures of peer review and the value of the traditional publication system.

Or so I thought.

Then the embargo was lifted, and I have been picking up brain bits spilled over Twitter, blogs and other social media as the tsunami of exploding heads started. And as the morning alarm clocks went off and the sun rose in different time zones, new waves of brain bits came along.

By now, I could look at the entire ‘special issue’ and what else was in it. Here is where I see the problem.

There were lots of articles talking about science communication. I could not find one (please someone correct me if I am wrong!) that took on the sting to refocus the discussion in the right direction (that is, on peer review), or to reflect on how Science and the AAAS behind it measure up to the issues they so readily criticise.

I never liked the AAAS – or rather, I began disliking it after I got my first invitation to join in the late 1980s. It seemed that all I needed to do to become a member was send them cash. There was no reason to do that: since nobody was required to endorse me as a “proper scientist”, I could not see what that membership said about me other than having the ability to write a check. I was already doing that with the New York Times, and if I couldn’t put that down in my CV, then neither could I put down my membership with AAAS. Nothing gained, nothing lost, move on.

What I didn’t know back then was that that first letter would be the first in a long (long!) series of identical invitations that would periodically arrive in my mailbox, where they would be quickly disposed of in the rubbish bin in the corner of the room. I am sure one could find plenty of them in the world’s landfills.

“The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment.” (Stone, R., & Jasny, B.)

Wut? Let’s apply the same logic to the AAAS membership: would we consider that predatory behaviour too?

Let’s move on to peer review.

Moving back to the sting. Yes, they sent out a lot of articles. The article in Science seems to me to be delivered from a very high horse, and one with no legs to stand on. Their N is large (perhaps not large enough, but that is beside the point). But to each journal they sent just one (n=1) hoax paper (singular, not plural). I may ask: had they sent, say, 10 hoax papers to each journal, would each journal have accepted all 10, only 5, or perhaps only 1? Because that makes a difference at the individual journal level. If we are going to accept that n=1 is enough to make any informed conclusion about whether a journal is predatory or not, then, well, arsenic life. ‘Nuff said.
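The n=1 point can be made concrete with a back-of-the-envelope calculation. A single accepted hoax tells you very little about a journal’s true acceptance rate for bad papers; the sketch below (my own illustration, not part of the Science piece) uses the standard Wilson score interval to show how wide the uncertainty is at n=1 compared with n=10:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))

# One accepted hoax out of one submission: the journal's true acceptance
# rate could plausibly be anywhere from about 21% to 100%.
print(wilson_interval(1, 1))

# Ten accepted out of ten submissions: the interval narrows considerably.
print(wilson_interval(10, 10))
```

With one trial per journal, accepting the hoax is consistent with a journal that would accept almost anything and with one that accepts bad papers only a fifth of the time, which is exactly why per-journal conclusions from this design are shaky.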

Let’s take a second look at the arsenic paper. n=1. The arsenic paper was so bad that poor Michael Eisen’s head exploded because readers of his blog actually believed he had sent it in as a hoax – I myself even got caught doing a double-take when I started reading his blog post (but I kept on reading!). That’ll teach him for being such a convincing writer.

So, if n=1 is enough, does that mean Science magazine is ready to add their name to the list of journals that don’t meet the mark? I could not, on their issue, find any reflection on that (please someone correct me if I am wrong!).

… and to open access

But the bigger issue, in my view, was what appears to be Science’s position on Open Access. Now, Science is not Nature: Science is the flagship journal of the AAAS, which says it is an organisation “serving science, serving society”. Here are some of their mission bullet points:

Enhance communication among scientists, engineers, and the public;
Promote and defend the integrity of science and its use;
Foster education in science and technology for everyone;
Increase public engagement with science and technology; and

How is any of this better served by having their flagship magazine behind a paywall?

Can they support, with scientific data, the claim that having their flagship journal behind a paywall helps achieve any of those goals? Now those are data I would love to see. Because the biased criticism of Open Access in their “special issue” (please someone correct me if I am wrong!) seems to suggest so. Now, if they can’t provide a scientific argument as to why we should give them so much money to become members or to access their publication, then how are they any different from the “cottage industry” they seem so ready to criticize? Is preying on libraries or readers less bad than preying on authors? If I purchase a “pay per view” article and don’t like it, or it does not contain the data promised by the abstract, do I get my money back? Or do these paywalled journals just take the money and run? Because, as much as I dislike the predatory open access journals, at least they are putting the papers out there so that we can all crowdsource on how much crap they are.

Do I find an issue with their bringing to the attention of their readership the troubled state of the publishing industry? No.

Do I find an issue with some of the articles in the special issue focusing on some of the naughty players in the Open Access landscape? No.

What I do have a problem with, is the apparent lack of reflection on Science’s and AAAS’ own practices (please someone correct me if I am wrong!).

There was an opportunity to step up, and that opportunity was missed. Science might have a shiny coat of wool decorated with double digit impact factors, but I am not buying it.

I am sticking with the New York Times.

(Full disclosure: I am an academic editor for PLOS ONE and PeerJ and the Chair of the Advisory Panel of Creative Commons Aotearoa New Zealand. The views expressed here are purely my own.)

[Updated Oct 5 1:19 to add missing link]


Predatoromics of science communication

Posted in Science, Science and Society by kubke on October 4, 2013

CC-BY mjtmail (tiggy) on Flickr

The week ends with a series of articles in Science that make you roll your eyes. These articles explore different aspects of the landscape of science communication, exposing how broken the system can be at times. The increased pressure to publish scientific results to satisfy some assessors’ need to count beans has not come without a heavy demand on the scientific community, which inevitably becomes involved through free editorial and peer review services. For every paper that is published, there are a number of other scientists who take time off their daily work to contribute to the decision of whether the article should be published or not, in principle by assessing its scientific rigor and quality. In many cases, unless the article is accepted by the first journal it is submitted to, this cycle is repeated. Over. And over. Again. The manuscript is submitted to a new journal, handled by a new editor and most probably reviewed by a new set of peers, iterated as many times as needed until a journal takes the paper in. And then comes the back and forth of the revision process, modifications to the original article suggested or required through peer review, until eventually the manuscript is published. Somewhere. Number of beans = n+1. Good on ya!

But what is the cost?

CC-BY Jessica M Cross on Flickr

There just doesn’t seem to be enough time to go through this process with the level of rigor it promises to deliver. The rise of multidisciplinary research means it is unlikely that a single reviewer can assess the entirety of a manuscript. The feedback we get as editors (or provide as reviewers) can often be incomplete and miss fundamental scientific flaws. There are pressures to publish, and to publish a lot, and to do that (and still have something to publish about) we are tempted to minimise the amount of time we spend in the publication cycle. Marcia McNutt puts it in a nutshell [1]:

For science professionals, time is a very precious commodity.

It is then not surprising that the exhaustion of the scientific community would be exploited with the ‘fast food’ equivalent of scientific communication.

The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment  [2]

The same applies to some so-called scientific journals. These “predatory” practices as they have come to be known are exhausting.

Science published today the description of a carefully planned sting. John Bohannon created a spoof paper that he sent to a long list of Open Access journals [3]. The paper should have been rejected, had anyone cared enough to assess the quality of the science and base their decision on that. Instead, the manuscript made it through and was accepted by a number of journals (98 journals rejected it, 157 accepted it). That the paper got accepted by more than one journal did not come as a surprise, but where it got interesting to me was when he compared the accepting journals against Beall’s predatory journal list. Jeff Beall collates a list of predatory Open Access journals, which at least saves us from having to do even more research when trying to decide where to publish our results or what conferences we might want to attend.

Like Batman, Beall is mistrusted by many of those he aims to protect. “What he’s doing is extremely valuable,” says Paul Ginsparg, a physicist at Cornell University who founded arXiv, the preprint server that has become a key publishing platform for many areas of physics. “But he’s a little bit too trigger-happy.” [3]

What Bohannon’s experiment showed was that 82% of the publishers from Beall’s list that received the spoof paper accepted it for publication. There is no excuse for falling prey to these journals and conferences. “I didn’t know” just won’t cut it for much longer.
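For what it’s worth, the headline numbers quoted above (98 rejections, 157 acceptances) make the overall acceptance rate easy to compute. A quick sanity check (my own arithmetic, not part of Bohannon’s analysis):

```python
# Figures as quoted in the post: 98 journals rejected the spoof paper,
# 157 accepted it.
rejected, accepted = 98, 157
total = rejected + accepted
overall_rate = accepted / total

# Roughly three out of five journals that reached a decision accepted
# a paper that should never have passed review.
print(f"{accepted}/{total} journals accepted the spoof: {overall_rate:.0%}")
```

Since acceptance among Beall-listed publishers (82%) sits well above this overall rate, acceptance among the remaining journals must have been lower, though still far from zero.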

As Michael Eisen discusses, even though Bohannon used open access journals for his experiment, this lack of rigour seems to ignore paywalls, impact factors and journal prestige. Which raises the following question:

If the system is so broken, costs so much money in subscriptions and publication fees, and sucks so much out of our productive time – then why on earth should we bother?

Don’t get me wrong – sharing our findings is important. But does it all really have to be peer reviewed from the start? Take Mat Todd’s approach, for example, in the Open Source Malaria project. All the science is out there as soon as it comes out of the pipette tip. When I asked him how this changed the way his research cycle worked, this is what he said:

We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial but it is not the all and all. The process has been reversed, we first share the data and all the details of the project as it’s going, then when we have finished the project we move to publishing.

Right. Isn’t this what we should all be doing? I didn’t see Mat Todd’s world collapse. There is plenty of opportunity to provide peer review on the project as it moves forward. There is no rush to write the paper, because the information is already out there. There is no need to take up journal editors’ and reviewers’ time, because the format of the project lends itself to peer review from anyone who is interested in helping get this right.

PeerJ offers a preprint publication service:

“By using this service, authors establish precedent; they can solicit feedback, and they can work on revisions of their manuscript. Once they are ready, they can submit their PrePrint manuscript into the peer reviewed PeerJ journal (although it is not a requirement to do so)”

F1000 Research does something similar:

“F1000Research publishes all submitted research articles rapidly […] making the new research findings open for scrutiny by all who want to read them. This publication then triggers a structured process of post-publication peer review […]”

So yes, you can put your manuscript out there and let peers review it at their leisure, when they actually care and when they have the time and focus to do a good job. There is really no hurry to move the manuscript to a peer-reviewed journal (PeerJ or any other) because you have already communicated your results, so you might as well go get an experiment done. And if, as a reviewer, you want credit for your contribution, you can go to Publons, write your review there, and if the community thinks you are providing valuable feedback you will be properly rewarded in the form of a DOI. Try to get that kind of recognition from most journals.

But let’s say you are too busy actually getting science done – then you always have FigShare.

“…a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.”

Because, let’s be honest, other than the bean counters, who else really cares enough about what we publish to justify the amount of nonsense that goes with it?

According to ImpactStory, only 20% of the items indexed by Web of Science in 2010 received 4 or more PubMed Central citations. So, 4 citations in almost 3 years puts you in the top 20%.
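Claims like “4 citations puts you in the top 20%” are just statements about where a count falls in the citation distribution. A toy sketch with made-up counts (purely illustrative, not ImpactStory’s or Web of Science data):

```python
# Made-up citation counts for 100 hypothetical articles: most items
# receive very few citations, a handful receive many.
counts = ([0] * 30 + [1] * 20 + [2] * 15 + [3] * 15 +
          [4] * 5 + [6] * 7 + [10] * 5 + [30] * 3)

def share_at_or_above(counts, threshold):
    """Fraction of items with at least `threshold` citations."""
    return sum(1 for c in counts if c >= threshold) / len(counts)

# In this toy distribution, 4+ citations puts an article in the top 20%.
print(share_at_or_above(counts, 4))
```

The shape matters more than the exact numbers: when a distribution is this skewed, a seemingly small citation count can still land an article near the top.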

So my question is: Is this nonsense really worth our time?

CC-BY aussiegall on Flickr

[1] McNutt, M. (2013). Improving Scientific Communication. Science, 342(6154), 13. doi:10.1126/science.1246449

[2] Stone, R., & Jasny, B. (2013). Scientific Discourse: Buckling at the Seams. Science, 342(6154), 56–57. doi:10.1126/science.342.6154.56

[3] Bohannon, J. (2013). Who’s Afraid of Peer Review? Science, 342(6154), 60–65. doi:10.1126/science.342.6154.60

Failure to replicate, spoiled milk and unspilled beans

Posted in Science by kubke on September 6, 2013

Try entering “failure to replicate” in a google search (or better still, let me do that for you) and you will find no shortage of hits. You can even find a reproducibility initiative. Nature has a whole set of articles on the topic. If you live in New Zealand you have probably not escaped the coverage in the news about the botulism bacteria that never was, and you might be among those puzzled about how a lab test could be so “wrong”.

Yet, for scientists working in labs, this issue is commonplace.

Most scientists will acknowledge that reproducing someone else’s published results isn’t always easy. Most will also acknowledge that they would receive little recognition for replicating someone else’s results. They may even add that the barriers to publishing negative results are too high. The bottom line is that there is little incentive to encourage replication, more so in a narrowing and highly competitive funding ecosystem.

However, some kind of replication happens almost daily in our labs as we adopt techniques described by others and adapt them to our own studies. A lot of time and money can be wasted when the original article does not provide enough detail on the materials and methods. Sometimes authors (consciously or unconsciously) do not explicitly articulate domain-specific tacit knowledge about their procedures, something which may not be easy to resolve. But in other cases, articles simply lack enough detail about which specific reagents were used in an experiment, like a catalog number, and this is something we may be able to fix more easily.

Making the experiment’s reagents explicit should be quite straightforward, but apparently it is not, at least according to the new study published in PeerJ*. Vasilevsky and her colleagues surveyed articles in a number of journals and from different disciplines and recorded how well the raw materials used in the experiments were documented. In other words, could anyone, relying solely on information provided in the article, be sure they would be buying the exact same chemical?

Simple enough? Yeah, right.

What their data exposed was a rather sad state of affairs. Based on their sample they concluded that the reporting of “unique identifiers” for laboratory materials is rather poor: they could only unambiguously identify 56% of the resources. In other words, nearly half of the resources are not reported with enough information for proper replication. Look:


But not all research papers are created equal. A breakdown by research discipline and by type of resource shows that some areas or types of reagents do better than others. Papers in immunology, for example, tend to report better than papers in neuroscience.

So, could the journals for immunology be of better quality, or have higher standards, than the journals for neuroscience?

The authors probably knew we would ask that, and they beat us to the punch.

(Note: Apparently, the IF does not seem to matter when it comes to the quality of reporting on materials**. )

What I found particularly interesting was that whether a journal had good reporting guidelines didn’t seem to make much of a difference. It appears the problem is more deeply rooted, seeping through the submission, peer review and editorial processes. How come neither authors, reviewers nor editors are making sure that the reporting guidelines are followed? (Which in my opinion defeats the purpose of having them there in the first place!)


I am not sure I perform much above average myself (I must confess I am too scared to look!). As authors we may be somewhat blind to how well (or not) we articulate our findings because we are too embedded in the work, missing things that may be obvious to others. Peer reviewers and editors tend to pick up on our blind spots much better than we do, yet apparently a lot still does not get picked up. Peer reviewers may not be picking up on these reporting issues because they make assumptions based on what is standard in their particular field of work. Editors may not detect what is missing because they rely on the peer-review process to identify reporting shortcomings, especially when the work is outside their field of expertise. But while I can see how not getting it right can happen, I also see the need to get it right.

While I think all journals should have clear guidelines for reporting materials (the authors developed a set of guidelines that can be found here), Vasilevsky and her colleagues showed that having them in place was not necessarily enough. Checklists similar to those put out by Nature [pdf] to help authors, reviewers and editors might help to minimise the problem.

I would, of course, love to see this study replicated. In the meantime I might give a go at playing with the data.

*Disclosure: I am an academic editor, author and reviewer for PeerJ and obtained early access to this article.

** no, I will not go down this rabbit hole

Vasilevsky et al. (2013), On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1:e148; DOI 10.7717/peerj.148

[Open] Science Sunday – 19-5-13

Posted in Science, Science and Society by kubke on May 19, 2013

2012 was a really interesting year for Open Research.

The year started with a boycott of Elsevier (The Cost of Knowledge), soon followed in May by a petition at We The People in the US asking the US government to “Require free access over the Internet to scientific journal articles arising from taxpayer-funded research.” By June we had the Royal Society publishing a report on “science as an open enterprise” [pdf], saying:

The opportunities of intelligently open research data are exemplified in a number of areas of science. With these experiences as a guide, this report argues that it is timely to accelerate and coordinate change, but in ways that are adapted to the diversity of the scientific enterprise and the interests of: scientists, their institutions, those that fund, publish and use their work and the public.

The Finch report had a large share of media coverage [pdf]:

Our key conclusion, therefore, is that a clear policy direction should be set to support the publication of research results in open access or hybrid journals funded by APCs. A clear policy direction of that kind from Government, the Funding Councils and the Research Councils would have a major effect in stimulating, guiding and accelerating the shift to open access.

By July the UK government announced the support for the Open Access recommendations from the Finch Report to ensure:

Walk-in rights for the general public, so they can have free access to global research publications owned by members of the UK Publishers’ Association, via public libraries. [and] Extending the licensing of access enjoyed by universities to high technology businesses for a modest charge.

The Research Councils UK joined in by publishing a policy on OA (recently updated) that required [pdf]:

Where the RCUK OA block grant is used to pay Article Processing Charges for a paper, the paper must be made Open Access immediately at the time of online publication, using the Creative Commons Attribution (CC BY) licence.

Open Access Definition Cards and Buttons

CC-BY-NC-SA Jen Waller on Flickr

By the time Open Access Week came around, there was plenty to discuss. The discussion of Open Access put a stronger emphasis on the re-use licences under which work was published. It also included some earlier analysis showing that publishing in Open Access has benefits for whole economies:

adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in The Netherlands and EUR 480 million in the UK.

And in November, the New Zealand Open Source Awards recognised Open Science for the first time too.

2013 promises not to fall behind

This year offers good opportunities to celebrate local and international advocates of Open Science.

The Obama administration not only responded to last year’s petition by issuing a memorandum geared towards making Federally funded research adopt open access policies, but is now also seeking “Outstanding Open Science Champions of Change”. Nominations for this close on May 14, 2013. Simultaneously, The Public Library of Science, Google and the Wellcome Trust, together with a number of allies, are sponsoring the “Accelerating Science Award Program”, which seeks to recognise and reward individuals, groups or projects that have used Open Access scientific works in innovative ways. The deadline for this award is June 15.

Last year Peter Griffin wrote:

The policy shift in the UK will open up access to the work of New Zealand scientists by default as New Zealanders are regularly co-authors on papers paid for by UK Research Councils funds. But hopefully it will also lead to some introspection about our own open access policies here.

There was some reflection at the NZAU Open Research Conference, which led to the Tasman Declaration (which I encourage you to sign), and those of us who were involved in it are hoping good things will come out of it. While that work continues, I will be revisiting the nominations in last year’s Open Science category of the NZ Open Source Awards to make my nominations for the two awards mentioned above.

I certainly look forward to this year. I will continue to work closely with Creative Commons Aotearoa New Zealand and with NZ AU Open Research to make things happen, and continue to put in my 2 cents as an Academic Editor for PLOS ONE and PeerJ.

There is no question that the voice of Open Access is now loud and clear – and over the last year it has also become a voice that is not only being heard, but is also generating the kinds of responses that will lead to real change.
