We put a man on the moon about half a century ago yet we still haven’t solved the problem of access to the scientific literature.
I was invited to speak at the New Zealand Association of Scientists meeting this year. The theme was “Science and Society” and I was asked to speak about Open Access from that perspective.
The timing was really good. Lincoln University published their Open Access Policy last year, Waikato University released their Open Access mandate a couple of weeks ago, and the University of Auckland is examining their position around Open Access. New Zealand is catching up.
I opened my talk by referring to the New Zealand Education Act, which outlines the role of universities:
“…a university is characterised by a wide diversity of teaching and research, especially at a higher level, that maintains, advances, disseminates, and assists the application of, knowledge, develops intellectual independence, and promotes community learning”
[New Zealand Education Act (1989) Section 162.4.b.iii] (emphasis mine)
I argued that those values could best be met by making research outputs available under Open Access as defined by the Budapest Open Access Initiative, that is, not limited to “access” but, equally importantly, allowing re-use.
After summarising the elements of the Creative Commons licences that can support Open Access publishing, I invited the audience to have an open conversation with their communities of practice to examine the value each places on how we share the results of our work.
My position is that the more broadly we disseminate our findings, the more likely we are to achieve the goals set out by the NZ Education Act: to maintain, advance, and assist in the application of knowledge, develop intellectual independence, and promote community learning. I also believe this is what should be rewarded in academic circles. I think that, as a community, we should move away from looking for value in the branding of the research article (i.e., where it is published) and focus instead on measuring the actual quality and impact of the research within and outside the academic community.
How do we measure quality and impact?
At times I feel we have become lazy. We often stick to using the impact factor as a proxy for quality instead of interrogating the research outputs to understand their contribution and impact. The impact factor may be an easy metric – but it does not in any way measure the quality or impact of an individual article, let alone of the researchers who authored it. It is just an easy way out, a number we can quickly look at so we can tick the right box. As a metric it is easy, quick and objective. As a metric of the value of an individual piece of work it is also useless and, because of that, it inevitably brings unfairness into research assessment.
What does this have to do with OA?
By the end of the conference I couldn’t shake the thought that the barriers to Open Access may not be financial, and that publication fees may be the least of our problems. (This issue of cost just keeps coming up.) I can’t help but wonder whether the cost of Open Access might just be a red herring that lets us avoid the real (and bigger) issue: quality assessment. Open Access may help our articles reach a wider audience but, except for a few titles, Open Access journals are not recognisable brands. If we are forced to stop looking at the “journal brand” we will be forced to assess individual articles for their intrinsic value and impact. And, although that may lead to better, more valid assessment, it is also a big and difficult job.
A lot of what was said today at the conference revolved around the value of New Zealand science (and scientists) to society and the importance of science communication. We spoke about the importance of evidence-based policy, the need to be the critic and conscience of society, and the challenges of working with the public to build trust in scientific evidence despite its uncertainties. We expect politicians and society to do the hard job of making decisions based on evidence. I couldn’t help but ask whether we, as a community of scientists, can live up to those standards.
Can we ditch the bad and easy for the good and hard?
We put a man on the moon. The issues around open access and research assessment must surely be easier to solve. Are we ready to put our money where our mouth is?
What do brain machine interfaces and Open Science have in common?
They are two examples of concepts that I never thought I would get to see materialised in my lifetime. I was wrong.
I had heard of the idea of Open Access around the time the Public Library of Science was about to launch (or was in its early infancy). It was about then that I moved to New Zealand, where I was not able to go to conferences as frequently as I had in the USA, and couldn’t afford an internet connection at home. Email communication (especially when limited to work hours) does not promote the same kind of chitter-chatter you might have while waiting in the queue for your coffee – and so my work moved along, somewhat oblivious to what would later become a big focus for me: Open Science.
About 6 years after moving to New Zealand, things changed. Over a coffee with Nat Torkington, I became aware of examples of people working in science who were embracing a more open attitude. This conversation had a big impact on me. Someone I had never met before described to me a whole different way of doing science. It resonated (strongly) because what he described were the ideals I had at the start of my journey; ideals that had slowly been eroded by the demands of the system around me. By 2009 I had found a strong group of people internationally who were working to make this happen, and who inspired me to try to do something locally. And the rest is history.
What resonated with me about “Open Science” is the notion that knowledge is not ours to keep – that it belongs in the public domain, where it can be a driver for change. I went to a fee-free university and we fought hard to keep it that way. Knowledge was a right and sharing knowledge was our duty. I moved along my career in parallel with shrinking funding pots and a trend towards academic commodification. The publish-or-perish mentality, the fear of being back-stabbed if one shares too early or too often, the idea of the research article placed in the “well-branded” journal, and the “paper” as a measure of one’s worth as a scientist all conspire to keep us from exploring open collaborative spaces. The world I walked into around 2009 was seeking to do away with all this nonsense. I have tried to listen and learn as much as I can; sometimes I have even dared to put in my 2 cents or ask questions.
How to make it happen?
The biggest hurdle I have found is that I don’t do my work in isolation. As much as I might want to embrace Open Science, when the work is collaborative I am not the one who makes the final call. In a country as small as New Zealand it is difficult to find critical mass at the intersection of my research interests (and knowledge) and the desire to work in the open. If you want to collaborate with the best, you may not be able to be picky about a shared ethos. This is particularly true for those struggling to build a career and get a permanent position: the advice of those at the hiring table will always sound louder.
The reward system seems at times to be stuck in a place where incentives are (at all levels) stacked against Open Science; “rewards” are distributed at the “researcher” level. Open Research is about a solution to a problem, not about someone’s career advancement (although that should come as a side-effect). It is not surprising, then, how little value is placed on whether one’s science can be replicated or re-used. Once the paper is out and the bean drops in the jar, our work is done. I doubt that staffing committees or those evaluating us will even care to pull out those research outputs and read them to assess their value – if they did, we would not need things like impact factors, the h-index and the rest. And here is the irony – we struggle to brand our papers to satisfy a rewards system that will never look beyond the title. At the same time, those who care about the content and want to reuse it are limited by whichever restrictions we chose to put in place at the time of publishing.
So what do we do?
I think we need to be sensitive to the struggle of those who might want to embrace Open Science but are trying to negotiate the assessment requirements of their careers. Perhaps getting more people who embrace these principles onto university staffing and research committees might at least provide the opportunity to ask the right questions about “value”, and at the right time. If we can get more open-minded stances at the hiring level, that will go far in changing people’s attitudes at the bench.
I, for one, find myself in a relatively good position. My continuation was approved a few weeks ago, so I won’t need to face the staffing committee except for promotion. A change in title might be nice – but it is not a deal-breaker like tenure is. I have tried to open my workflow in the past, learned enough from the experience, and will keep trying until I get it right. I am slowly seeing a shift in my colleagues’ attitudes – less rolling of eyes, a bit more curiosity. For now, let’s call that progress.
I have come to meet in person many of those who inspired me through online discussions since 2009, and they have always provided useful advice but, more importantly, support. Turning my workflow to “Open” has been as hard as I anticipated. I have failed more than I have succeeded, but I have always learned something from the experience. And one question that keeps me going is:
What did the public give you the money for?
or the day after the sting
I got the embargoed copy of the Science magazine article on peer review in Open Access earlier this week, which gave me a chance to read it in tranquility. I have to say I really liked it. It was a cool sting, and it exposed many of the flaws in the peer review system. And it did that quite well. There was a high rate of acceptance of a piece of work that did not deserve to see the light of day. I also immediately reacted to the fact that the sting had only used Open Access journals – cognizant of how that could be misconstrued as a failure of Open Access, detracting from the real issue, which is peer review.
I had enough time to write a blog post, and was lucky enough to be able to link to Michael Eisen’s take on the issue before I posted, so I did not need to get into the nitty-gritty of why the sting had to be taken for nothing more than what it was – an anecdotal set of events. Because what it was not is a scientific study.
One of the things I found valuable about the sting (or at least my take-home message) was that there is enough information out there to help researchers navigate the Open Access publishing landscape they are so scared of, including information on how to choose good journals. The excuse that there are too many predatory journals to justify not publishing in Open Access is now weaker. The sting also provided all of us with an opportunity to reflect on the failures of peer review and the value of the traditional publication system.
Or so I thought.
Then the embargo was lifted, and I have been picking up the brain bits spilled over Twitter, blogs and other social media as the tsunami of exploding heads started. And as morning alarm clocks went off with the sunrise in different time zones, new waves of brain bits came along.
By now, I could look at the entire ‘special issue’ and what else was in it. Here is where I see the problem.
There were lots of articles talking about science communication. I could not find one (please someone correct me if I am wrong!) that took on the sting to refocus the discussion in the right direction (that is, on peer review), or to reflect on how Science, and the AAAS behind it, measure up to the issues they so readily seemed to criticise.
I never liked the AAAS – or rather, I began disliking it after I got my first invitation to join in the late 1980s. It seemed that all I needed to do to become a member was send them cash. There was no reason to do that: without requiring anyone to endorse me as a “proper scientist”, the membership said nothing about me other than that I had the ability to write a check. I was already doing that with the New York Times, and if I couldn’t put that down in my CV, then neither could I put down a membership with the AAAS. Nothing gained, nothing lost, move on.
What I didn’t know at the time was that that first letter would be the first in a long (long!) series of identical invitations that would periodically arrive in my mailbox, where they would be quickly disposed of in the rubbish bin in the corner of the room. I am sure one could find plenty of them in the world’s landfills.
“The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment.” (Stone, R., & Jasny, B.)
Wut? Let’s apply the same logic to the AAAS membership – Would we consider that predatory behaviour too?
Let’s move on to peer review.
Moving back to the sting. Yes, they sent out a lot of articles. The article in Science seems to me to be delivered from a very high horse, and one with no legs to stand on. Their N is large (perhaps not large enough, but that is beside the point). But to each journal they sent just one (n=1) hoax paper (singular, not plural). I might ask – had they sent, say, 10 hoax papers to each journal, would each journal have accepted all 10, only 5, or perhaps only 1? Because that makes a difference at the individual journal level. If we are going to accept that n=1 is enough to make any informed conclusion about whether a journal is predatory or not, then, well, arsenic life. ’Nuff said.
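The n=1 point can be made concrete with a throwaway simulation (mine, not from the Science article; the acceptance probabilities are invented for illustration). With a single submission per journal, the observed outcome is always 0% or 100%, so a journal that waves junk through half the time is indistinguishable from one that always, or never, does:

```python
import random

random.seed(1)

def observed_rate(true_p, n_submissions):
    """Send n_submissions hoax papers to a journal whose true
    per-paper acceptance probability is true_p; return the
    fraction observed to be accepted."""
    accepted = sum(random.random() < true_p for _ in range(n_submissions))
    return accepted / n_submissions

# With one submission the observed rate is always 0.0 or 1.0,
# regardless of the journal's true behaviour; with ten submissions
# the estimate starts to track the true probability.
for true_p in (0.1, 0.5, 0.9):
    one_shot = [observed_rate(true_p, 1) for _ in range(5)]
    ten_shot = observed_rate(true_p, 10)
    print(f"true p={true_p}: five n=1 trials -> {one_shot}, one n=10 trial -> {ten_shot}")
```

Each n=1 trial collapses to all-or-nothing, which is exactly why a single accepted hoax says little about any individual journal.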
Let’s take a second look at the arsenic paper. n=1. The arsenic paper was so bad that poor Michael Eisen’s head exploded because readers of his blog actually believed he had sent it in as a hoax – I myself did a double-take when I started reading his blog post (but I kept on reading!). That’ll teach him for being such a convincing writer.
So, if n=1 is enough, does that mean Science magazine is ready to add its name to the list of journals that don’t meet the mark? I could not find, in their issue, any reflection on that (please someone correct me if I am wrong!).
… and to open access
But the bigger issue, in my view, was what appears to be Science’s position on Open Access. Now, Science is not Nature. Science is the flagship journal of the AAAS. The AAAS says it is an organisation “serving science, serving society”. Here are some of their mission bullet points:
Enhance communication among scientists, engineers, and the public;
Promote and defend the integrity of science and its use;
Foster education in science and technology for everyone;
Increase public engagement with science and technology.
How is any of this better served by having their flagship magazine behind a paywall?
Can they support, with scientific data, the claim that having their flagship journal behind a paywall helps achieve any of those goals? Now those are data I would love to see. Because their special issue’s biased criticism (please someone correct me if I am wrong!) of Open Access seems to suggest they can. If they can’t provide a scientific argument as to why we should give them so much money to become members or to access their publication, then how are they any different from the “cottage industry” they seem so ready to criticize? Is preying on libraries or readers less bad than preying on authors? If I purchase a “pay per view” article and don’t like it, or it does not contain the data promised by the abstract, do I get my money back? Or do these paywalled journals just take the money and run? Because, as much as I dislike the predatory Open Access journals, at least they are putting the papers out there so that we can all crowdsource on how much crap they are.
Do I have an issue with them bringing to the attention of their readership the troubled state of the publishing industry? No.
Do I have an issue with some of the articles in the special issue focusing on some of the naughty players in the Open Access landscape? No.
What I do have a problem with is the apparent lack of reflection on Science’s and the AAAS’ own practices (please someone correct me if I am wrong!).
There was an opportunity to step up, and that opportunity was missed. Science might have a shiny coat of wool decorated with double digit impact factors, but I am not buying it.
I am sticking with the New York Times.
[Updated Oct 5 1:19 to add missing link]
The week ends with a series of articles in Science that make you roll your eyes. These articles explore different aspects of the science communication landscape, exposing how broken the system can be at times. The increased pressure to publish scientific results to satisfy some assessors’ need to count beans has not come without a heavy demand on the scientific community, which inevitably becomes involved through free editorial and peer review services. For every paper that is published, a number of other scientists take time out of their daily work to contribute to the decision of whether the article should be published, in principle by assessing its scientific rigour and quality. In many cases, unless the article is accepted by the first journal it is submitted to, this cycle is repeated. Over. And over. Again. The manuscript is submitted to a new journal, handled by a new editor and most probably reviewed by a new set of peers, iterated as many times as needed until a journal takes the paper in. And then comes the back and forth of the revision process – modifications to the original article suggested or required through peer review – until eventually the manuscript is published. Somewhere. Number of beans = n+1. Good on ya!
But what is the cost?
There just doesn’t seem to be enough time to go through this process with the level of rigour it promises to deliver. The rise of multidisciplinary research means it is unlikely that a single reviewer can assess the entirety of a manuscript. The feedback we get as editors (or provide as reviewers) can often be incomplete and miss fundamental scientific flaws. There is pressure to publish, and to publish a lot, and to do that (and still have something to publish about) we are tempted to minimise the amount of time we spend in the publication cycle. Marcia McNutt says it in a nutshell:
For science professionals, time is a very precious commodity.
It is then not surprising that the exhaustion of the scientific community would be exploited with the ‘fast food’ equivalent of scientific communication.
The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment 
The same applies to some so-called scientific journals. These “predatory” practices, as they have come to be known, are exhausting.
Science published today the description of a carefully planned sting. John Bohannon created a spoof paper that he sent to a long list of Open Access journals. The paper should have been rejected had anyone cared enough to assess the quality of the science and base their decision on that. Instead, the manuscript made it through and was accepted by a number of journals (98 journals rejected it, 157 accepted it). That the paper got accepted by more than one journal did not come as a surprise, but where it got interesting for me was when he compared the accepting journals against Beall’s predatory journal list. Jeff Beall collates a list of predatory Open Access journals, which at least saves us from having to do even more research when trying to decide where to publish our results or which conferences we might want to attend.
Like Batman, Beall is mistrusted by many of those he aims to protect. “What he’s doing is extremely valuable,” says Paul Ginsparg, a physicist at Cornell University who founded arXiv, the preprint server that has become a key publishing platform for many areas of physics. “But he’s a little bit too trigger-happy.”
What Bohannon’s experiment showed was that 82% of the publishers from Beall’s list that received the spoof paper accepted it for publication. There is no excuse for falling prey to these journals and conferences. “I didn’t know” just won’t cut it for much longer.
As Michael Eisen discusses, even though Bohannon used open access journals for his experiment, this lack of rigour seems to ignore paywalls, impact factors and journal prestige. Which raises the following question:
If the system is so broken, costs so much money in subscriptions and publication fees, and sucks up so much of our productive time – then why on earth should we bother?
Don’t get me wrong – sharing our findings is important. But does it all really have to be peer reviewed from the start? Take Mat Todd’s approach, for example, in the Open Source Malaria project. All the science is out there as soon as it comes out of the pipette tip. When I asked him how this changed the way his research cycle worked, this is what he said:
We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial but it is not the all and all. The process has been reversed, we first share the data and all the details of the project as it’s going, then when we have finished the project we move to publishing.
Right. Isn’t this what we should all be doing? I didn’t see Mat Todd’s world collapse. There is plenty of opportunity to provide peer review on the project as it moves forward. There is no rush to write the paper, because the information is already out there. There is no need to take up journal editors’ and reviewers’ time, because the format of the project lends itself to peer review from anyone who is interested in helping get this right.
PeerJ offers a preprint publication service:
“By using this service, authors establish precedent; they can solicit feedback, and they can work on revisions of their manuscript. Once they are ready, they can submit their PrePrint manuscript into the peer reviewed PeerJ journal (although it is not a requirement to do so)”
F1000 Research does something similar:
“F1000Research publishes all submitted research articles rapidly […] making the new research findings open for scrutiny by all who want to read them. This publication then triggers a structured process of post-publication peer review […]”
So yes, you can put your manuscript out there and let peers review it at their leisure, when they actually care and have the time and focus to do a good job. There is really no hurry to move the manuscript to the peer-reviewed journal (PeerJ or any other) because you have already communicated your results, so you might as well go get an experiment done. And if, as a reviewer, you want credit for your contribution, you can go to Publons, where you can write your review and, if the community thinks you are providing valuable feedback, be properly rewarded in the form of a DOI. Try to get that kind of recognition from most journals.
But let’s say you are too busy actually getting science done – then you always have FigShare.
“…a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.”
Because, let’s be honest, other than the bean counters, who else really cares enough about what we publish to justify the amount of nonsense that goes with it?
According to ImpactStory, 80% of the items indexed by Web of Science in 2010 received 4 or fewer PubMed Central citations. So, 4 citations in almost 3 years puts you in the top 20%.
So my question is: Is this nonsense really worth our time?
McNutt, M. (2013). Improving Scientific Communication. Science, 342(6154), 13. doi:10.1126/science.1246449
Stone, R., & Jasny, B. (2013). Scientific Discourse: Buckling at the Seams. Science, 342(6154), 56–57. doi:10.1126/science.342.6154.56
Bohannon, J. (2013). Who’s Afraid of Peer Review? Science, 342(6154), 60–65. doi:10.1126/science.342.6154.60
(Cross-posted from Mind the Brain)
Earlier this year, nominations opened for the Accelerating Science Awards Program (ASAP). Backed by major sponsors like Google, PLOS and the Wellcome Trust, and a number of other organisations, this award seeks to “build awareness and encourage the use of scientific research — published through Open Access — in transformative ways.” From their website:
The Accelerating Science Award Program (ASAP) recognizes individuals who have applied scientific research – published through Open Access – to innovate in any field and benefit society.
The list of finalists is impressive, as is the work they have been doing taking advantage of Open Access research results. I am sure the judges did not have an easy job. How does one choose the winners?
In the end, this has been the promise of Open Access: that once the information is put out there it will be used beyond its original purpose, in innovative ways. From the use of cell phone apps to help diagnose HIV in low income communities, to using mobile phones as microscopes in education, to helping cure malaria, the finalists are a group of people that the Open Access movement should feel proud of. They represent everything we believed that could be achieved when the barriers to access to scientific information were lowered to just access to the internet.
The finalists have exploited Open Access in a variety of ways, and I was pleased to see a few familiar names in the finalists list. I spoke to three of the finalists, and you can read what Mat Todd, Daniel Mietchen and Mark Costello had to say elsewhere.
One of the finalists is Mat Todd from the University of Sydney, whose work I have stalked for a while now. Mat has been working on an open source approach to drug discovery for malaria. His approach goes against everything we are always told: that unless one patents one’s discovery, there is no chance the findings will be commercialised to market a pharmaceutical product. For the naysayers out there, take a second look here.
A different approach to fighting disease was led by Nikita Pant Pai, Caroline Vadnais, Roni Deli-Houssein and Sushmita Shivkumar, tackling HIV. They developed a smartphone app to circumvent the need to go to a clinic for an HIV test, avoiding the possible discrimination that may come with it. But with home testing for HIV, what was needed was a way to provide people with the information and support that would normally be provided face to face. Smartphones are increasingly becoming a tool that healthcare is exploring and exploiting. The hope is that HIV infection rates could be reduced by diminishing the number of infected people who are unaware of their condition.
What happens when different researchers from different parts of the world use different names for the same species? This is an issue that Mark Costello came across – and decided to do something about it. What he did was become part of the WoRMS project – a database that collects the knowledge of individual species. The site receives about 90,000 visitors per month. The data in the WoRMS database is curated and available under CC-BY. You can read more about Mark Costello here.
We’ve all heard about ecotourism. For it to work, it needs to go hand in hand with conservation. But how do you calculate the value (in terms of revenue) that you can put on a species based on ecotourism? This is what Ralf Buckley, Guy Castley, Clare Morrison, Alexa Mossaz, Fernanda de Vasconcellos Pegas, Clay Alan Simpkins and Rochelle Steven decided to find out. Using freely available data, they were able to calculate to what extent populations of threatened species depend on money that comes from ecotourism. This gives local organisations the information they need to meet their conservation targets within a viable revenue model.
Many research papers are rich in multimedia – but many times these multimedia files are published in the “supplementary” section of the article (yes – that part that we don’t tend to pay much attention to!). These multimedia files, when published under open access, offer the opportunity to exploit them in broader contexts, such as to illustrate Wikipedia pages. That is what Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp set out to do. They created a bot called Open Access Media Importer (OAMI) that harvests the multimedia files from articles in PubMed Central. The bot also uploaded these files to Wikimedia Commons, where they now illustrate more than 135 Wikipedia pages. You can read more about it here.
Saber Iftekhar Khan, Eva Schmid and Oliver Hoeller were nominated for developing a lightweight microscope that uses the camera of a smartphone. The microscope is relatively small, and many of its parts are printed on a 3D printer. For teaching purposes it has two advantages. Firstly, it is mobile, which means you can go hiking with your class and discover the world that lives beyond your eyesight. Secondly, because the image of the specimen is seen through the camera on your phone or iPod, several students can look at the image at the same time, which, as anyone who teaches knows, is a major plus. To do this with standard microscopes would cost a lot of money in specialised cameras and monitors. Being able to do it at relatively low cost can give students a way of engaging with science that may be completely different from anything they were offered before.
Three top awards will be announced at the beginning of Open Access Week on October 21st. Good luck to all!