What do brain machine interfaces and Open Science have in common?
They are two examples of concepts that I never thought I would get to see materialised in my lifetime. I was wrong.
I had heard of the idea of Open Access around the time the Public Library of Science was about to launch (or was in its early infancy). It was about then that I moved to New Zealand, where I was not able to go to conferences as frequently as I had in the USA, and couldn’t afford an internet connection at home. Email communication (especially when limited to work hours) does not promote the same kind of chitter-chatter you might have while waiting in the queue for your coffee – and so my work moved along, somewhat oblivious to what would later become a big focus for me: Open Science.
About six years after moving to New Zealand, things changed. Over a coffee with Nat Torkington, I became aware of examples of people working in science who were embracing a more open attitude. This conversation had a big impact on me. Someone I had never met before described to me a whole different way of doing science. It resonated (strongly) because what he described were the ideals I had at the start of my journey; ideals that had slowly been eroded by the demands of the system around me. By 2009 I had found a strong group of people internationally who were working to make this happen, and who inspired me to try to do something locally. And the rest is history.
What resonated with me about “Open Science” is the notion that knowledge is not ours to keep – that it belongs in the public domain, where it can be a driver for change. I went to a fee-free university, and we fought hard to keep it that way. Knowledge was a right, and sharing knowledge was our duty. I moved along my career in parallel with shrinking funding pots and a trend towards academic commodification. The publish-or-perish mentality, the fear of being back-stabbed if one shares too early or too often, the idea of the research article placed in the “well-branded” journal, and the “paper” as a measure of one’s worth as a scientist all conspire to keep us from exploring open collaborative spaces. The world I walked into around 2009 was seeking to do away with all this nonsense. I have tried to listen and learn as much as I can; sometimes I have even dared to put in my 2 cents or ask questions.
How to make it happen?
The biggest hurdle I have found is that I don’t do my work in isolation. As much as I might want to embrace Open Science, when the work is collaborative I am not the one who makes the final call. In a country as small as New Zealand it is difficult to find the critical mass at the intersection of my research interests (and knowledge) and the desire to work in the open. If you want to collaborate with the best, you may not be able to be picky about a shared ethos. This is particularly true for those struggling to build a career and get a permanent position, for whom the advice of those at the hiring table will always sound louder.
The reward system seems at times to be stuck in a place where incentives are (at all levels) stacked against Open Science; “rewards” are distributed at the “researcher” level. Open Research is about a solution to a problem, not about someone’s career advancement (although that should come as a side-effect). It is not surprising, then, how little value is placed on whether one’s science can be replicated or re-used. Once the paper is out and the bean drops in the jar, our work is done. I doubt that staffing committees or those evaluating us will even care to pull those research outputs and read them to assess their value – if they did, we would not need things like Impact Factors, the h-index and the rest. And here is the irony – we struggle to brand our papers to satisfy a rewards system that will never look beyond the title. At the same time, those who care about the content and want to reuse it are limited by whichever restrictions we chose to put in place at the time of publishing.
So what do we do?
I think we need to be sensitive to the struggle of those who might want to embrace Open Science but are trying to negotiate the assessment requirements of their careers. Perhaps getting more people who embrace these principles onto university staffing and research committees might at least provide the opportunity to ask the right questions about “value”, and at the right time. If we can get more open-minded stances at the hiring level, this will go far in changing people’s attitudes at the bench.
I, for one, find myself in a relatively good position. My continuation was approved a few weeks ago, so I won’t need to face the staffing committee except for promotion. A change in title might be nice – but it is not a deal-breaker like tenure. I have tried to open my workflow in the past, learned enough from the experience, and will keep trying until I get it right. I am slowly seeing a shift in my colleagues’ attitudes – less rolling of eyes, a bit more curiosity. For now, let’s call that progress.
Since 2009 I have come to meet in person many of those who inspired me through online discussions, and they have always provided useful advice and, more importantly, support. Turning my workflow “Open” has been as hard as I anticipated. I have failed more than I have succeeded, but I have always learned something from the experience. And one question that keeps me going is:
What did the public give you the money for?
The week ends with a series of articles in Science that make you roll your eyes. These articles explore different aspects of the landscape of science communication, exposing how broken the system can be at times. The increased pressure to publish scientific results to satisfy some assessors’ need to count beans has not come without a heavy demand on the scientific community, which inevitably becomes involved through free editorial and peer review services. For every paper that is published, a number of other scientists take time off their daily work to contribute to the decision of whether the article should be published, in principle by assessing its scientific rigour and quality. In many cases, unless the article is accepted by the first journal it is submitted to, this cycle is repeated. Over. And over. Again. The manuscript is submitted to a new journal, handled by a new editor and most probably reviewed by a new set of peers, iterated as many times as needed until a journal takes the paper in. And then comes the back and forth of the revision process – modifications to the original article suggested or required through peer review – until eventually the manuscript is published. Somewhere. Number of beans = n+1. Good on ya!
But what is the cost?
There just doesn’t seem to be enough time to go through this process with the level of rigour it promises to deliver. The rise of multidisciplinary research means it is unlikely that a single reviewer can assess the entirety of a manuscript. The feedback we get as editors (or provide as reviewers) can often be incomplete and miss fundamental scientific flaws. There is pressure to publish, and to publish a lot, and to do that (and still have something to publish about) we are tempted to minimise the amount of time we spend in the publication cycle. Marcia McNutt says it in a nutshell:
For science professionals, time is a very precious commodity.
It is then not surprising that the exhaustion of the scientific community would be exploited with the ‘fast food’ equivalent of scientific communication.
The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment 
The same applies to some so-called scientific journals. These “predatory” practices, as they have come to be known, are exhausting.
Science today published the description of a carefully planned sting. John Bohannon created a spoof paper that he sent to a long list of Open Access journals. The paper should have been rejected had anyone cared enough to assess the quality of the science and base their decision on that. Instead, the manuscript made it through and was accepted by a number of journals (98 journals rejected it, 157 accepted it). That the paper was accepted by more than one journal did not come as a surprise, but where it got interesting for me was when he compared the accepting journals against Beall’s predatory journal list. Jeffrey Beall helps collate a list of predatory Open Access journals, which at least saves us from having to do even more research when trying to decide where to publish our results or what conferences we might want to attend.
Like Batman, Beall is mistrusted by many of those he aims to protect. “What he’s doing is extremely valuable,” says Paul Ginsparg, a physicist at Cornell University who founded arXiv, the preprint server that has become a key publishing platform for many areas of physics. “But he’s a little bit too trigger-happy.”
What Bohannon’s experiment showed was that 82% of the publishers from Beall’s list that received the spoof paper accepted it for publication. There is no excuse for falling prey to these journals and conferences. “I didn’t know” just won’t cut it for much longer.
As Michael Eisen discusses, even though Bohannon used open access journals for his experiment, this lack of rigour seems to ignore paywalls, impact factors and journal prestige. Which raises the following question:
If the system is so broken, costs so much money in subscriptions and publication fees, and sucks up so much of our productive time – then why on earth should we bother?
Don’t get me wrong – sharing our findings is important. But does it all really have to be peer reviewed from the start? Take Mat Todd’s approach, for example, in the Open Source Malaria project. All the science is out there as soon as it comes out of the pipette tip. When I asked him how this changed the way his research cycle worked, this is what he said:
We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial but it is not the be-all and end-all. The process has been reversed: we first share the data and all the details of the project as it’s going, then when we have finished the project we move to publishing.
Right. Isn’t this what we should all be doing? I didn’t see Mat Todd’s world collapse. There is plenty of opportunity to provide peer review on the project as it moves forward. There is no incentive to write the paper immediately, because the information is already out there. There is no need to take up the time of journal editors and reviewers, because the format of the project lends itself to peer review from anyone who is interested in helping get this right.
PeerJ offers a preprint publication service:
“By using this service, authors establish precedent; they can solicit feedback, and they can work on revisions of their manuscript. Once they are ready, they can submit their PrePrint manuscript into the peer reviewed PeerJ journal (although it is not a requirement to do so)”
F1000 Research does something similar:
“F1000Research publishes all submitted research articles rapidly […] making the new research findings open for scrutiny by all who want to read them. This publication then triggers a structured process of post-publication peer review […]”
So yes, you can put your manuscript out there, let peers review it at their leisure, when they actually care and when they have time and focus to actually do a good job. There is really no hurry to move the manuscript to the peer-reviewed journal (PeerJ or any other) because you have already communicated your results, so you might as well go get an experiment done. And if, as a reviewer, you want any credit for your contribution, then you can go to Publons where you can write your review, and if the community thinks you are providing valuable feedback you will be properly rewarded in the form of a DOI. Try to get that kind of recognition from most journals.
And if you are too busy actually getting science done, there is always FigShare.
“…a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.”
Because, let’s be honest: other than the bean counters, who really cares enough about what we publish to justify the amount of nonsense that goes with it?
According to ImpactStory, 20% of the items indexed by Web of Science in 2010 received 4 or fewer PubMed Central citations. So, 4 citations in almost 3 years puts you in the top 20%.
So my question is: Is this nonsense really worth our time?
McNutt, M. (2013). Improving Scientific Communication. Science, 342(6154), 13. doi:10.1126/science.1246449
Stone, R., & Jasny, B. (2013). Scientific Discourse: Buckling at the Seams. Science, 342(6154), 56–57. doi:10.1126/science.342.6154.56
Bohannon, J. (2013). Who’s Afraid of Peer Review? Science, 342(6154), 60–65. doi:10.1126/science.342.6154.60
(Cross-posted from Mind the Brain)
Earlier this year, nominations opened for the Accelerating Science Awards Program (ASAP). Backed by major sponsors like Google, PLOS and the Wellcome Trust, and a number of other organisations, this award seeks to “build awareness and encourage the use of scientific research — published through Open Access — in transformative ways.” From their website:
The Accelerating Science Award Program (ASAP) recognizes individuals who have applied scientific research – published through Open Access – to innovate in any field and benefit society.
The list of finalists is impressive, as is the work they have been doing taking advantage of Open Access research results. I am sure the judges did not have an easy job. How does one choose the winners?
In the end, this has been the promise of Open Access: that once the information is put out there, it will be used beyond its original purpose, in innovative ways. From cell phone apps that help diagnose HIV in low-income communities, to mobile phones used as microscopes in education, to helping cure malaria, the finalists are a group of people the Open Access movement should feel proud of. They represent everything we believed could be achieved when the barriers to accessing scientific information were lowered to just access to the internet.
The finalists have exploited Open Access in a variety of ways, and I was pleased to see a few familiar names in the finalists list. I spoke to three of the finalists, and you can read what Mat Todd, Daniel Mietchen and Mark Costello had to say elsewhere.
One of the finalists is Mat Todd from the University of Sydney, whose work I have stalked for a while now. Mat has been working on an open source approach to drug discovery for malaria. His approach goes against everything we are always told: that unless one patents one’s discovery, there is no chance the findings will be commercialised to market a pharmaceutical product. For the naysayers out there, take a second look here.
A different approach to fighting disease was led by Nikita Pant Pai, Caroline Vadnais, Roni Deli-Houssein and Sushmita Shivkumar, tackling HIV. They developed a smartphone app to help circumvent the need to go to a clinic for an HIV test, avoiding the possible discrimination that may come with it. But with home HIV testing, what was needed was a way to provide people with the information and support that would normally be given face to face. Smartphones are increasingly becoming a tool that healthcare is exploring and exploiting. The hope is that HIV infection rates could be reduced by diminishing the number of infected people who are unaware of their condition.
What happens when researchers in different parts of the world use different names for the same species? This is an issue that Mark Costello came across – and decided to do something about. He became part of the WoRMS project – a database that collects the knowledge on individual species. The site receives about 90,000 visitors per month. The data in the WoRMS database is curated and available under CC-BY. You can read more about Mark Costello here.
We’ve all heard about ecotourism. For it to work, it needs to go hand in hand with conservation. But how do you calculate the value (in terms of revenue) that can be put on a species through ecotourism? This is what Ralf Buckley, Guy Castley, Clare Morrison, Alexa Mossaz, Fernanda de Vasconcellos Pegas, Clay Alan Simpkins and Rochelle Steven decided to work out. Using freely available data, they were able to calculate the extent to which populations of threatened species depend on money that comes from ecotourism. This gives local organisations the information they need to meet their conservation targets within a viable revenue model.
Many research papers are rich in multimedia – but these multimedia files are often published in the “supplementary” section of the article (yes – the part we don’t tend to pay much attention to!). When published under open access, these files offer the opportunity to be exploited in broader contexts, such as illustrating Wikipedia pages. That is what Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp set out to do. They created a bot called the Open Access Media Importer (OAMI) that harvests multimedia files from articles in PubMed Central and uploads them to Wikimedia Commons, where they now illustrate more than 135 Wikipedia pages. You can read more about it here.
Saber Iftekhar Khan, Eva Schmid and Oliver Hoeller were nominated for developing a lightweight microscope that uses the camera of a smartphone. The microscope is relatively small, and many of its parts are printed on a 3D printer. For teaching purposes it has two advantages. Firstly, it is mobile, which means you can go hiking with your class and discover the world that lives beyond your eyesight. Secondly, because the image of the specimen is seen through the camera of a phone or iPod, several students can look at it at the same time – which, as anyone who teaches knows, is a major plus. Doing this with standard microscopes would cost a lot of money in specialised cameras and monitors. Being able to do it at relatively low cost can give students a way of engaging with science that may be completely different from anything they were offered before.
Three top awards will be announced at the beginning of Open Access Week on October 21st. Good luck to all!
2012 was a really interesting year for Open Research.
The year started with a boycott of Elsevier (The Cost of Knowledge), soon followed in May by a petition at We The People in the US, asking the US government to “Require free access over the Internet to scientific journal articles arising from taxpayer-funded research.” By June, the Royal Society had published a report on “Science as an open enterprise” [pdf], saying:
The opportunities of intelligently open research data are exemplified in a number of areas of science. With these experiences as a guide, this report argues that it is timely to accelerate and coordinate change, but in ways that are adapted to the diversity of the scientific enterprise and the interests of: scientists, their institutions, those that fund, publish and use their work and the public.
The Finch report had a large share of media coverage [pdf]:
Our key conclusion, therefore, is that a clear policy direction should be set to support the publication of research results in open access or hybrid journals funded by APCs. A clear policy direction of that kind from Government, the Funding Councils and the Research Councils would have a major effect in stimulating, guiding and accelerating the shift to open access.
By July the UK government announced the support for the Open Access recommendations from the Finch Report to ensure:
Walk-in rights for the general public, so they can have free access to global research publications owned by members of the UK Publishers’ Association, via public libraries. [and] Extending the licensing of access enjoyed by universities to high technology businesses for a modest charge.
The Research Councils UK joined in by publishing a policy on OA (recently updated) that required [pdf]:
Where the RCUK OA block grant is used to pay Article Processing Charges for a paper, the paper must be made Open Access immediately at the time of online publication, using the Creative Commons Attribution (CC BY) licence.
By the time Open Access Week came around, there was plenty to discuss. The discussion of Open Access placed a stronger emphasis on the re-use licences under which work is published. It also drew on earlier analyses showing that publishing in Open Access has benefits for whole economies:
adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in the Netherlands and EUR 480 million in the UK.
And in November, the New Zealand Open Source Awards recognised Open Science for the first time, too.
2013 promises not to fall behind
This year offers good opportunities to celebrate local and international advocates of Open Science.
The Obama administration not only responded to last year’s petition by issuing a memorandum geared towards making federally funded research adopt open access policies, but is now also seeking “Outstanding Open Science Champions of Change”. Nominations for this close on May 14, 2013. Meanwhile, the Public Library of Science, Google and the Wellcome Trust, together with a number of allies, are sponsoring the “Accelerating Science Award Program”, which seeks to recognise and reward individuals, groups or projects that have used Open Access scientific works in innovative ways. The deadline for this award is June 15.
Last year Peter Griffin wrote:
The policy shift in the UK will open up access to the work of New Zealand scientists by default as New Zealanders are regularly co-authors on papers paid for by UK Research Councils funds. But hopefully it will also lead to some introspection about our own open access policies here.
There was some reflection at the NZAU Open Research Conference, which led to the Tasman Declaration (which I encourage you to sign), and those of us who were involved are hoping good things will come out of it. While that work continues, I will be revisiting last year’s nominations in the Open Science category of the NZ Open Source Awards to make my nominations for the two awards mentioned above.
I certainly look forward to this year – I will continue to work closely with Creative Commons Aotearoa New Zealand and with NZ AU Open Research to make things happen, and continue to put in my 2 cents as an Academic Editor for PLOS ONE and PeerJ.
There is no question that the voice of Open Access is now loud and clear – and over the last year it has become a voice that is not only being heard, but is also generating the kinds of responses that will lead to real change.
It has been a busy Open Access Week for me. My last (well, almost last!) duty is today at 4:00 pm at Old Government House at the University of Auckland.
Stratus has organised a panel and invited me to participate, and I have just uploaded my upcoming presentation to Slideshare. If you have a chance, we would love to see you there!