What do brain-machine interfaces and Open Science have in common?
They are two examples of concepts that I never thought I would get to see materialised in my lifetime. I was wrong.
I had heard of the idea of Open Access as the Public Library of Science was about to launch (or was in its early infancy). It was about that time that I moved to New Zealand and was no longer able to go to conferences as frequently as I had in the USA, and couldn’t afford an internet connection at home. Email communication (especially when limited to work hours) does not promote the same kind of chitter-chatter you might have as you wait in the queue for your coffee – and so my work moved along, somewhat oblivious to what was going to become a big focus for me later on: Open Science.
About 6 years after moving to New Zealand things changed. Over a coffee with Nat Torkington, I became aware of some examples of people working in science embracing a more open attitude. This conversation had a big impact on me. Someone I had never met before described to me a whole different way of doing science. This resonated (strongly) because what he described were the ideals I had at the start of my journey; ideals that were slowly eroded by the demands of the system around me. By 2009 I had found a strong group of people internationally who were working to make this happen, and who inspired me to try to do something locally. And the rest is history.
What resonated with me about “Open Science” is the notion that knowledge is not ours to keep – that it belongs in the public domain where it can be a driver for change. I went to a fee-free university and we fought hard to keep it that way. Knowledge was a right and sharing knowledge was our duty. I moved along my career in parallel with shrinking funding pots and a trend towards academic commodification. The publish-or-perish mentality, the fear of being back-stabbed if one shares too early or too often, the idea of the research article placed in the “well-branded” journal, and the “paper” as a measure of one’s worth as a scientist all conspire to distract us from exploring open collaborative spaces. The world I walked into around 2009 was seeking to do away with all this nonsense. I have tried to listen and learn as much as I can; sometimes I even dared to put in my 2 cents or ask questions.
How to make it happen?
The biggest hurdle I have found is that I don’t do my work in isolation. As much as I might want to embrace Open Science, when the work is collaborative I am not the one who makes the final call. In a country as small as New Zealand it is difficult to find the critical mass at the intersection of my research interests (and knowledge) and the desire to work in the open space. If you want to collaborate with the best, you may not be able to be picky about a shared ethos. This is particularly true for those struggling to build a career and get a permanent position; the advice of those at the hiring table will always sound louder.
The reward system seems at times to be stuck in a place where incentives are (at all levels) stacked against Open Science; “rewards” are distributed at the “researcher” level. Open Research is about a solution to a problem, not about someone’s career advancement (although that should come as a side-effect). It is not surprising then how little value is placed on whether one’s science can be replicated or re-used. Once the paper is out and the bean drops in the jar, our work is done. I doubt that staffing committees or those evaluating us will even care to pull up those research outputs and read them to assess their value – if they did, we would not need things like Impact Factors, the h-index and the rest. And here is the irony – we struggle to brand our papers to satisfy a rewards system that will never look beyond the title. At the same time, those who care about the content and want to reuse it are limited by whichever restrictions we chose to put in place at the time of publishing.
So what do we do?
I think we need to be sensitive to the struggle of those who might want to embrace open science but are trying to negotiate the assessment requirements of their careers. Perhaps getting more people who embrace these principles onto university staffing and research committees might at least provide the opportunity to ask the right questions about “value”, and at the right time. If we can get more open-minded stances at the hiring level, this will go far in changing people’s attitudes at the bench.
I, for one, find myself in a relatively good position. My continuation was approved a few weeks ago, so I won’t need to face the staffing committee except for promotion. A change in title might be nice – but unlike tenure, it is not a deal-breaker. I have tried to open my workflow in the past, learned enough from the experience, and will keep trying until I get it right. I am slowly seeing the shift in my colleagues’ attitudes – less rolling of eyes, a bit more curiosity. For now, let’s call that progress.
Since 2009 I have come to meet in person many of those who inspired me through the online discussions, and they have always provided useful advice but, more importantly, support. Turning my workflow to “Open” has been as hard as I anticipated. I have failed more than I have succeeded, but I always learned something from the experience. And the one question that keeps me going is:
What did the public give you the money for?
or the day after the sting
I got the embargoed copy of the Science Magazine article on peer review in Open Access earlier this week, which gave me a chance to read it in tranquility. I have to say I really liked it. It was a cool sting, and it exposed many of the flaws in the peer review system. And it did that quite well. There was a high rate of acceptance of a piece of work that did not deserve to see the light. I also immediately reacted to the fact that the sting had only used Open Access journals – cognizant of how that could be misconstrued as a failure of Open Access, detracting from the real issue, which is peer review.
I had enough time to write a blog post, and was lucky enough to be able to link to Michael Eisen’s take on the issue before I posted, so I did not need to get into the nitty-gritty of why the sting should be taken for nothing more than what it was – an anecdotal set of events. Because what it was not is a scientific study.
One of the things I found valuable about the sting (or at least my take-home message) was that there is enough information out there to help researchers navigate the Open Access publishing landscape they are so scared of, including information on how to choose good journals. The excuse that there are too many predatory journals to justify not publishing in Open Access is now weaker. It also provided all of us with an opportunity to reflect on the failures of peer review and the value of the traditional publication system.
Or so I thought.
Then the embargo was lifted, and I have been picking up brain bits spilled over Twitter, blogs and other social media as the tsunami of exploding heads started. And as the morning alarm clocks went off with the sun rising in different time zones, new waves of brain bits came along.
By now, I could look at the entire ‘special issue’ and what else was in it. Here is where I see the problem.
There were lots of articles talking about science communication. I could not find one (please someone correct me if I am wrong!) that took on the sting to refocus the discussion in the right direction (that is, on peer review), nor one that reflected on how Science and the AAAS behind it measure up to the issues they so readily seemed to criticise.
I never liked the AAAS – or rather, I began disliking it after I got my first invitation to join in the late 1980s. It seemed that all I needed to do to become a member was send them cash. There was no reason to do that – since, without anyone being required to endorse me as a “proper scientist”, I could not see what that membership said about me other than my ability to write a check. I was already doing that with the New York Times, and if I couldn’t put that down in my CV, then neither could I put down a membership with the AAAS. Nothing gained, nothing lost, move on.
What I didn’t know back then was that that first letter would be the first in a long (long!) series of identical invitations that would periodically arrive in my mailbox, where they would be quickly disposed of in the rubbish bin in the corner of the room. I am sure one could find plenty of those in the world’s landfills.
“The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment.” (Stone, R., & Jasny, B.)
Wut? Let’s apply the same logic to the AAAS membership – Would we consider that predatory behaviour too?
Let’s move on to peer review.
Moving back to the sting. Yes, they sent out a lot of articles. The article in Science seems to me to be delivered from a very high horse, and one with no legs to stand on. Their N is large (perhaps not large enough, but that is beside the point). But to each journal they sent just one (n=1; “n equals one”) hoax paper (singular, not plural). I may ask – had they sent, say, 10 hoax papers to each journal, would each journal have accepted all 10, only 5, or perhaps only 1? Because that makes a difference at the individual journal level. If we are going to accept that such an n=1 is enough to draw any informed conclusion about whether a journal is predatory or not, then, well, arsenic life. ’Nuff said.
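To make the n=1 point concrete, here is a minimal sketch (in Python, with invented acceptance probabilities purely for illustration, not data from the sting) of how little a single submission tells you about any individual journal:

```python
import random

random.seed(1)

# Invented "true" probabilities that a journal accepts a junk paper.
# These numbers are assumptions for illustration only.
journals = {"careful journal": 0.05, "sloppy journal": 0.60}

def run_sting(p_accept, n_submissions):
    """Count how many of n_submissions junk papers get accepted."""
    return sum(random.random() < p_accept for _ in range(n_submissions))

for name, p in journals.items():
    one_shot = run_sting(p, 1)    # what the sting did: a single paper per journal
    ten_shots = run_sting(p, 10)  # what you would need to say anything per journal
    print(f"{name}: n=1 -> {'accepted' if one_shot else 'rejected'}, "
          f"n=10 -> {ten_shots}/10 accepted (true rate {p:.0%})")
```

With one paper per journal, the “accepted” or “rejected” verdict for any single journal is mostly noise; it is only the aggregate across many journals that says anything, and even then only about the sample that was taken.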
Let’s take a second look at the arsenic paper. n=1. The arsenic paper was so bad that poor Michael Eisen’s head exploded because readers of his blog actually believed he had sent it in as a hoax – I myself even got caught doing a double-take when I started reading his blog post (but I kept on reading!). That’ll teach him for being such a convincing writer.
So, if n=1 is enough, does that mean Science magazine is ready to add its name to the list of journals that don’t meet the mark? I could not find, in their issue, any reflection on that (please someone correct me if I am wrong!).
… and to open access
But the bigger issue in my view was what appears to be Science’s position on Open Access. Now, Science is not Nature. Science is the flagship journal of the AAAS. The AAAS says it is an organisation “serving science, serving society”. Here are some of their mission bullet points:
Enhance communication among scientists, engineers, and the public;
Promote and defend the integrity of science and its use;
Foster education in science and technology for everyone;
Increase public engagement with science and technology.
How is any of this better served by having their flagship magazine behind a paywall?
Can they support, with scientific data, the claim that having their flagship journal behind a paywall helps achieve any of those goals? Now those are data I would love to see. Because the biased criticism of Open Access in their “special issue” (please someone correct me if I am wrong!) seems to suggest so. Now, if they can’t provide a scientific argument as to why we should give them so much money to be members or to access their publication, then how are they any different from the “cottage industry” they seem so ready to criticize? Is preying on libraries or readers less bad than preying on authors? If I purchase a “pay per view” article and don’t like it, or it does not contain the data promised by the abstract, do I get my money back? Or do these paywalled journals just take the money and run? Because, as much as I dislike the predatory open access journals, at least they are putting the papers out there so that we can all crowdsource on how much crap they are.
Do I find an issue with them bringing the troubled state of the publishing industry to the attention of their readership? No.
Do I find an issue with some of the articles in the special issue focusing on some of the naughty players in the Open Access landscape? No.
What I do have a problem with, is the apparent lack of reflection on Science’s and AAAS’ own practices (please someone correct me if I am wrong!).
There was an opportunity to step up, and that opportunity was missed. Science might have a shiny coat of wool decorated with double-digit impact factors, but I am not buying it.
I am sticking with the New York Times.
[Updated Oct 5 1:19 to add missing link]
The week ends with a series of articles in Science that make you roll your eyes. These articles explore different aspects of the landscape of science communication, exposing how broken the system can be at times. The increased pressure to publish scientific results, to satisfy some assessors’ need to count beans, has not come without a heavy demand on the scientific community, which inevitably becomes involved through free editorial and peer review services. For every paper that is published, a number of other scientists take time off their daily work to contribute to the decision of whether the article should be published or not, in principle by assessing its scientific rigour and quality. In many cases, unless the article is accepted by the first journal it is submitted to, this cycle is repeated. Over. And over. Again. The manuscript is submitted to a new journal, handled by a new editor and most probably reviewed by a new set of peers, iterated as many times as needed until a journal takes the paper in. And then comes the back and forth of the revision process – modifications to the original article suggested or required through peer review – until eventually the manuscript is published. Somewhere. Number of beans = n+1. Good on ya!
But what is the cost?
There just doesn’t seem to be enough time to go through this process with the level of rigour it promises to deliver. The rise of multidisciplinary research means it is unlikely that a single reviewer can assess the entirety of a manuscript. The feedback we get as editors (or provide as reviewers) can often be incomplete and miss fundamental scientific flaws. There is pressure to publish, and to publish a lot, and to do that (and still have something to publish about) we are tempted to minimise the amount of time we spend in the publication cycle. Marcia McNutt says it in a nutshell:
For science professionals, time is a very precious commodity.
It is then not surprising that the exhaustion of the scientific community would be exploited with the ‘fast food’ equivalent of scientific communication.
The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment 
The same applies to some so-called scientific journals. These “predatory” practices, as they have come to be known, are exhausting.
Science published today the description of a carefully planned sting. John Bohannon created a spoof paper that he sent to a long list of Open Access journals. The paper should have been rejected had anyone cared enough to assess the quality of the science and base their decision on that. Instead, the manuscript made it through and got accepted by a number of journals (98 journals rejected it, 157 accepted it). That the paper got accepted by more than one journal did not come as a surprise, but where it got interesting for me was when he compared the accepting journals against Beall’s predatory journal list. Jeff Beall helps collate a list of predatory Open Access journals, which at least saves us from having to do even more research when trying to decide where to publish our results or what conferences we might want to attend.
Like Batman, Beall is mistrusted by many of those he aims to protect. “What he’s doing is extremely valuable,” says Paul Ginsparg, a physicist at Cornell University who founded arXiv, the preprint server that has become a key publishing platform for many areas of physics. “But he’s a little bit too trigger-happy.”
What Bohannon’s experiment showed was that 82% of the publishers from Beall’s list that received the spoof paper accepted it for publication. There is no excuse for falling prey to these journals and conferences. “I didn’t know” just won’t cut it for much longer.
As Michael Eisen discusses, even though Bohannon used open access journals for his experiment, this lack of rigour seems to ignore paywalls, impact factors and journal prestige. Which raises the following question:
If the system is so broken, costs so much money in subscriptions and publication fees, and sucks so much out of our productive time – then why on earth should we bother?
Don’t get me wrong – sharing our findings is important. But does it all really have to be peer reviewed from the start? Take Mat Todd’s approach, for example, in the Open Source Malaria project. All the science is out there as soon as it comes out of the pipette tip. When I asked him how this changed the way his research cycle worked, this is what he said:
We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial but it is not the all and all. The process has been reversed, we first share the data and all the details of the project as it’s going, then when we have finished the project we move to publishing.
Right. Isn’t this what we should all be doing? I didn’t see Mat Todd’s world collapse. There is plenty of opportunity to provide peer review on the project as it moves forward. There is no incentive to write the paper immediately, because the information is already out there. There is no need to take up journal editors’ and reviewers’ time, because the format of the project lends itself to peer review from anyone who is interested in helping get this right.
PeerJ offers a preprint publication service:
“By using this service, authors establish precedent; they can solicit feedback, and they can work on revisions of their manuscript. Once they are ready, they can submit their PrePrint manuscript into the peer reviewed PeerJ journal (although it is not a requirement to do so)”
F1000 Research does something similar:
“F1000Research publishes all submitted research articles rapidly […] making the new research findings open for scrutiny by all who want to read them. This publication then triggers a structured process of post-publication peer review […]”
So yes, you can put your manuscript out there and let peers review it at their leisure, when they actually care and when they have the time and focus to do a good job. There is really no hurry to move the manuscript to the peer-reviewed journal (PeerJ or any other) because you have already communicated your results, so you might as well go get an experiment done. And if, as a reviewer, you want credit for your contribution, you can go to Publons, where you can write your review, and if the community thinks you are providing valuable feedback you will be properly rewarded in the form of a DOI. Try getting that kind of recognition from most journals.
But let’s say you are too busy actually getting science done – then you always have FigShare.
“…a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.”
Because, let’s be honest, other than the bean counters, who else really cares enough about what we publish to justify the amount of nonsense that goes with it?
According to ImpactStory, only about 20% of the items indexed by Web of Science in 2010 have received more than 4 PubMed Central citations. So, 4 citations in almost 3 years puts you near the top 20%.
So my question is: Is this nonsense really worth our time?
 McNutt, M. (2013). Improving Scientific Communication. Science, 342(6154), 13–13. doi:10.1126/science.1246449
 Stone, R., & Jasny, B. (2013). Scientific Discourse: Buckling at the Seams. Science, 342(6154), 56–57. doi:10.1126/science.342.6154.56
 Bohannon, J. (2013). Who’s Afraid of Peer Review? Science, 342(6154), 60–65. doi:10.1126/science.342.6154.60
2012 was a really interesting year for Open Research.
The year started with a boycott of Elsevier (The Cost of Knowledge), soon followed in May by a petition at We The People in the US, asking the US government to “Require free access over the Internet to scientific journal articles arising from taxpayer-funded research.” By June we had the Royal Society publishing a report on “Science as an open enterprise” [pdf], saying:
The opportunities of intelligently open research data are exemplified in a number of areas of science. With these experiences as a guide, this report argues that it is timely to accelerate and coordinate change, but in ways that are adapted to the diversity of the scientific enterprise and the interests of: scientists, their institutions, those that fund, publish and use their work and the public.
The Finch report had a large share of media coverage [pdf]:
Our key conclusion, therefore, is that a clear policy direction should be set to support the publication of research results in open access or hybrid journals funded by APCs. A clear policy direction of that kind from Government, the Funding Councils and the Research Councils would have a major effect in stimulating, guiding and accelerating the shift to open access.
By July the UK government had announced its support for the Open Access recommendations of the Finch Report, to ensure:
Walk-in rights for the general public, so they can have free access to global research publications owned by members of the UK Publishers’ Association, via public libraries. [and] Extending the licensing of access enjoyed by universities to high technology businesses for a modest charge.
The Research Councils UK joined in by publishing a policy on OA (recently updated) that required [pdf]:
Where the RCUK OA block grant is used to pay Article Processing Charges for a paper, the paper must be made Open Access immediately at the time of online publication, using the Creative Commons Attribution (CC BY) licence.
By the time Open Access Week came around, there was plenty to discuss. The discussion of Open Access placed a stronger emphasis on the re-use licences under which work is published. It also drew on earlier analyses showing that publishing in Open Access brings benefits to whole economies:
adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in The Netherlands and EUR 480 million in the UK.
And in November, the New Zealand Open Source Awards recognised Open Science for the first time too.
2013 promises not to fall behind
This year offers good opportunities to celebrate local and international advocates of Open Science.
The Obama administration not only responded to last year’s petition by issuing a memorandum geared towards making Federally funded research adopt open access policies, but is now also seeking “Outstanding Open Science Champions of Change”. Nominations for this close on May 14, 2013. Simultaneously, the Public Library of Science, Google and the Wellcome Trust, together with a number of allies, are sponsoring the “Accelerating Science Award Program”, which seeks to recognise and reward individuals, groups or projects that have used Open Access scientific works in innovative ways. The deadline for this award is June 15.
Last year Peter Griffin wrote:
The policy shift in the UK will open up access to the work of New Zealand scientists by default as New Zealanders are regularly co-authors on papers paid for by UK Research Councils funds. But hopefully it will also lead to some introspection about our own open access policies here.
There was some reflection at the NZAU Open Research Conference, which led to the Tasman Declaration (which I encourage you to sign), and those of us who were involved in it are hoping good things will come out of it. While that work continues, I will be revisiting the nominations in last year’s Open Science category of the NZ Open Source Awards to make my nominations for the two awards mentioned above.
I certainly look forward to this year – I will continue to work closely with Creative Commons Aotearoa New Zealand and with NZ AU Open Research to make things happen, and continue to put in my 2 cents as an Academic Editor for PLOS ONE and PeerJ.
There is no question that the voice of Open Access is now loud and clear – and over the last year it has become a voice that is not only being heard, but is also generating the kinds of responses that will lead to real change.
When a President announces a scientific project as publicly as President Obama did, the world listens. The US is planning to put significant resources behind a huge effort to map the brain. A lot has been said about this BRAIN project, and I have been quietly reading, trying to make sense of the disparate reactions that this ‘launch’ had – and trying to escape the hype.
I can understand the appeal – the brain is a fascinating invention of nature. I fell in love with its mysteries as an undergraduate in Argentina and I continue to be fascinated by every new finding. What fascinates me about the discipline is that, unlike trying to understand the kidney for example, neuroscience consists of the brain trying to understand itself. That we can even ask the right questions, let alone design and perform the experiments to answer them, is what gets me out of bed in the morning.
Trying to understand the brain is definitely not a 21st Century thing. For centuries we have been asking what makes animals behave the way they do. And yet we still don’t really know what it is about our brains that makes us the only species able to ask the right questions, and to design and perform the experiments to answer them.
Many of us neuroscientists might agree that how we think about the brain came about from two major sets of findings. Towards the end of the 19th Century it finally became accepted that the brain, like other parts of the body, was made up of cells. It was Santiago Ramón y Cajal’s tireless work (with the invaluable assistance of his brother Pedro) that was fundamental in this shift. This meant that we could apply the knowledge of cell biology to the brain. The second game changer was the demonstration that neurons can actively produce electrical signals. In doing so, Hodgkin and Huxley beautifully put to rest the old argument between Volta and Galvani. This meant we had a grip on how information is coded in the brain.
From this pioneering work, neuroscience evolved directing most of its attention to neurons and their electrical activity. After all, that is where the key to understanding the brain was supposed to be found. Most of what happened over the twentieth century was based on this premise. Neurons are units that integrate inputs and put together an adequate output, passing the information on to another neuron or set of neurons down the line until you get to the end. In a way, this view of the brain is not too different from the wiring diagram of an electronic circuit.
Trying to understand the wiring of the brain, however, is not easy. There are thousands and thousands of neurons, each with a multitude of inputs and outputs. You can quickly run out of ink trying to draw the wiring diagram. It is because of this complexity that neuroscientists (just like scientists in many other disciplines) turn to simpler models. We have come to know some secrets about learning from studying the sea slug Aplysia, about how the brain gets put together from flies and frogs, and even about how neurons are born in adult brains from singing canaries. What all these models have in common is that we can tie a very specific aspect of brain function to a circuit we can define rather well. And we have learned, and keep learning, heaps from these models. The main thing we learn (and the reason why these models continue to be so useful and fundamental for progress) is that the ‘basics’ of brains are quite universal – and once we know those basics well, it is a lot easier to work out the specifics in more complex brains.
Trying to understand the architecture of circuits has proven to be of major value (and this is what the connectome is about). But building the connections is not just about drawing the wires – you need to build in some variability: some connections excite while others inhibit, some neurons respond in predictable linear ways, others don’t. And when you are done with that, you will still need to start thinking about the stuff we have not spent a lot of time thinking about: those other cells (glia) and the stuff that exists in between cells (the extracellular matrix). More and more, we are being reminded that glia and the extracellular matrix do more than just sit there supporting the neurons.
So it is not surprising to find some skepticism around these large brain projects. Over at Scientific American, John Horgan raises some valid criticisms about how realistic the ambitions of these projects are given the current state of neuroscience (read him here and here). Other lines of skepticism center on the involvement of DARPA in the BRAIN project (read Peter Freed’s views on that here, or Luke Dittrich’s here). Others criticize the lack of a clear roadmap (read Erin McKiernan’s views here). Others have expressed concern that placing too much expectation on advancing our knowledge of the human brain will overlook the importance of exploring simpler circuits, something that had been stated clearly in the original proposal.
Is now the right time?
Back in the ’90s, the Decade of the Brain had insinuated it would solve many of these problems; I don’t think it did. Despite the neuroscience revolution of about a century ago and the work that followed, we still have not been able to solve the mysteries of the brain.
But this decade is somewhat different. I am reading more and more work that has to do with the emergent properties of the brain – not just the properties of neurons. And for the first time since I started my road as a neuroscientist, I am able to ask slightly different questions. I did not think that successful brain-machine interfaces would be something I’d get to see in my lifetime. I was wrong. Even less did I think I would get to see brain-to-brain interfaces. But the work is moving forward there too.
The BRAIN project is not alone. In Europe the Human Brain Project received similar attention. We all expect that such boosts in funding for multidisciplinary research will go a long way in making things move forward.
It is inevitable to draw parallels between the approach of these Big Brain projects and that of the National Science Challenges – parallels that are wonderfully expressed by John Pickering here.
I think that Erin McKiernan’s cautionary words about the BRAIN project might be quite appropriate for both:
Investing in neuroscience is a great idea, but this is not a general boost in funding for neuroscience research. This is concentrating funds on one project, putting many eggs in one basket.
Brain Research through Advancing Innovative Neurotechnologies.
 Alivisatos, A. P., Chun, M., Church, G. M., Greenspan, R. J., Roukes, M. L., & Yuste, R. (2012). The Brain Activity Map Project and the Challenge of Functional Connectomics. Neuron, 74(6), 970–974. doi:10.1016/j.neuron.2012.06.006