“Successful human-to-human brain interface” screamed the headlines – and so there I was clicking my way around the internet to read about it.
Those who know me also know that this is the kind of stuff that makes me tick, ever since I learned about the pioneering work of Miguel Nicolelis. A bit over a decade ago I first heard of him, a Brazilian scientist working at Duke University in the department where I spent a short tenure before moving to New Zealand. What I heard at the time was that he was attempting to extract signals from a brain and use them to control a robotic arm. I was quite puzzled by the proposition: I had been trained with the idea that each neuron in the brain is important and responsible for a specific bit of information, so I thought I’d never get to see the idea succeed within my lifetime.
Nicolelis’ paradigm was relatively straightforward. He would record the activity of a small area of the brain while the animal moved its arm, and identify what was going on in the brain during different arm movements. Activity combination A means arm up, combination B arm down, etc. He would then use this code to program a robotic arm so that it moved up when combination A was sent to it, down when combination B was sent, and so on. The third step was to connect the actual live brain to the robotic arm, and have the monkey learn that it had the power to move the arm itself.
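The mapping step of that paradigm can be sketched in a few lines of Python. This is a toy illustration only, with made-up firing rates and hypothetical names; the actual decoders used in these experiments are far more sophisticated, but the principle of matching live activity against patterns recorded during known movements is the same.

```python
# Toy sketch of the decoding idea: record activity patterns during known
# movements, then map new activity to the closest known pattern and emit
# the corresponding arm command. All numbers and names are made up.

import math

# Step 1: "recorded" firing rates of a handful of neurons, averaged over
# trials of each known arm movement.
templates = {
    "arm_up":   [0.9, 0.1, 0.4],
    "arm_down": [0.2, 0.8, 0.5],
}

def decode(activity):
    """Return the movement whose template is nearest the observed activity."""
    return min(templates, key=lambda move: math.dist(activity, templates[move]))

# Steps 2-3: feed live activity through the decoder to drive the robotic arm.
print(decode([0.85, 0.15, 0.45]))  # a pattern close to "arm_up"
```

The point of the sketch is that the decoder never needs to know what any single neuron "means"; it only needs the population pattern to be reproducible.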
What puzzled me at the time (and the reason I thought his experiment couldn’t work) was that he was going to attempt this by recording the activity from what I could best describe as only a handful of neurons, and with rather limited control over the choice of those neurons. I figured this was not going to give him enough (or even the right) information to guide the movement of the robotic arm. But I was still really attracted to the idea. Not only did I love his deliberate imagination and how he was thinking outside the box, but also, if he was successful, it would mean I’d have to start thinking about how the brain works in a completely different way.
It was not long before word came out that he had done it. He had managed to extract enough code from the brain activity during arm movements to program the robotic arm, and soon enough he had the monkey controlling the arm directly. And then something even more interesting (at least to me) happened – the monkey learned that he could move the robotic arm without having to move his own arm. In other words, the monkey had ‘mapped’ the robotic arm into his brain as if it were his own. And that meant it was time to revisit how I thought brains worked.
I followed his work, and then in 2010 got a chance to have a chat with him at SciFoo. It was there that he told me how he was doing similar experiments but playing with avatars instead of real-life robotic arms, how he saw this technology being used to build exoskeletons to provide mobility to paralyzed patients, and how he thought he was close to getting a brain to brain interface in rats.
A brain to brain interface?
Well, if the first set of experiments had challenged my thinking I was up for a new intellectual journey. Although by now I had learned my lesson.
I finally got to see the published results of these experiments earlier this year. Again, the proposition was straightforward: have a rat learn a task in one room, collect the code, send that information to a second rat elsewhere, and see if the second rat has captured the learning. You can read more about this experiment from Mo Costandi here.
So when I heard the news about human to human brain interfaces, I inevitably got excited.
The paradigm of this preliminary study (which has not been published in a peer-reviewed journal) is simple. One person plays a video game, imagining he pushes a firing button at the right time, while a second person elsewhere actually pushes the firing button for the game. The activity from the brain of the first person (this time recorded from the scalp surface) is transmitted to the brain of the second person through a magnetic coil (a device that is becoming commonly used to stimulate or inhibit specific parts of the brain).
But is this really a brain to brain interface?
Although the brain code of the first subject ‘imagining’ moving the finger was extracted (much like the Nicolelis group did back a decade ago), there is nothing about that code that is ‘decoded’ by the subject pressing the button. That magnetic coils can be used to elicit movement is not new. What part of the body moves depends on where on top of the head the coil is placed, and on the type of zapping that is sent through the coil. So reading their description of the experiment, it seems that the signal being sent is a turn-on/off command to the coil, not a motor code in itself. The second subject does not seem to need to decode that signal; rather, he is responding to a specific stimulation (not too unlike the kicking we do when someone tests our knee-jerk reflex, or closing our eyelids when someone shines a bright light at our eyes).
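The distinction being drawn here can be made concrete with a small sketch. The threshold and function names below are hypothetical, not from the study’s methods; the point is only that the transmitted signal reduces to a single on/off bit for the coil, so nothing on the receiving side needs to be decoded.

```python
# Sketch of the on/off-trigger reading of the experiment described above.
# TRIGGER_THRESHOLD and all names are assumptions for illustration only.

TRIGGER_THRESHOLD = 0.7  # assumed threshold on the sender's EEG feature

def sender_side(eeg_feature):
    """Reduce the sender's imagined-movement signal to one boolean:
    fire the coil, or don't. All richness in the EEG code is discarded."""
    return eeg_feature > TRIGGER_THRESHOLD

def receiver_side(coil_fired):
    """The receiver's finger moves iff the coil fires; no decoding occurs."""
    return "finger moves" if coil_fired else "no movement"

print(receiver_side(sender_side(0.9)))  # "finger moves"
```

Contrast this with the Nicolelis paradigm, where the receiving device had to interpret a multi-dimensional activity pattern; here the channel carries exactly one bit.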
I am also uncertain how much the second subject knows about the experiment, and I can’t help but wonder how much of the movement is self-generated in response to the firing of the coil. Any awake participant whose finger is placed on top of a keyboard key, and who has a piece of metal on their head, wouldn’t take too long to figure out how the experiment is meant to run.
Which brings me back to the title of this post.
There is nothing wrong with sharing the group’s progress. In fact I think it is great, and I wish more of us were doing this. But I am less clear about what is so novel, and what it contributes to our understanding of how the brain works, to justify the hype.
This is a missed opportunity. There is value in their press release: here is a group that is sharing preliminary data in a very open way. This in itself is the news, because this is good for science. This should have been the hype.
Did you know?
- In 1978 a machine to brain interface (says Wikipedia) was successfully tested in a blind patient. Apparently progress was hindered by the patient needing to be connected to a large mainframe computer
- By 2006 a patient was able to operate a computer mouse and prosthetic hand using a brain machine interface that recorded brain activity using electrodes placed inside the brain. Watch the video.
- In 2009 using brain activity recorded from surface scalp electrodes to control a computer text editor, a scientist was able to send a tweet
- Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., … Nicolelis, M. A. L. (2003). Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biol, 1(2), e42. doi:10.1371/journal.pbio.0000042
- Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M. A. L. (2013). A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information. Scientific Reports, 3. doi:10.1038/srep01319
- O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. L. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228–231. doi:10.1038/nature10489
New Zealand has its first Open Access Policy thanks to Lincoln University. We have been lagging behind in the OA landscape when it comes to tertiary institutions, and Lincoln’s position is a great step.
From their website:
Lincoln University takes the position that if public funding has supported the creation of research or other content then it’s reasonable to make it publicly accessible. So our new Open Access Policy endorses making this content openly and freely available as the preferred option.
That the public should have access to the outputs of the work they fund through their taxes has been a compelling argument around other international policies. A similar position statement was made in the Tasman Declaration. New Zealand’s NZGOAL, released in 2010, provides a similar framework for State Service Agencies, but tertiary institutions are not included in the framework despite receiving substantial public funding in several forms. It has then been up to individual universities to decide whether the principles of NZGOAL are adopted. Lincoln University has taken a leadership role for the tertiary sector, and I am hopeful that other NZ institutions will follow their lead.
I have often been asked where the funds to pay for Open Access publishing will come from, at least in relation to the publication of research articles. What we sometimes seem to forget is that we are already paying these costs through the portion of the overheads of our grants that goes towards library costs for access and re-use of copyrighted material. In many instances, too, the charges for publication of, say, a colour figure can equal or exceed what it would cost to publish the same article in an Open Access journal. The maths just doesn’t work for me.
What we also sometimes seem to forget is that most publishers will allow the posting of the peer-reviewed version of the author’s manuscript in their institutional repository. Why researchers aren’t doing this more widely is not very clear.
And here is where Lincoln strikes a nice balance: posting in the institutional repository (aka Green Open Access) comes at no extra financial cost to the individual researcher. It will be interesting to see how the policy is implemented at Lincoln.
But is it enough?
It is a great start.
One of the issues with the Open Access discussion is that the question of copyright (and the resulting licence to reuse) does not always feature prominently in the conversation. I (personally) consider that fronting the fee with a journal to make a paper open access, when I still need to transfer the copyright to the journal, is a waste of money. There is not much added value between the version of the manuscript that I can place in the repository and the final journal version (other than perhaps aesthetics). I am happy, however, to pay an OA fee when this comes attached to a Creative Commons licence that allows reuse, including commercial re-use, because that is where the true value of Open Access is. Lincoln University takes a good step by encouraging the use of Creative Commons licences – but in their absence the articles should still be made free to view through the institutional repository.
How is NZ doing in OA?
The articles deposited in institutional repositories in New Zealand can be found through nzresearch.org.nz. Today’s search returned 14,273 journal articles. It is unfortunate that the great majority of them (13,986) are “all rights reserved” and only 232 allow commercial reuse. If we really want to benefit from our research to drive innovation, then we should be doing better.
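To put those repository numbers in perspective, a quick back-of-the-envelope calculation using the figures quoted above:

```python
# Shares of the nzresearch.org.nz search results by licence, using the
# counts quoted in the text (search date as above).
total = 14273
all_rights_reserved = 13986
commercial_reuse = 232

print(f"all rights reserved: {100 * all_rights_reserved / total:.1f}%")
print(f"commercial reuse:    {100 * commercial_reuse / total:.1f}%")
```

Roughly 98% of deposited articles are locked up, and under 2% permit the commercial reuse that would let research drive innovation.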
So where to next?
Lincoln University has taken a great first step, and hopefully the other NZ research institutions will follow. I am also hoping we will start to see a similar move from NZ funding agencies encouraging researchers to adopt the principles of NZGOAL or to place Open Access mandates on their funded research.
Perhaps next time a funding body or organisation asks you to donate money for their research to help cure a condition, you might ask them if they have an Open Access policy.
A few days ago I got an email from a colleague pointing me to a video about birds of paradise. I am happy I went and looked at it, because it is quite amazing. There is no question why this group of birds stands apart from others – they are not only beautiful to watch, but their behaviour, too, is quite amazing. Watch:
There are other birds that I find absolutely amazing. The lyrebird, for example, incorporates into its song sounds that it hears as it goes about life. There are two types of song-learning birds (songbirds). Some will learn to imitate a song from an adult tutor as they are growing up, and pretty much sing that song as adults. Others can continue to incorporate elements into their song as adults. The lyrebird falls into this latter group. But what I find amazing about the lyrebird is not that it incorporates new song elements, but that some of those sounds are not “natural” sounds. Watch:
Another amazing bird is the New Caledonian crow. A while back Gavin Hunt (now at the University of Auckland) came to find out that these birds were able to manufacture tools in the wild. They modify leaves and twigs from local plants to make different types of tools which they then use to get food. This finding spurred a large body of work on bird intelligence. Watch:
And if you are interested in where these wonderful animals came from, there is a fantastic blog by Ed Yong over at National Geographic. Read:
2012 was a really interesting year for Open Research.
The year started with a boycott of Elsevier (The Cost of Knowledge), soon followed in May by a petition at We The People in the US, asking the US government to “Require free access over the Internet to scientific journal articles arising from taxpayer-funded research.” By June we had The Royal Society publishing a paper on “science as an open enterprise” [pdf] saying:
The opportunities of intelligently open research data are exemplified in a number of areas of science. With these experiences as a guide, this report argues that it is timely to accelerate and coordinate change, but in ways that are adapted to the diversity of the scientific enterprise and the interests of: scientists, their institutions, those that fund, publish and use their work and the public.
The Finch report had a large share of media coverage [pdf]:
Our key conclusion, therefore, is that a clear policy direction should be set to support the publication of research results in open access or hybrid journals funded by APCs. A clear policy direction of that kind from Government, the Funding Councils and the Research Councils would have a major effect in stimulating, guiding and accelerating the shift to open access.
By July the UK government announced the support for the Open Access recommendations from the Finch Report to ensure:
Walk-in rights for the general public, so they can have free access to global research publications owned by members of the UK Publishers’ Association, via public libraries. [and] Extending the licensing of access enjoyed by universities to high technology businesses for a modest charge.
The Research Councils UK joined in by publishing a policy on OA (recently updated) that required [pdf]:
Where the RCUK OA block grant is used to pay Article Processing Charges for a paper, the paper must be made Open Access immediately at the time of online publication, using the Creative Commons Attribution (CC BY) licence.
By the time that Open Access Week came around, there was plenty to discuss. The discussion of Open Access emphasised more strongly the re-use licences under which the work was published. The discussion also included some previous analysis showing that there are benefits from publishing in Open Access that affect economies:
adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in the Netherlands and EUR 480 million in the UK.
And in November, the New Zealand Open Source Awards recognised Open Science for the first time, too.
2013 promises not to fall behind
This year offers good opportunities to celebrate local and international advocates of Open Science.
The Obama administration not only responded to last year’s petition by issuing a memorandum geared towards making Federally funded research adopt open access policies, but is now also seeking “Outstanding Open Science Champions of Change”. Nominations for this close on May 14, 2013. Simultaneously, the Public Library of Science, Google and the Wellcome Trust, together with a number of allies, are sponsoring the “Accelerating Science Award Program”, which seeks to recognise and reward individuals, groups or projects that have used Open Access scientific works in innovative ways. The deadline for this award is June 15.
Last year Peter Griffin wrote:
The policy shift in the UK will open up access to the work of New Zealand scientists by default as New Zealanders are regularly co-authors on papers paid for by UK Research Councils funds. But hopefully it will also lead to some introspection about our own open access policies here.
There was some reflection at the NZAU Open Research Conference, which led to the Tasman Declaration (which I encourage you to sign), and those of us who were involved are hoping good things will come out of it. While that work continues, I will be revisiting the nominations in last year’s Open Science category for the NZ Open Source Awards to make my nominations for the two awards mentioned above.
I certainly look forward to this year – I will continue to work closely with Creative Commons Aotearoa New Zealand and with NZ AU Open Research to make things happen, and continue to put my 2 cents as an Academic Editor for PLOS ONE and PeerJ.
There is no question that the voice of Open Access is now loud and clear – and over the last year it has also become a voice that is not only being heard, but that is also generating the kinds of responses that will lead to real change.
When a President announces a scientific project as publicly as President Obama did, the world listens. The US is planning to put significant resources behind a huge effort to try to map the brain. There has been a lot said about this BRAIN project, and I have been quietly reading, trying to make sense of the disparate reactions that this ‘launch’ had – and trying to escape the hype.
I can understand the appeal – the brain is a fascinating invention of nature. I fell in love with its mysteries as an undergraduate in Argentina and I continue to be fascinated by every new finding. What fascinates me about the discipline is that, unlike trying to understand the kidney, for example, neuroscience consists of the brain trying to understand itself. That we can even ask the right questions, let alone design and perform the experiments to answer them, is what gets me out of bed in the morning.
Trying to understand the brain is definitely not a 21st Century thing. For centuries we have been asking what makes animals behave the way they do. And yet we still don’t really know what it is about our brains that makes us the only species able to ask the right questions, and to design and perform the experiments to answer them.
Many of us neuroscientists might agree that how we think about the brain came about from two major sets of findings. Towards the end of the 19th Century it finally became accepted that the brain, like other parts of the body, was made up of cells. It was Santiago Ramon y Cajal’s tireless work (with the invaluable assistance of his brother Pedro) that was fundamental in this shift. This meant that we could apply the knowledge of cell biology to the brain. The second game changer was the demonstration that neurons could actively produce electric signals. In doing so, Hodgkin and Huxley beautifully put to rest the old argument between Volta and Galvani. This meant we had a grip on how information was coded in the brain.
From this pioneering work, neuroscience evolved directing most of its attention to the neurons and their electrical activity. After all, that is where the key to understanding the brain was supposed to be found. Most of what happened over the twentieth century was based on this premise. Neurons are units that integrate inputs and put together an adequate output passing the information to another neuron or set of neurons down the line until you get to the end. In a way, this view of the brain is not too different from a wiring diagram of an electronic circuit.
Trying to understand the wiring of the brain, however, is not easy. There are thousands and thousands of neurons, each with a multitude of inputs and outputs. You can quickly run out of ink trying to draw the wiring diagram. It is because of this complexity that neuroscientists (just like scientists in many other disciplines) turn to simpler models. We have come to know some secrets about learning from studying the sea slug Aplysia, about how the brain gets put together from flies and frogs, and even about how neurons are born in adult brains from singing canaries. What all these models have in common is that we can tie pretty well a very specific aspect of brain function to a circuit we can define rather well. And we have learned, and keep learning, heaps from these models. The main thing we learn (and the reason why these models continue to be so useful and fundamental for progress) is that the ‘basics’ of brains are quite universal – and once we know those basics well, it is a lot easier to work out the specifics in more complex brains.
Trying to understand the architecture of circuits has proven to be of major value (and this is what the connectome is about). But building the connections is not just about drawing the wires – you need to build in some variability – some connections excite while others inhibit, some neurons respond in predictable linear ways, others don’t. And when you are done with that, you will still need to start thinking about the stuff we have not spent a lot of time thinking about: those other cells (glia) and the stuff that exists in between cells (the extracellular matrix). More and more, we are being reminded that glia and extracellular matrix do more than just be there to support the neurons.
So it is not surprising to find some skepticism around these large brain projects. Over at Scientific American, John Horgan raises some valid criticisms about how realistic the ambitions of these projects are given the current state of neuroscience (read him here and here). Other lines of skepticism center around the involvement of DARPA in the BRAIN project (read Peter Freed’s views on that here or Luke Dittrich’s views here). Others criticize the lack of a clear roadmap (read Erin McKiernan’s views here). Others have expressed concerns that placing too-strong expectations on advancing our knowledge of the human brain will overlook the importance of exploring simpler circuits, something that had been stated clearly in the original proposal.
Is now the right time?
Back in the ‘90s, the Decade of the Brain had insinuated it would solve many of these problems; I don’t think it did. Despite the neuroscience revolution of about a century ago and the work that followed, we still have not been able to solve the mysteries of the brain.
But this decade is somewhat different. I am reading more and more about the emergent properties of the brain – not just the properties of the neurons. And for the first time since I started my road as a neuroscientist, I am able to ask slightly different questions. I did not think that successful brain machine interfaces would be something I’d get to see in my lifetime. And I was wrong. Even less did I think I would get to see brain to brain interfaces. But the work is moving forward there too.
The BRAIN project is not alone. In Europe the Human Brain Project received similar attention. We all expect that such boosts in funding for multidisciplinary research will go a long way in making things move forward.
It is inevitable to think of the parallels of the approach to these Big Brain projects and the National Science Challenges – which are wonderfully expressed by John Pickering here.
I think that Erin McKiernan’s cautionary words about the BRAIN project might be quite appropriate for both:
Investing in neuroscience is a great idea, but this is not a general boost in funding for neuroscience research. This is concentrating funds on one project, putting many eggs in one basket.
Brain Research through Advancing Innovative Neurotechnologies
 Alivisatos, A. P., Chun, M., Church, G. M., Greenspan, R. J., Roukes, M. L., & Yuste, R. (2012). The Brain Activity Map Project and the Challenge of Functional Connectomics. Neuron, 74(6), 970–974. doi:10.1016/j.neuron.2012.06.006