“Successful human-to-human brain interface” screamed the headlines – and so there I was clicking my way around the internet to read about it.
Those who know me also know that this is the kind of stuff that makes me tick, ever since I learned about the pioneering work of Miguel Nicolelis. A bit over a decade ago I first heard of him, a Brazilian scientist working at Duke University in the Department where I spent a short tenure before moving to New Zealand. What I heard at the time was that he was attempting to extract signals from a brain and use them to control a robotic arm. I was quite puzzled by the proposition; I had been trained with the idea that each neuron in the brain is important and responsible for taking care of a specific bit of information, so I thought I’d never get to see the idea succeed within my lifetime.
Nicolelis’ paradigm was relatively straightforward. He would record the activity of a small area of the brain while the animal moved its arm, and identify what was going on in the brain during different arm movements. Activity combination A means arm up, combination B arm down, etc. He would then use this code to program a robotic arm so that it moved up when combination A was sent to it, down when combination B was sent, and so on. The third step was to connect the actual live brain to the robotic arm, and have the monkey learn that it had the power to move the arm itself.
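The three steps above amount to learning a codebook and then driving the arm through it. A minimal sketch of that idea (the pattern names, commands, and lookup-table form are all hypothetical simplifications; real decoders fit statistical models to many-channel spike recordings):

```python
# Toy sketch of the decode-then-drive paradigm.
# Steps 1-2: a 'codebook' learned by pairing recorded activity
# patterns with observed arm movements (patterns are made up here).
codebook = {
    (1, 0, 1): "arm_up",    # activity combination A
    (0, 1, 1): "arm_down",  # activity combination B
    (1, 1, 0): "arm_left",
}

def drive_robotic_arm(activity_pattern):
    """Step 3: translate a recorded activity pattern into an arm command."""
    return codebook.get(tuple(activity_pattern), "hold")

print(drive_robotic_arm([1, 0, 1]))  # -> arm_up
print(drive_robotic_arm([0, 0, 0]))  # -> hold (unrecognized pattern)
```

The hard part, of course, is not the lookup but building a codebook that generalizes from only a handful of recorded neurons.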
What puzzled me at the time (and the reason I thought his experiment couldn’t work) was that he was going to attempt to do this by recording the activity from what I could best describe as only a handful of neurons, and with rather limited control over the choice of those neurons. I figured this was not going to give him enough (or even the right) information to guide the movement of the robotic arm. But I was still really attracted to the idea. Not only did I love his deliberate imagination and how he was thinking outside the box, but also, if he was successful, it would mean I’d have to start thinking about how the brain works in a completely different way.
It was not long before word came out that he had done it. He had managed to extract enough code from the brain activity that accompanied arm movements to program the robotic arm, and soon enough he had the monkey control the arm directly. And then something even more interesting (at least to me) happened – the monkey learned that he could move the robotic arm without having to move his own arm. In other words, the monkey had ‘mapped’ the robotic arm into his brain as if it were his own. And that meant that it was time to revisit how I thought brains worked.
I followed his work, and then in 2010 got a chance to have a chat with him at SciFoo. It was there that he told me how he was doing similar experiments but playing with avatars instead of real-life robotic arms, how he saw this technology being used to build exoskeletons to provide mobility to paralyzed patients, and how he thought he was close to getting a brain-to-brain interface in rats.
A brain-to-brain interface?
Well, if the first set of experiments had challenged my thinking I was up for a new intellectual journey. Although by now I had learned my lesson.
I finally got to see the published results of these experiments earlier this year. Again, the proposition was straightforward. Have a rat learn a task in one room, collect the code, send that information to a second rat elsewhere, and see if the second rat was able to capture the learning. You can read more about this experiment from Mo Costandi here.
So when I heard the news about human to human brain interfaces, I inevitably got excited.
The paradigm of this preliminary study (which has not been published in a peer-reviewed journal) is simple. One person plays a video game, imagining that he pushes a firing button at the right time, while a second person elsewhere actually pushes the firing button for the game. The activity from the brain of the first person (this time recorded from the scalp surface) is transmitted to the brain of the second person through a magnetic coil (a device that is becoming commonly used to stimulate or inhibit specific parts of the brain).
But is this really a brain-to-brain interface?
Although the brain code of the first subject ‘imagining’ moving the finger was extracted (much like the Nicolelis group did a decade ago), there is nothing about that code that is ‘decoded’ by the subject pressing the button. That magnetic coils can be used to elicit movement is not new. What part of the body moves depends on where on top of the head the coil is placed, and on the type of zapping that is sent through the coil. So reading their description of the experiment, it seems that the signal being sent is an on/off trigger to the coil, not a motor code in itself. The response from the second subject does not seem to require decoding that signal – rather, it is a response to a specific stimulation (not too unlike the kick we give when someone tests our knee-jerk reflex, or the closing of our eyelids when someone shines a bright light at our eyes).
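The point can be made concrete: whatever code exists in the first subject’s EEG, what actually crosses the link is a single yes/no decision. A sketch of that reduction (the feature values and threshold are invented purely for illustration, not taken from the study):

```python
# Whatever the sender's brain is doing, the link carries one bit:
# fire the coil, or don't. The receiver never sees the EEG itself.
THRESHOLD = 0.8  # hypothetical detection threshold on some EEG feature

def transmit(eeg_feature):
    """Reduce the sender's brain activity to a single on/off decision."""
    return eeg_feature > THRESHOLD  # True -> pulse the TMS coil

# Stream of (made-up) EEG feature values over time:
samples = [0.2, 0.5, 0.9, 0.3]
pulses = [transmit(s) for s in samples]
print(pulses)  # [False, False, True, False]
```

Anything richer than this one bit per decision would require the second brain to decode a motor code, which is exactly what does not happen here.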
I am also uncertain of how much the second subject knows about the experiment, and I can’t help but wonder how much of the movement is self-generated in response to the firing of the coil. Any awake participant whose finger is placed on top of a keyboard key and who has a piece of metal on their head wouldn’t take too long to figure out how the experiment is meant to run.
Which brings me back to the title of this post.
There is nothing wrong with sharing the group’s progress. In fact I think it is great, and I wish more of us were doing this. But I am less clear about what is so novel, and what it contributes to our understanding of how the brain works, to justify the hype.
This is a missed opportunity. There is value in their press release: here is a group that is sharing preliminary data in a very open way. This in itself is the news, because this is good for science. This should have been the hype.
Did you know?
- In 1978 a machine-to-brain interface (says Wikipedia) was successfully tested in a blind patient. Apparently progress was hindered by the patient needing to be connected to a large mainframe computer.
- By 2006 a patient was able to operate a computer mouse and prosthetic hand using a brain–machine interface that recorded brain activity using electrodes placed inside the brain. Watch the video.
- In 2009, using brain activity recorded from surface scalp electrodes to control a computer text editor, a scientist was able to send a tweet.
- Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., … Nicolelis, M. A. L. (2003). Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biol, 1(2), e42. doi:10.1371/journal.pbio.0000042
- Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M. A. L. (2013). A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information. Scientific Reports, 3. doi:10.1038/srep01319
- O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. L. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228–231. doi:10.1038/nature10489
When a President announces a scientific project as publicly as President Obama did, the world listens. The US is planning to put significant resources behind a huge effort to try to map the brain. There has been a lot said about this BRAIN project, and I have been quietly reading, trying to make sense of the disparate reactions that this ‘launch’ had – and trying to escape the hype.
I can understand the appeal – the brain is a fascinating invention of nature. I fell in love with its mysteries as an undergraduate in Argentina and I continue to be fascinated by every new finding. What fascinates me about the discipline is that, unlike trying to understand the kidney for example, neuroscience consists of the brain trying to understand itself. That we can even ask the right questions, let alone design and perform the experiments to answer them, is what gets me out of bed in the morning.
Trying to understand the brain is definitely not a 21st Century thing. For centuries we have been asking what makes animals behave the way they do. And yet we still don’t really know what it is about our brains that makes us the only species able to ask the right questions, and to design and perform the experiments to answer them.
Many of us neuroscientists might agree that how we think about the brain came about from two major sets of findings. Towards the end of the 19th Century it finally became accepted that the brain, like other parts of the body, was made up of cells. Santiago Ramon y Cajal’s tireless work (with the invaluable assistance of his brother Pedro) was fundamental in this shift. This meant that we could apply the knowledge of cell biology to the brain. The second game changer was the demonstration that neurons could actively produce electrical signals. In doing so, Hodgkin and Huxley beautifully put to rest the old argument between Volta and Galvani. This meant we had a grip on how information was coded in the brain.
From this pioneering work, neuroscience evolved directing most of its attention to neurons and their electrical activity. After all, that is where the key to understanding the brain was supposed to be found. Most of what happened over the twentieth century was based on this premise. Neurons are units that integrate inputs and put together an adequate output, passing the information to another neuron or set of neurons down the line until you get to the end. In a way, this view of the brain is not too different from the wiring diagram of an electronic circuit.
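The "integrate inputs, produce an output" view above is often cartooned as a leaky integrate-and-fire unit. A minimal sketch (the parameter values are arbitrary, chosen only so the example fires; this is the textbook caricature, not anyone’s actual model of a real neuron):

```python
# Leaky integrate-and-fire: the simplest cartoon of a neuron as an
# input-integrating, output-producing unit.
def simulate(inputs, threshold=1.0, leak=0.9):
    """Accumulate leaky input; emit a spike (1) and reset when threshold is crossed."""
    v = 0.0          # membrane 'voltage' in arbitrary units
    spikes = []
    for x in inputs:
        v = v * leak + x      # integrate the input, with a leak
        if v >= threshold:    # output stage: fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate([0.4, 0.4, 0.4, 0.1, 0.9, 0.3]))  # -> [0, 0, 1, 0, 0, 1]
```

Wire many such units together and you have the circuit-diagram picture of the brain that dominated the twentieth century.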
Trying to understand the wiring of the brain, however, is not easy. There are thousands and thousands of neurons, each with a multitude of inputs and outputs. You can quickly run out of ink trying to draw the wiring diagram. It is because of this complexity that neuroscientists (just like scientists in many other disciplines) turn to simpler models. We have come to know some secrets about learning from studying the sea slug Aplysia, about how the brain gets put together from flies and frogs, and even about how neurons are born in adult brains from singing canaries. What all these models have in common is that we can tie a very specific aspect of brain function pretty well to a circuit we can define rather well. And we have learned, and keep learning, heaps from these models. The main thing we learn (and the reason why these models continue to be so useful and fundamental for progress) is that the ‘basics’ of brains are quite universal – and once we know those basics well, it is a lot easier to work out the specifics in more complex brains.
Trying to understand the architecture of circuits has proven to be of major value (and this is what the connectome is about). But building the connections is not just about drawing the wires – you need to build in some variability: some connections excite while others inhibit, some neurons respond in predictable linear ways, others don’t. And when you are done with that, you will still need to start thinking about the stuff we have not spent a lot of time thinking about: those other cells (glia) and the stuff that exists in between cells (the extracellular matrix). More and more, we are being reminded that glia and the extracellular matrix do more than just sit there supporting the neurons.
So it is not surprising to find some skepticism around these large brain projects. Over at Scientific American, John Horgan raises some valid criticisms about how realistic the ambitions of these projects are given the current state of neuroscience (read him here and here). Other lines of skepticism center around the involvement of DARPA in the BRAIN project (read Peter Freed’s views on that here or Luke Dittrich’s views here). Others criticize the lack of a clear roadmap (read Erin McKiernan’s views here). Still others have expressed concern that placing too much expectation on advancing our knowledge of the human brain will overlook the importance of exploring simpler circuits, something that had been stated clearly in the original proposal.
Is now the right time?
Back in the ’90s the Decade of the Brain had insinuated it would solve many of these problems; I don’t think it did. Despite the neuroscience revolution of about a century ago and the work that followed, we still have not been able to solve the mysteries of the brain.
But this decade is somewhat different. I am reading more and more work that has to do with the emergent properties of the brain – not just the properties of the neurons. And for the first time since I started my road as a neuroscientist I am able to ask slightly different questions. I did not think that successful brain–machine interfaces would be something I’d get to see in my lifetime. And I was wrong. Even less did I think I would get to see brain-to-brain interfaces. But the work is moving forward there too.
The BRAIN project is not alone. In Europe the Human Brain Project received similar attention. We all expect that such boosts in funding for multidisciplinary research will go a long way in making things move forward.
It is inevitable to think of the parallels between the approach of these Big Brain projects and the National Science Challenges – parallels wonderfully expressed by John Pickering here.
I think that Erin McKiernan’s cautionary words about the BRAIN project might be quite appropriate for both:
Investing in neuroscience is a great idea, but this is not a general boost in funding for neuroscience research. This is concentrating funds on one project, putting many eggs in one basket.
Brain Research through Advancing Innovative Neurotechnologies
 Alivisatos, A. P., Chun, M., Church, G. M., Greenspan, R. J., Roukes, M. L., & Yuste, R. (2012). The Brain Activity Map Project and the Challenge of Functional Connectomics. Neuron, 74(6), 970–974. doi:10.1016/j.neuron.2012.06.006
What does it mean, in science, to be open?
I don’t know.
I wrote a while back that while I endorse the principles of ‘openness’, I struggle with the issue of ‘how’. Since then I have been trying to listen and learn. [Or, better said, shut up and listen.] I started trying to see what hurdles I encountered trying to work exclusively on Open Source Software. I joined the Learning4Content course at WikiEducator. I started looking into platforms that would fit my needs as an open lab notebook. I tried to follow the Open Science Summit. I listened hard at sessions at SciFoo Camp. I went to some New Zealand open data discussions. I became an Academic Editor at PLoS ONE. I joined the panel of the Creative Commons Aotearoa New Zealand.
And after several months of ‘listening’ the one thing that keeps popping into my head is:
kubke, you ain’t gonna figure it out by yourself.
The loudest message that I heard is, perhaps, that there is not a single, simple, one-size-fits-all answer, and that it just may come down to fumbling through until we figure it out.
So, I decided to fumble.
I am taking on summer students to work on a project that I will try to make as ‘open’ as possible.
I am leaning towards a few things:
- I am pretty sure I want to give Mahara a go as a platform for the day-to-day ‘lab’ stuff.
- I am pretty sure I want to regularly put as much as I can into my space in OpenWetWare.
- I am pretty sure I want to try to shift my imaging to Open Source Software (e.g., OsiriX, ImageJ, CellProfiler).
- I am pretty sure I want to put the work out there as it is being gathered.
What I am not so sure about is how this will work. It will be a steep learning curve, but one thing that I am hoping is that by giving it a go I may begin to get the answers.
And hopefully some of the smart people out there might give me a hand and help me steer the boat in the right direction.
I have always been fascinated by the series of studies in electrophysiology that led to our current understanding of how electrical signalling takes place in neurons. And no collection of classical electrophysiology is complete without the 1952 article by AL Hodgkin and AF Huxley on the sodium and potassium currents in the giant axon of the squid.
Saying that Hodgkin and Huxley were brilliant minds would be an understatement. But I was always fascinated by the following phrase in this paper:
‘These results support the view that depolarization leads to a rapid increase in permeability which allows sodium ions to move in either direction through the membrane.’
The reason it fascinates me is that this phrase would not look out-of-place in any modern neurophysiology textbook. But the state of knowledge at the time about how cell membranes were organised was quite different to that of today. Back then, cell membranes were thought to be formed by a layer of lipids ‘sandwiched’ between two layers of proteins. That meant that for ions to move across the membrane they would either have to break through the protein layers and cross the non-aqueous fatty acid layer (something that would be thermodynamically hard for ions to do), or something had to ‘open up’ in the membrane to create an aqueous path for the ions to move through.
The idea of pores was not foreign to cell biologists at the time, but the demands of Hodgkin and Huxley’s model of ionic movement in neurons could not be easily reconciled with the (then) current model of the cell membrane structure. Hodgkin and Huxley knew ions had to move rapidly and selectively and that the properties of the membrane changed dynamically for this to happen.
In 1972 Singer and Nicolson published a classic model of the cell membrane. In it they propose that rather than ‘sandwiching’ the lipids, proteins are found in the membranes in two forms: as partially embedded proteins, or as intrinsic proteins that traverse the entirety of the cell membrane. It would not take long to see how these intrinsic proteins could form aqueous channels that would allow ions to move from one side to the other of the membrane. That proteins were able to change their shape had already been shown, and so similar mechanisms could be envisioned for the gating of ion channels.
Neurophysiology would never be the same. By 1976 Neher and Sakmann had published their patch-clamp method, which allowed them to record currents from single channels (and later won them the Nobel Prize), and only two years later Bertil Hille had written an extensive review on ion channels.
It has never been clear to me (or my friends) how much thought Hodgkin and Huxley put into the structure of the cell membrane and how their work fit into the models of the time. But I like to think that they did, and chose to trust and follow their data, regardless of the conflicts and lack of sleep that may have caused for cell biologists.
- Hodgkin, A. L., & Huxley, A. F. (1952). Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo. The Journal of physiology, 116(4), 449.
- Singer, S. J., & Nicolson, G. L. (1972). The fluid mosaic model of the structure of cell membranes. Science (New York, N.Y.), 175(23), 720-731.
- Neher, E., & Sakmann, B. (1976). Single-channel currents recorded from membrane of denervated frog muscle fibres. Nature, 260(5554), 799-802. doi:10.1038/260799a0
- Hille, B. (1978). Ionic channels in excitable membranes. Current problems and biophysical approaches. Biophysical Journal, 22(2), 283-294. doi:10.1016/S0006-3495(78)85489-7
#SciFoo lightning talk [reloaded]
One of the articles we read in my biophysics class was a 1942 article by Curtis and Cole. At the time, those working on the electrical properties of neurons were in agreement that during the action potential the membrane did not simply ‘depolarize’ (i.e., lose its electrical polarization) but rather reversed its potential: during the action potential the inside of the neuron became more positive than the outside.
Researchers were looking at how this happened, and looking for the ions involved in setting up both the resting potential and the action potential.
In 1942 Curtis and Cole reported on an experiment in which they changed the extracellular concentration of potassium and measured the effects this had on resting and action potentials:
What they saw when they measured the amplitude of the action potential was that as they increased the concentration of potassium outside the cell, the amplitude of the action potential was reduced. But they failed to control for what turned out to be an important variable: sodium. They changed the concentration of potassium by exchanging it with sodium, so as potassium went up, sodium went down. Their data could therefore be interpreted in two ways: the amplitude of the action potential decreased as the potassium concentration was increased, or the amplitude of the action potential decreased as sodium was decreased. This may not have been a huge oversight on their part given the state of knowledge of the time, but it turned out to be a big mistake (and one they should have controlled for).
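The reason the confound matters can be seen from the Nernst equation, which gives the equilibrium potential each ion would impose on the membrane: raising extracellular potassium depolarizes the resting potential, while lowering extracellular sodium lowers the peak the action potential can reach. A small sketch, using ballpark squid-axon concentrations purely for illustration:

```python
import math

# Nernst potential for a monovalent cation:
# E = (RT/zF) * ln([out]/[in]); RT/F is about 25.7 mV near 25 C, z = +1.
RT_F = 25.7  # mV

def nernst(out_mM, in_mM):
    """Equilibrium potential in mV for a monovalent cation."""
    return RT_F * math.log(out_mM / in_mM)

K_IN = 400.0  # intracellular [K+] in mM, approximate squid-axon value
for k_out in (20.0, 100.0):
    print(f"[K+]out = {k_out:5.1f} mM -> E_K = {nernst(k_out, K_IN):6.1f} mV")
```

Because Curtis and Cole swapped one ion for the other, a change in either equilibrium potential could have produced the amplitude reduction they measured.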
In 1949 Hodgkin and Katz showed that the ion carrying the current during the action potential was indeed sodium, something that would become known as the sodium hypothesis. Future work by Hodgkin and co-workers would define the mathematical functions that described the electrical properties of neurons, models that continue to be used today.
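Those mathematical functions are, in their standard modern textbook form, the Hodgkin–Huxley membrane equation (shown here for reference, with the usual symbols: maximal conductances, gating variables m, h, n, and reversal potentials):

```latex
% Hodgkin-Huxley membrane equation: the capacitive current balances
% the sodium, potassium and leak currents plus any injected current.
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}})
                    -\bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}})
                    -\bar{g}_{L}\,(V - E_{L}) + I_{\mathrm{ext}}
```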
In 1963 Hodgkin shared the Nobel prize with his collaborator Andrew Huxley and John Eccles. My friends from the biophysics course always wondered how things would have turned out had Curtis and Cole realized the effect of sodium.
- Curtis, HJ and Cole KS (1942) Membrane Resting and Action Potentials from the Squid Giant Axon. Journal of Cellular and Comparative Physiology Vol 19 (2) 135-144
- Hodgkin AL and Katz B (1949) The effect of Sodium Ions on the Electrical Activity of the Giant Axon of the Squid. J. Physiol. 108, 37-77 (PMID: 16991839)