What do brain machine interfaces and Open Science have in common?
They are two examples of concepts that I never thought I would get to see materialised in my lifetime. I was wrong.
I had heard of the idea of Open Access as the Public Library of Science was about to launch (or was in its early infancy). It was around that time that I moved to New Zealand, where I was not able to go to conferences as frequently as I had in the USA, and could not afford an internet connection at home. Email communication (especially when limited to work hours) does not promote the same kind of chitter-chatter you might have while you wait in the queue for your coffee – and so my work moved along, somewhat oblivious to what would become a big focus for me later on: Open Science.
About 6 years after moving to New Zealand things changed. Over a coffee with Nat Torkington, I became aware of some examples of people working in science who were embracing a more open attitude. This conversation had a big impact on me. Someone whom I had never met before described to me a whole different way of doing science. This resonated (strongly) because what he described were the ideals I had at the start of my journey; ideals that had been slowly eroded by the demands of the system around me. By 2009 I had found a strong group of people internationally who were working to make this happen, and who inspired me to try to do something locally. And the rest is history.
What resonated with me about “Open Science” is the notion that knowledge is not ours to keep – that it belongs in the public domain where it can be a driver for change. I went to a fee-free university and we fought hard to keep it that way. Knowledge was a right and sharing knowledge was our duty. I moved along my career in parallel with shrinking funding pots and a trend towards academic commodification. The publish-or-perish mentality, the fear of being back-stabbed if one shares too early or too often, the idea of the research article placed in the “well-branded” journal, and the “paper” as a measure of one’s worth as a scientist all conspire to keep us from exploring open collaborative spaces. The world I walked into around 2009 was seeking to do away with all this nonsense. I have tried to listen and learn as much as I can; sometimes I have even dared to put in my 2 cents or ask questions.
How to make it happen?
The biggest hurdle I have found is that I don’t do my work in isolation. As much as I might want to embrace Open Science, when the work is collaborative I am not the one who makes the final call. In a country as small as New Zealand it is difficult to find critical mass at the intersection of my research interests (and knowledge) and the desire to work in the open space. If you want to collaborate with the best, you may not be able to be picky about a shared ethos. This is particularly true for those struggling to build a career and get a permanent position: for them, the advice of those at the hiring table will always sound louder.
The reward system seems at times to be stuck in a place where incentives are (at all levels) stacked against Open Science; “rewards” are distributed at the “researcher” level. Open Research is about a solution to a problem, not about someone’s career advancement (although that should come as a side-effect). It is not surprising, then, how little value is placed on whether one’s science can be replicated or re-used. Once the paper is out and the bean drops in the jar, our work is done. I doubt that staffing committees or those evaluating us will even care to pull those research outputs and read them to assess their value – if they did, we would not need things like Impact Factors, the h-index and the rest. And here is the irony – we struggle to brand our papers to satisfy a rewards system that will never look beyond their titles. At the same time, those who care about the content and want to reuse it are limited by whichever restrictions we chose to impose at the time of publishing.
So what do we do?
I think we need to be sensitive to the struggle of those who might want to embrace Open Science but are trying to negotiate the assessment requirements of their careers. Perhaps getting more people who embrace these principles onto university staffing and research committees might at least provide the opportunity to ask the right questions about “value”, and at the right time. If we can get more open-minded stances at the hiring level, this will go far in changing people’s attitudes at the bench.
I, for one, find myself in a relatively good position. My continuation was approved a few weeks ago, so I won’t need to face the staffing committee except for promotion. A change in title might be nice – but it is not a deal-breaker, like tenure. I have tried to open my workflow in the past, learned enough from the experience, and will keep trying until I get it right. I am slowly seeing the shift in my colleagues’ attitudes – less rolling of eyes, a bit more curiosity. For now, let’s call that progress.
Since 2009 I have come to meet in person many of those who inspired me through online discussions, and they have always provided useful advice but, more importantly, support. Turning my workflow “Open” has been as hard as I anticipated. I have failed more than I have succeeded, but I have always learned something from the experience. And one question keeps me going:
What did the public give you the money for?
“Successful human-to-human brain interface” screamed the headlines – and so there I was clicking my way around the internet to read about it.
Those who know me also know that this is the kind of stuff that makes me tick, ever since I learned about the pioneering work of Miguel Nicolelis. A bit over a decade ago I first heard of him: a Brazilian scientist working at Duke University, in the department where I spent a short tenure before moving to New Zealand. What I heard at the time was that he was attempting to extract signals from a brain and use them to control a robotic arm. I was quite puzzled by the proposition; I had been trained with the idea that each neuron in the brain is important and responsible for taking care of a specific bit of information, so I thought I would never get to see the idea succeed within my lifetime.
Nicolelis’ paradigm was relatively straightforward. He would record the activity of a small area of the brain while the animal moved its arm, and identify what was going on in the brain during different arm movements. Activity combination A means arm up, combination B arm down, etc. He would then use this code to program a robotic arm so that it moved up when combination A was sent to it, down when combination B was sent, and so on. The third step was to connect the actual live brain to the robotic arm, and have the monkey learn that it had the power to move the arm itself.
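To make the “combination A means arm up” idea concrete, here is a deliberately toy sketch (this is my illustration, not Nicolelis’ actual decoding method, and the patterns and command names are made up): calibration pairs activity patterns with movements, and live activity is matched to the nearest calibrated pattern.

```python
# Toy decoder sketch: map calibrated brain-activity patterns to arm
# commands, then classify live activity by nearest calibrated pattern.
# All patterns and command names here are hypothetical.

codebook = {
    (1, 0, 1): "arm_up",    # "combination A"
    (0, 1, 0): "arm_down",  # "combination B"
    (1, 1, 0): "arm_left",
}

def decode(activity):
    """Return the command whose calibrated pattern is closest to the
    live activity (Hamming-nearest neighbour over the codebook)."""
    def distance(pattern):
        return sum(a != b for a, b in zip(pattern, activity))
    best = min(codebook, key=distance)
    return codebook[best]

# A noisy recording closest to combination B still drives the arm down.
print(decode((0, 1, 1)))  # -> arm_down
```

The real experiments, of course, fit continuous models to the firing of many neurons at once; the point of the sketch is only the calibrate-then-decode loop.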
What puzzled me at the time (and the reason I thought his experiment couldn’t work) was that he was going to attempt to do this by recording the activity of what I could best describe as only a handful of neurons, and with rather limited control over the choice of those neurons. I figured this was not going to give him enough (or even the right) information to guide the movement of the robotic arm. But I was still really attracted to the idea. Not only did I love his deliberate imagination and how he was thinking outside the box, but also, if he was successful, it would mean I’d have to start thinking about how the brain works in a completely different way.
It was not long before word came out that he had done it. He had managed to extract enough code from the brain activity that was going on during arm movements to program the robotic arm, and soon enough he had the monkey control the arm directly. And then something even more interesting (at least to me) happened – the monkey learned that it could move the robotic arm without having to move its own arm. In other words, the monkey had ‘mapped’ the robotic arm into its brain as if it were its own. And that meant it was time to revisit how I thought brains worked.
I followed his work, and then in 2010 got a chance to chat with him at SciFoo. It was there that he told me how he was doing similar experiments playing with avatars instead of real-life robotic arms, how he saw this technology being used to build exoskeletons to provide mobility to paralyzed patients, and how he thought he was close to getting a brain-to-brain interface in rats.
A brain-to-brain interface?
Well, if the first set of experiments had challenged my thinking I was up for a new intellectual journey. Although by now I had learned my lesson.
I finally got to see the published results of these experiments earlier this year. Again, the proposition was straightforward: have a rat learn a task in one room, collect the code and send that information to a second rat elsewhere, and see if the second rat had been able to capture the learning. You can read more about this experiment from Mo Costandi here.
So when I heard the news about human to human brain interfaces, I inevitably got excited.
The paradigm of this preliminary study (which has not been published in a peer-reviewed journal) is simple. One person plays a video game, imagining that he pushes a firing button at the right time, while a second person elsewhere actually needs to push the firing button for the game. The activity from the brain of the first person (this time recorded from the scalp surface) is transmitted to the brain of the second person through a magnetic coil (a device that is becoming commonly used to stimulate or inhibit specific parts of the brain).
But is this really a brain-to-brain interface?
Although the brain code of the first subject ‘imagining’ moving the finger was extracted (much like the Nicolelis group did a decade ago), there is nothing about that code that is ‘decoded’ by the subject pressing the button. That magnetic coils can be used to elicit movement is not new. Which part of the body moves depends on where on top of the head the coil is placed, and on the type of zapping that is sent through the coil. Reading their description of the experiment, it seems that the signal being sent is an on/off trigger to the coil, not a motor code in itself. The second subject does not seem to need to decode that signal – rather, he is responding to a specific stimulation (not unlike the kick we give when someone tests our knee-jerk reflex, or the closing of our eyelids when someone shines a bright light at our eyes).
I am also uncertain how much the second subject knows about the experiment, and I can’t help but wonder how much of the movement is self-generated in response to the firing of the coil. Any awake participant whose finger is placed on top of a keyboard key, and who has a piece of metal on their head, wouldn’t take too long to figure out how the experiment is meant to run.
Which brings me back to the title of this post.
There is nothing wrong with sharing the group’s progress. In fact I think it is great, and I wish more of us were doing this. But I am less clear about what is so novel here, and what it contributes to our understanding of how the brain works, to justify the hype.
This is a missed opportunity. There is value in their press release: here is a group that is sharing preliminary data in a very open way. This in itself is the news, because this is good for science. This should have been the hype.
Did you know?
- In 1978 a machine-to-brain interface (says Wikipedia) was successfully tested in a blind patient. Apparently progress was hindered by the patient needing to be connected to a large mainframe computer.
- By 2006 a patient was able to operate a computer mouse and a prosthetic hand using a brain–machine interface that recorded brain activity using electrodes placed inside the brain. Watch the video.
- In 2009, using brain activity recorded from surface scalp electrodes to control a computer text editor, a scientist was able to send a tweet.
- Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., … Nicolelis, M. A. L. (2003). Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biol, 1(2), e42. doi:10.1371/journal.pbio.0000042
- Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M. A. L. (2013). A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information. Scientific Reports, 3. doi:10.1038/srep01319
- O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. L. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228–231. doi:10.1038/nature10489
…to file my Annual Performance Review.
Nothing makes me shiver as much as the Dean’s email reminding us that it is time to file our Annual Performance Reviews (APRs). This year, ‘shivering’ does not begin to express the feeling I got upon receiving that email.
What have I achieved this year? ‘Nothing’ was the first thing that came to mind. This was followed by a profound state of panic!
But wait, there is more….
This has probably been the most difficult year of my entire life. Those who know me will also know that I have had really difficult years, so this is not a light statement. It has been filled with personal and professional crises, nights with no sleep, anxiety, and the health issues that come with all that. So, back to my APR – nothing. (This does not help my sleep issues.)
Or so I thought until I realised I was looking for ‘measures of performance’ in the wrong places. So yes, the papers are still being written and haven’t been submitted, I haven’t attended any ‘scientific meeting’, I haven’t received any new grants. I could go on.
But the crises did not just ‘happen’. Mine came about because this has been a year in which my way of thinking and doing things has been challenged to its roots. Deep, deep roots. So perhaps my lack of a sense of achievement comes from looking in the wrong places.
Sure, I didn’t go to any ‘scientific meetings’. But here is where I did go: Science Online 2010, the Linux Conference, KiwiFoo, SciFoo, the Data Matters workshop, the eResearch conference. I also became an Academic Editor for PLoS ONE and became more engaged with the discussions about science on social networks like Twitter and FriendFeed. And I have to say, I learned more about ‘Science’ this year than in my entire career. And I was reminded not just of why I got into science in the first place, but also of what kind of scientist I wanted to become.
I also attended a couple of workshops and conferences on innovative teaching, completed my first year of a degree in education, became involved with WikiEducator, and was reminded not only of why I got into teaching in the first place, but also of what kind of teacher I wanted to become.
I also became engaged with a variety of issues, from Public ACTA and OpenLabour to olpc and Creative Commons. And I was reminded of the kind of citizen I had set out to become.
I guess with great moral crises also comes great change. So I am actually looking forward to next year, when I hope that all the struggle of 2010 will pay off in the form of positive change and positive action.
To all of you out there that gave me the chance to talk to you, who offered your ideas and listened to my ramblings, who helped me organize my thoughts, formulate my goals and provided me with guidance and support, my most sincere Thank You.
As for my APR, it will be hard to fill. Can I just say:
‘This year I learned’?
What does it mean, in science, to be open?
I don’t know.
I wrote a while back that while I endorse the principles of ‘openness‘, I struggle with the issue of ‘how‘. Since then I have been trying to listen and learn. [Or, better said, to shut up and listen.] I started by seeing what hurdles I encountered when trying to work exclusively on Open Source software. I joined the Learning4Content course at WikiEducator. I started looking into platforms that would fit my needs for an open lab notebook. I tried to follow the Open Science Summit. I listened hard at sessions at SciFoo Camp. I went to some New Zealand open data discussions. I became an Academic Editor at PLoS ONE. I joined the panel of Creative Commons Aotearoa New Zealand.
And after several months of ‘listening’, the one thing that keeps popping into my head is:
kubke, you ain’t gonna figure it out by yourself.
The loudest message that I heard is, perhaps, that there is not a single, simple, one-size-fits-all answer, and that it just may come down to fumbling through until we figure it out.
So, I decided to fumble.
I am taking on summer students this summer to work on a project that I will try to make as ‘open’ as possible.
I am leaning towards a few things:
- I am pretty sure I want to give Mahara a go as a platform for the day-to-day ‘lab’ stuff.
- I am pretty sure I want to regularly put as much as I can into my space in OpenWetWare.
- I am pretty sure I want to try to shift my imaging to Open Source software (e.g., OsiriX, ImageJ, CellProfiler)
- I am pretty sure I want to put the work out there as it is being gathered.
What I am not so sure about is how this will work. It will be a steep learning curve, but one thing that I am hoping is that by giving it a go I may begin to get the answers.
And hopefully some of the smart people out there might give me a hand and help me steer the boat in the right direction.
I have always been fascinated by the series of studies in electrophysiology that led to our current understanding of how electrical signalling takes place in neurons. And no collection of classical electrophysiology is complete without the 1952 article by AL Hodgkin and AF Huxley on the sodium and potassium currents in the giant axon of the squid.
Saying that Hodgkin and Huxley were brilliant minds would be an understatement. But I was always fascinated by the following phrase in this paper:
‘These results support the view that depolarization leads to a rapid increase in permeability which allows sodium ions to move in either direction through the membrane.’
The reason it fascinates me is that this phrase would not look out of place in any modern neurophysiology textbook. But the state of knowledge at the time about how cell membranes were organised was quite different from today’s. Back then, cell membranes were thought to be formed by a layer of lipids ‘sandwiched’ between two layers of proteins. That meant that for ions to move in and out of the cell they would have to break through the protein layers and move through the non-aqueous fatty-acid layer (something that would be thermodynamically hard for ions to do). Either that, or something had to ‘open up’ in the membrane to create an aqueous path for the ions to move through.
The idea of pores was not foreign to cell biologists at the time, but the demands of Hodgkin and Huxley’s model of ionic movement in neurons could not be easily reconciled with the (then) current model of the cell membrane structure. Hodgkin and Huxley knew ions had to move rapidly and selectively and that the properties of the membrane changed dynamically for this to happen.
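Those demands – rapid, selective, dynamically changing ionic movement – are captured in the central equation of the 1952 paper, which writes the total membrane current as a capacitive term plus sodium, potassium and leak currents, each gated by voltage- and time-dependent variables ($m$, $h$ and $n$):

```latex
I = C_M \frac{dV}{dt}
  + \bar{g}_{\mathrm{Na}}\, m^3 h \,(V - V_{\mathrm{Na}})
  + \bar{g}_{\mathrm{K}}\, n^4 \,(V - V_{\mathrm{K}})
  + \bar{g}_{l}\,(V - V_{l})
```

The gating variables were a purely empirical description of how the conductances rose and fell; the model said nothing about what, physically, in the membrane was doing the gating – which is exactly the tension with the membrane models of the day.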
In 1972 Singer and Nicolson published a classic model of the cell membrane. In it they proposed that, rather than ‘sandwiching’ the lipids, proteins are found in membranes in two forms: as partially embedded proteins, or as intrinsic proteins that traverse the entire cell membrane. It would not take long to see how these intrinsic proteins could form aqueous channels that would allow ions to move from one side of the membrane to the other. That proteins were able to change their shape had already been shown, and so similar mechanisms could be envisioned for the gating of ion channels.
Neurophysiology would never be the same. By 1976 Neher and Sakmann had published their patch-clamp method, which allowed them to record currents from single channels (and later won them the Nobel Prize), and only two years later Bertil Hille had written an extensive review on ion channels.
It has never been clear to me (or my friends) how much thought Hodgkin and Huxley put into the structure of the cell membrane and how their work fit the models of the time. But I like to think that they did, and chose to trust and follow their data, regardless of the conflicts and lack of sleep that may have caused for cell biologists.
- Hodgkin, A. L., & Huxley, A. F. (1952). Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo. The Journal of physiology, 116(4), 449.
- Singer, S. J., & Nicolson, G. L. (1972). The fluid mosaic model of the structure of cell membranes. Science (New York, N.Y.), 175(23), 720-731.
- Neher, E., & Sakmann, B. (1976). Single-channel currents recorded from membrane of denervated frog muscle fibres. Nature, 260(5554), 799-802. doi:10.1038/260799a0
- Hille, B. (1978). Ionic channels in excitable membranes. Current problems and biophysical approaches. Biophysical Journal, 22(2), 283-294. doi:10.1016/S0006-3495(78)85489-7
#SciFoo lightning talk [reloaded]