I have always been fascinated by the series of studies in electrophysiology that led to our current understanding of how electrical signalling takes place in neurons. And no collection of classical electrophysiology is complete without the 1952 article by AL Hodgkin and AF Huxley on the sodium and potassium currents in the giant axon of the squid.
Saying that Hodgkin and Huxley were brilliant minds would be an understatement. But I was always fascinated by the following phrase in this paper:
‘These results support the view that depolarization leads to a rapid increase in permeability which allows sodium ions to move in either direction through the membrane.’
The reason it fascinates me is that this phrase would not look out of place in any modern neurophysiology textbook. But the state of knowledge at the time about how cell membranes were organised was quite different from today's. Back then, cell membranes were thought to be formed by a layer of lipids ‘sandwiched’ between two layers of protein. That meant that for ions to move across the membrane they would either have to break through the protein layers and cross the non-aqueous fatty acid core (something that would be thermodynamically very costly for ions), or something had to ‘open up’ in the membrane to create an aqueous path for the ions to move through.
The idea of pores was not foreign to cell biologists at the time, but the demands of Hodgkin and Huxley’s model of ionic movement in neurons could not be easily reconciled with the (then) current model of the cell membrane structure. Hodgkin and Huxley knew ions had to move rapidly and selectively and that the properties of the membrane changed dynamically for this to happen.
In 1972 Singer and Nicolson published their classic model of the cell membrane. In it they proposed that rather than ‘sandwiching’ the lipids, proteins are found in membranes in two forms: as partially embedded proteins, or as intrinsic proteins that traverse the entire cell membrane. It would not take long to see how these intrinsic proteins could form aqueous channels that would allow ions to move from one side of the membrane to the other. That proteins were able to change their shape had already been shown, and so similar mechanisms could be envisioned for the gating of ion channels.
Neurophysiology would never be the same. By 1976 Neher and Sakmann had published their patch-clamp method, which allowed them to record currents from single channels (and later won them the Nobel Prize), and only two years later Bertil Hille had written an extensive review on ion channels.
It has never been clear to me (or my friends) how much thought Hodgkin and Huxley put into the structure of the cell membrane and how their work fit into the models of the time. But I like to think that they did, and that they chose to trust and follow their data, regardless of the conflicts (and lack of sleep) this may have caused for cell biologists.
- Hodgkin, A. L., & Huxley, A. F. (1952). Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo. The Journal of Physiology, 116(4), 449-472.
- Singer, S. J., & Nicolson, G. L. (1972). The fluid mosaic model of the structure of cell membranes. Science, 175(4023), 720-731.
- Neher, E., & Sakmann, B. (1976). Single-channel currents recorded from membrane of denervated frog muscle fibres. Nature, 260(5554), 799-802. doi:10.1038/260799a0
- Hille, B. (1978). Ionic channels in excitable membranes. Current problems and biophysical approaches. Biophysical Journal, 22(2), 283-294. doi:10.1016/S0006-3495(78)85489-7
#SciFoo lightning talk [reloaded]
One of the articles we read in my biophysics class was a 1942 article by Curtis and Cole. At the time, those working on the electrical properties of neurons agreed that during the action potential the membrane did not simply ‘depolarize’ (i.e., lose its electrical polarization) but rather reversed its potential: during the action potential the inside of the neuron became more positive than the outside.
Researchers were trying to work out how this happened, and which ions were involved in setting up both the resting potential and the action potential.
In 1942 Curtis and Cole reported on an experiment in which they changed the extracellular concentration of potassium and measured the effects this had on resting and action potentials:
What they saw when they measured the amplitude of the action potential was that as they increased the concentration of potassium outside the cell, the amplitude of the action potential was reduced. But they failed to control for what turned out to be an important variable: sodium. The way they changed the concentration of potassium was by exchanging it for sodium. Their data could therefore be interpreted in two ways: either the amplitude of the action potential decreased as the potassium concentration was increased, or it decreased as the sodium concentration was decreased. This may not have been a huge oversight on their part given the state of knowledge of the time, but it turned out to be a big mistake (and one they should have controlled for).
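The confound is easy to see with the Nernst equation: the peak of the action potential tracks the sodium equilibrium potential, which depends on the extracellular sodium concentration. A minimal sketch in Python (the concentrations below are illustrative, roughly squid-axon-like values of my choosing, not Curtis and Cole's actual solutions):

```python
import math

def nernst(c_out, c_in, z=1, temp_c=20.0):
    """Nernst equilibrium potential (mV) for an ion of valence z,
    given outside and inside concentrations in the same units."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    T = 273.15 + temp_c
    return 1000.0 * (R * T / (z * F)) * math.log(c_out / c_in)

# Illustrative squid-axon-like sodium concentrations (mM)
e_na_normal = nernst(c_out=440, c_in=50)   # roughly +55 mV
e_na_halved = nernst(c_out=220, c_in=50)   # roughly +37 mV
```

Halving extracellular sodium drops E_Na by about 17 mV, so removing sodium is by itself enough to shrink the spike, independently of whatever the added potassium is doing.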
In 1949 Hodgkin and Katz showed that the ion carrying the current during the action potential was indeed sodium, something that would become known as the sodium hypothesis. Later work by Hodgkin and co-workers would define the mathematical functions that describe the electrical properties of neurons, models that continue to be used today.
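Those functions are the Hodgkin-Huxley equations. As a rough illustration, here is a minimal single-compartment simulation using the standard textbook parameters (shifted so that rest sits at -65 mV); the forward-Euler step size and stimulus current are my own choices for the sketch, not anything from the original paper:

```python
import math

# Classic Hodgkin-Huxley parameters for the squid axon (textbook values,
# modern sign convention with rest near -65 mV)
C_M = 1.0                    # membrane capacitance, uF/cm^2
G_NA, E_NA = 120.0, 50.0     # sodium conductance (mS/cm^2) and reversal (mV)
G_K,  E_K  = 36.0, -77.0     # potassium
G_L,  E_L  = 0.3,  -54.4     # leak

# Voltage-dependent opening/closing rates (1/ms) for the gating variables
def alpha_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * math.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * math.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + math.exp(-(v + 35) / 10))
def alpha_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * math.exp(-(v + 65) / 80)

def simulate(i_stim=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one HH compartment; returns V(t) in mV."""
    v = -65.0
    # start each gate at its steady-state value for the resting potential
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    vs = []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # fast, inactivating inward current
        i_k  = G_K * n**4 * (v - E_K)         # delayed outward current
        i_l  = G_L * (v - E_L)
        v += dt * (i_stim - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        vs.append(v)
    return vs

vs = simulate()
# The peak overshoots 0 mV: the potential reversal Curtis and Cole measured.
```

With a sustained stimulus the model fires spikes whose peaks overshoot zero, exactly the reversal (rather than mere depolarization) that the field had agreed on by 1942.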
In 1963 Hodgkin shared the Nobel Prize with his collaborator Andrew Huxley and with John Eccles. My friends from the biophysics course always wondered how things would have turned out had Curtis and Cole realised the effect of sodium.
- Curtis, H. J., & Cole, K. S. (1942). Membrane resting and action potentials from the squid giant axon. Journal of Cellular and Comparative Physiology, 19(2), 135-144.
- Hodgkin, A. L., & Katz, B. (1949). The effect of sodium ions on the electrical activity of the giant axon of the squid. The Journal of Physiology, 108, 37-77. (PMID: 16991839)
#SciFoo lightning talk [reloaded]
News hit the stands about a new research collaboration to find biological markers for Alzheimer’s disease (read the stories in the New York Times and the Wall Street Journal; HT @atreolar on Twitter). One thing that set this collaboration apart was that the work being done would have researchers
“share all the data, making every single finding public immediately, available to anyone with a computer anywhere in the world.”
The advantages of sharing data were made clear with respect to this project in the article:
“Different people using different methods on different subjects in different places were getting different results, which is not surprising. What was needed was to get everyone together and to get a common data set.”
And this is a very strong argument for data sharing. But as interesting as the story itself is, I find even more interesting some of the issues it identified around scientists sharing data at such a wide scale. In particular, this paragraph brought a few things back to mind:
“At first, the collaboration struck many scientists as worrisome — they would be giving up ownership of data, and anyone could use it, publish papers, maybe even misinterpret it and publish information that was wrong. “
This (in one grammatical construction or another) is the argument floating around. We scientists may all see the advantage of data sharing, but are we willing to ‘give it up’?
If you ask scientists, many of us would probably say that we do science for a specific purpose: to help find a cure for a disease, to solve some environmental problem, to contribute to human culture through the creation of knowledge. Data sharing makes us put our money where our mouths are.
But is it that easy?
I would argue it isn’t. Even when we may be willing to put our data out there, to have others use it and interpret it, there is a reality we still need to face: our hiring and promotion committees. And these look at our scientific output as ‘papers published’.
There has been a lot of chatter on what the value of papers is: should impact factor matter? Should we be looking at article-level metrics? But either way, we are still looking at papers. Should we stop valuing papers and start valuing datasets?
I brought this issue up at the Data Matters MoRST meeting I attended. The current PBRF system is incompatible with data sharing. It still measures ‘output’ as individual papers. And whether I like it or not, my University’s funding (and my ability to survive in the system) depends on me satisfying these criteria. So to promote data sharing, this too needs to change.
I wonder what would happen next time I apply for promotion if instead of listing my publications on my CV I were to list my ‘datasets’: This is the data I have generated (and made public), and this is how it has been used by me and by others. Wouldn’t that be a real measure of the impact of my work? Does it really matter ‘who’ used the data to advance knowledge? Or in other words, has the time come for ‘Data Level Metrics’?
Perhaps if we gave data the same standing as papers when it comes to evaluating performance, people would quickly learn that by putting data out there the impact of our work can be easily increased (and measured). And we might be quicker to put it out.
On other news:
The Open Science Summit’s opening session is now online thanks to FORA.tv. It was a great opening session to be at, and I am glad I managed to make it there. Unfortunately I wasn’t able to stay for the rest of the meeting.
At the same time that this was happening, the government of New Zealand released its Open Access and Licensing Framework (NZGOAL). You can read about it on the Open Knowledge Foundation website, which has links to all of the documents. This is indeed good news for data sharing in New Zealand. And when I returned from my trip I found an email from Creative Commons Aotearoa New Zealand informing me that I had been selected as a member of the CCANZ Advisory Panel.
I want to thank CCANZ for allowing me to be part of this panel, it is indeed an honour and I look forward to the good things that promise to come out of it.
At SciFoo I got a chance to give a lightning talk. These are 5-minute talks, similar to Ignite talks and PechaKuchas. It is fair to say that it was nerve-racking! And 5 minutes seems like an eternity when you are that nervous!
But I am rehashing it here as an extended version of the 5 minutes, 2 or 3 slides at a time, over several posts.
[Slides 1 and 2]
When I started university in Argentina, there wasn’t a neuroscience programme. (I had actually become interested in science after reading Microbe Hunters as a kid, so I should really have been a microbiologist.) But neuroscience was taught in pretty much every course, and I became fascinated with it; by the middle of my six-year undergraduate degree I had joined a research group studying brain development.
At about that time, I took a course in biophysics. It was one of the best courses I took, and I loved it. It was common back then for all courses to dedicate several hours a week to reading and discussing (and dissecting!) the primary literature. But biophysics did something different. We didn’t just read *the* papers, but also all of the work that led to those significant papers. And the results were discussed taking into account the historical context in which they were obtained.
This was really interesting for two reasons: First, we were not only learning a discipline but also the evolution of ideas within the discipline: the evolution of scientific thought. Second, it gave me an appreciation of the treasures that were hidden in old volumes of journals.
I think I owe to this course my love for the history of science, and my eagerness to blow the dust off old journal covers in search of scientific gems. In the process, I have come to realise that much of what we may consider new or groundbreaking results are actually answering questions that were posed long ago.
Throughout the history of science I find heaps of questions that remain unanswered, waiting for the development of technology that lets scientists take the next step. Some of these questions resurface, many times without reference to the original ideas; others remain buried, waiting to be rediscovered by someone willing to browse through old archives and reexamine them with modern tools.
I decided to talk about this because this is something I love about science: that serendipitous marriage between scientific ideas and technological development, which I also think aligns with the spirit of SciFoo.
I was lucky enough to be invited to SciFoo this year, which proved to be a wonderful experience. SciFoo is an unconference organised by O’Reilly media, Nature and Google. It brings together a group of sciencey people to talk about science, and I cannot describe the level of awesome that I experienced while I was there.
I went well-prepared: I had read the blogs of the attendees who blog, read their descriptions of themselves, contributed to the suggested sessions in the wiki, and showed up with a list of ‘must-meet’ people and ‘must-attend’ sessions to make sure I made the most of it.
But (and I learned that this happens after having attended two KiwiFoos) I might as well not have done any of that homework. Because, apart from a couple of exceptions, I never got a chance to talk to the people on my list. Nor did I end up going to any of the sessions I thought I would go to. Instead, I found myself being pulled towards ‘other’ people and ‘other’ sessions. And I guess that is the beauty of it all: meeting people and hearing interesting things that were not necessarily on my radar.
I started by attending two Lightning Talk sessions, moderated by Nat Torkington. Lightning talks are 5-minute presentations, which were great because they gave me the chance to hear about lots of different things from very different people (which also explains why my original list ended up being useless). I was drawn to the third lightning talk session the next day. There I heard about the relationship between scientists and music from Eva Amsen, what we can learn about people by asking them how they played as children from Linda Stone, neuroscience and law from David Eagleman, and many other mind-tickling topics.
These are some of the other sessions I attended:
RuleCamp: Basically about rules to follow to get stuff done. Carl Zimmer, one of the speakers, summarised the session on his blog, so I will send you there to read his notes (which are much better than mine!).
Brain Machine Interfaces: I seem to have a fetish with BMIs, and the work of Miguel Nicolelis in this area changed the way that I think about the brain. So I couldn’t miss this one (especially since Nicolelis was there too!). I will be writing a bit more about this at a later time, but it is totally worth reading about his research on his page. Most of all, I was seriously impressed not only by how far BMIs have come, but by how this kind of research is making us think about the brain in a very different way.
Collaborative Science: This was fun, and I mean that in a literal way. Because among other things discussed, FoldIt came up. Yes, you can contribute to science by playing games. And in the process you end up being acknowledged as an author on a Nature paper.
I went to many other interesting sessions and had amazing scattered chats with different people throughout SciFoo. It was great to see old friends and acquaintances, and make new connections. But one thing I learned at Kiwi Foo, is that as amazing as the few days of the event are, what is really more amazing is what happens ‘between’ Foos. There is a whole year ahead, and I can’t wait to see what comes out of it.
(I have to give a special thanks to Nat Torkington and Cat Allman, who I am sure had a hand in getting me there, and also to Eva Amsen for wonderful personal swag from The Node.)