Building Blogs of Science

Failure to replicate, spoiled milk and unspilled beans

Posted in Science by kubke on September 6, 2013

Try entering “failure to replicate” in a Google search (or better still, let me do that for you) and you will find no shortage of hits. You can even find a reproducibility initiative. Nature has a whole set of articles on the topic. If you live in New Zealand you have probably not escaped the news coverage about the botulism bacteria that never was, and you might be among those puzzled about how a lab test could be so “wrong”.

Yet, for scientists working in labs, this issue is commonplace.

Most scientists will acknowledge that reproducing someone else’s published results isn’t always easy. Most will also acknowledge that they would receive little recognition for replicating someone else’s results. They may even add that the barriers to publishing negative results are too high. The bottom line is that there is little incentive to encourage replication, even more so in a narrowing and highly competitive funding ecosystem.

However, some kind of replication happens almost daily in our labs as we adopt techniques described by others and try to adapt them to our own studies. A lot of time and money can be wasted when the original article does not provide enough detail on the materials and methods. Sometimes authors (consciously or unconsciously) fail to articulate domain-specific tacit knowledge about their procedures, something which may not be easy to resolve. But in other cases, articles simply lack enough detail about which specific reagents were used in an experiment, like a catalog number, and this is something we may be able to fix more easily.

Making the experiment’s reagents explicit should be quite straightforward, but apparently it is not, at least according to a new study published in PeerJ*. Vasilevsky and her colleagues surveyed articles in a number of journals and from different disciplines and recorded how well the raw materials used in the experiments were described. In other words, could anyone, relying solely on the information provided in the article, be sure they would be buying the exact same chemical?

Simple enough? Yeah, right.

What their data exposed was a rather sad state of affairs. Based on their sample they concluded that the reporting of “unique identifiers” for laboratory materials is rather poor: they could only unambiguously identify 56% of the resources. In other words, only a little over half of the reported materials come with enough information for proper replication. Look:

Replicability 1

But not all research papers are created equal. A breakdown by research discipline and by type of resource shows that some areas or types of reagent do better than others. Papers in immunology, for example, tend to report better than papers in neuroscience.

So, could journals in immunology be of better quality or have higher standards than journals in neuroscience?

The authors probably knew we would ask that, and they beat us to the punch.

(Note: apparently, the impact factor (IF) does not matter when it comes to the quality of reporting on materials**.)

What I found particularly interesting was that whether a journal had good guidelines on reporting didn’t seem to make much of a difference. It appears the problem is more deeply rooted, and it is seeping through the submission, peer-review and editorial processes. How come neither authors, nor reviewers, nor editors are making sure that the reporting guidelines are followed? (Which, in my opinion, defeats the purpose of having them there in the first place!)

Replicability 2

I am not sure I myself perform much above average (I must confess I am too scared to look!). As authors we may be somewhat blind to how well (or not) we articulate our findings because we are too embedded in the work, missing things that may be obvious to others. Peer reviewers and editors tend to pick up on our blind spots much better than we do. Yet apparently a lot still does not get picked up. Peer reviewers don’t seem to be catching these reporting issues, perhaps because they make assumptions based on what is standard in their particular field of work. Editors may not detect what is missing because they rely on the peer-review process to identify reporting shortcomings, especially when the work is outside their field of expertise. But while I can see how not getting it right can happen, I also see the need to get it right.

While I think all journals should have clear guidelines for reporting materials (the authors developed a set of guidelines that can be found here), Vasilevsky and her colleagues showed that having them in place was not necessarily enough. Checklists similar to those put out by Nature [pdf] to help authors, reviewers and editors might help to minimise the problem.

I would, of course, love to see this study replicated. In the meantime I might have a go at playing with the data.
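If I do get around to playing with the data, a quick per-discipline summary is the sort of thing I have in mind. This is only a minimal sketch: the file name and the column names (field, resource_type, identifiable) are my own assumptions, not the actual layout of the Vasilevsky et al. dataset.

```python
import pandas as pd

# Assumed layout: one row per reported resource, with the discipline it came from,
# the type of resource (antibody, organism, cell line, ...) and whether it could be
# uniquely identified from the information given in the article.
df = pd.read_csv("resource_identifiability.csv")

# Percentage of uniquely identifiable resources per discipline and resource type.
summary = (
    df.groupby(["field", "resource_type"])["identifiable"]
      .mean()          # fraction identifiable (assumes a 0/1 or boolean column)
      .mul(100)
      .round(1)
      .unstack("resource_type")
)
print(summary)
```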

*Disclosure: I am an academic editor, author and reviewer for PeerJ and obtained early access to this article.

** no, I will not go down this rabbit hole

Vasilevsky et al. (2013), On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1:e148; DOI 10.7717/peerj.148

Getting up to speed with sound localisation

Posted in Science by kubke on February 25, 2010

Funny how we are really good, for the most part, at knowing where sounds are coming from. And it is funny because the ear provides the brain with no direct information about the actual location in space of different sound sources. Instead, the brain makes use of what happens to the sound as it reaches both ears by virtue of, well, being a sound wave, and of the fact that we have two ears separated in space.

Imagine a sound coming from the front: the sound will arrive at the two ears at the same time. But if it is coming from the right it will arrive at the right ear first, and at the left ear a wee bit later. This ‘time difference’ will depend on the speed of sound in air and on how far apart our ears are. What’s more, as the sound source moves from the far right to the front of the head those time differences become smaller and smaller, until they are zero at the front. If one could put one microphone in each ear, one could reliably predict where the sound comes from by measuring that time difference. And this is exactly what a group of neurons in the brain does.
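To put some numbers on that intuition, here is a minimal sketch of the interaural time difference for a simple geometric model. The head width, the speed of sound and the d·sin(azimuth) approximation are illustrative assumptions on my part, not values from the paper.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
EAR_SEPARATION = 0.18    # m, a rough figure for the distance between human ears (assumption)

def itd_microseconds(azimuth_deg: float) -> float:
    """Interaural time difference for a distant source at the given azimuth.

    0 degrees = straight ahead, 90 degrees = directly off to one side.
    Uses the simple extra-path approximation d * sin(azimuth)."""
    extra_path = EAR_SEPARATION * math.sin(math.radians(azimuth_deg))
    return extra_path / SPEED_OF_SOUND * 1e6

for azimuth in (0, 15, 45, 90):
    print(f"{azimuth:3d} deg -> ITD of about {itd_microseconds(azimuth):5.0f} microseconds")
```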

Easy enough? Not quite.

The way the brain works is that things on the left side of our body are mapped on the right side of our brain, and things on the right side of our body are mapped on the left side of our brain. So the ‘time comparison’ neurons on the right side of the brain deal mainly with sound coming from the left (and the neurons dealing with sound from the right are on the left side of the brain). But to do the time comparison these neurons need to get information from both ears, not just from one side!

Figure 1 (by Kubke CC-BY)

This raises a conundrum: the neural path that the information from the left ear needs to travel to get to the same (left) side of the brain will inevitably be shorter than the path travelled by information coming from the other side of the head. So how does the brain overcome this mismatch?

And here is where having paid attention at school during the “two trains travelling at the same speed leave two different stations blah blah blah” math problem finally pays off. When a sound comes from the front, the information arrives at each of the ears at the same time. The information also arrives at the first station in the brain (nucleus magnocellularis) at the same time. But the time comparison neurons need information from both ears, and the path that the information needs to travel from the right side to the time comparison neurons in nucleus laminaris on the left side (red arrow in figure 1) is longer than the path from the same side (blue arrow in figure 1).

However, when you look into an actual brain, things are not so straight-forward (sorry for the pun). The axons from nucleus magnocellularis that go to the time comparison neurons on the same side of the brain take a rather roundabout route (as in figure 2). And for a long time it was assumed that this roundabout route was enough to make the signals from the left and right sides arrive at about the same time.

Figure 2 (by Kubke CC-BY)

Easy enough? Not quite.

When Seidl, Rubel and Harris actually measured the length of the axons (red and blue), they found that there was no way the information could arrive at about the same time: the system could not work in the biological range. But this problem could be overcome (back to the old school problem) by having the two trains (action potentials, rather) travel at different speeds. And this is something that neurons can do relatively easily, in two ways. One is to change the girth, or diameter, of the axon. The other is to regulate how the axon is myelinated. Myelin forms a discontinuous insulating wrap around the axon, which is interrupted at what are called the nodes of Ranvier. The closer together the nodes of Ranvier are, the slower the action potential travels down the axon.

What the group found was that both axon diameter and myelination pattern were different in the direct (blue) and crossed (red) axons. When they then calculated how long it would take for the action potentials from both sides to reach the time comparison neurons in nucleus laminaris, adjusting the speed for the differences between the two axons, they found that yup, that pretty much solved the problem.
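The back-of-the-envelope version of that calculation looks something like the sketch below. All of the lengths and conduction velocities are made-up illustrative numbers, not the measurements reported by Seidl, Rubel and Harris; the point is only that a longer crossed axon needs a proportionally faster conduction velocity for the two inputs to arrive together.

```python
def conduction_delay_ms(length_mm: float, velocity_m_per_s: float) -> float:
    """Time for an action potential to travel along an axon, in milliseconds."""
    return (length_mm / 1000.0) / velocity_m_per_s * 1000.0

direct_length_mm, crossed_length_mm = 2.0, 5.0   # assumed path lengths
equal_velocity = 3.0                             # m/s, assumed for both axons

mismatch = (conduction_delay_ms(crossed_length_mm, equal_velocity)
            - conduction_delay_ms(direct_length_mm, equal_velocity))
print(f"Equal speeds: the crossed input lags by {mismatch:.2f} ms")
# ...which dwarfs interaural time differences of a few hundred microseconds or less.

# Speed the crossed axon would need for both inputs to arrive at the same time:
required_velocity = equal_velocity * crossed_length_mm / direct_length_mm
print(f"The crossed axon would need to conduct at about {required_velocity:.1f} m/s")
```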

Easy enough? Quite.

Like the authors say:

The regulation of these axonal parameters within individual axons seems quite remarkable from a cell biological point of view, but it is not unprecedented.

But remarkable indeed, considering that this regulation needs to adjust to a very high degree of temporal precision. I have always used the train analogy when I lecture about sound localisation, and always assumed equal speed on both sides. Seidl, Rubel and Harris’ work means I will have to redo my slides to incorporate differences in speed. Hope my students don’t end up hating me!


Seidl, A., Rubel, E., & Harris, D. (2010). Mechanisms for Adjusting Interaural Time Differences to Achieve Binaural Coincidence Detection. Journal of Neuroscience, 30 (1), 70-80. DOI: 10.1523/JNEUROSCI.3464-09.2010

The ever-changing world of dendritic spines

Posted in Science by kubke on December 18, 2009

Santiago Ramón y Cajal originally described spines in the dendrites of neurons in the cerebellum back in the late 19th century, but it wasn’t until the mid-1950s, with the development of the electron microscope, that these structures were shown to be synaptic structures. Although it has been known that the number of dendritic spines changes during development and in association with learning, most studies have inferred the changes by looking at static time points rather than monitoring individual spines in the same animal over time, partly due to the difficulty of tracking a single structure of about 0.1 micrometer in size (0.0001 mm). But new advances in imaging technology have allowed researchers to ‘follow’ individual spines over time, both in vitro and in the whole animal.

Purkinje Cell by S Ramon y Cajal

Dendritic spines are no longer thought of as the static structures of Ramón y Cajal’s (or even my) generation, but rather dynamic structures that can be added to and eliminated from individual dendrites. And because each spine is associated with a synaptic input, and because their structure and dynamic turnover are known to have a profound effect on neuronal signaling, one cannot but be tempted to propose that they are associated with specific aspects of memory formation.

Two developments have made it possible to monitor individual dendritic spines at different time points in the same animal: the ability to make transgenic mice that express fluorescent molecules, which make the spines visible under fluorescent illumination, and the development of in vivo transcranial two-photon imaging, which allows researchers to go back to an individual dendrite and monitor how its dendritic spines change over time. Two papers published in Nature make use of these techniques to look at how dendritic spines change in the motor cortex of mice that have learned a motor task.

In one, Guang Yang, Feng Pan and Wen-Biao Gan looked at how spines changed when either young or adult mice were trained to learn specific motor strategies. They observed that spines underwent significant turnover, but that learning the motor task increased the overall number of new spines and that a small proportion of them could persist for long periods of time. They calculated that although most of the newly formed spines only remained for about a day and a half, a smaller fraction of them could still persist for either a couple of months or a few years. Based on their data they suggest that about 0.04% of the newly formed spines could contribute to lifelong memory.

Dendritic spine by Tmhoogland

Another study by Tonghui Xu, Xinzhu Yu, Andrew J. Perlik, Willie F. Tobin, Jonathan A. Zweig, Kelly Tennant, Theresa Jones and Yi Zuo did a similar experiment, but using a different motor training task. Like the Yang group, they also saw that training leads to both the formation and elimination of spines. Although newly formed spines are initially unstable, a few of them can become stabilized and persist longer term. Further, training made newly formed spines more stable and preexisting spines less stable. The authors interpret their results as an indication that during learning there is indeed a ‘rewiring’ of the network and not just addition of new synapses.

The two papers were reviewed by Noam E. Ziv & Ehud Ahissar in the News and Views section. Here they raise the issue that, if such a small number of spines is to account for the formation of stable memories, then what are the consequences for the neuronal network of the loss of a somewhat larger number of spines?

For someone like me, who more than once as an undergraduate used a microscope fitted with a concave mirror to illuminate the specimen with sunlight, the ability to monitor individual synaptic structures over time in a living organism can only be described as awesome. But, as pointed out by Ziv and Ahissar,

“[…] although it remains to be shown conclusively that these forms of spine remodeling are essential components of long-term learning and not merely distant echoes of other, yet to be discovered processes, these exciting studies make a convincing case for a structural basis to skill learning and reopen the field for new theories of memory formation.”

References:
Yang, G., Pan, F., & Gan, W. (2009). Stably maintained dendritic spines are associated with lifelong memories. Nature, 462 (7275), 920-924. DOI: 10.1038/nature08577
Xu, T., Yu, X., Perlik, A., Tobin, W., Zweig, J., Tennant, K., Jones, T., & Zuo, Y. (2009). Rapid formation and selective stabilization of synapses for enduring motor memories. Nature, 462 (7275), 915-919. DOI: 10.1038/nature08389
Ziv, N., & Ahissar, E. (2009). Neuroscience: New tricks and old spines. Nature, 462 (7275), 859-861. DOI: 10.1038/462859a

Getting the timing right for song control

Posted in Science by kubke on December 11, 2009

Songbirds have evolved special areas in the brain that are used for song learning and song production. Two types of output connections from a cortical area known as HVC (a proper name) form two ‘separate’ pathways. Some HVC neurons connect directly with neurons in a brain area called RA (the robust nucleus of the arcopallium), which in turn connects with the motoneurons that control the muscles of the vocal organ (the syrinx). Another set of HVC neurons connects through what is called the anterior forebrain pathway, a collection of cortical, thalamic and basal ganglia nuclei that are important for birds to learn their song. The two pathways talk to each other through a nucleus called LMAN, which sends a direct input to RA.

Vocal circuit, by Kubke

The anterior forebrain pathway sends an error signal through the connections from LMAN to RA to ultimately control the motoneurons in nXIIts and produce the desired song structure. What is puzzling about the circuit is how the precise timing needed for this to operate efficiently is achieved. Because it takes time for action potentials to travel down axons, and because it takes time for information to cross synapses, the roundabout anterior forebrain pathway (HVC-to-X-to-DLM-to-LMAN-to-RA) should be much slower than the direct route from HVC to RA. And this is precisely what Arthur Leblois, Agnes Bodor, Abigail Person and David Perkel examined.
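To make the puzzle concrete, here is a rough sketch that just adds up assumed conduction and synaptic delays along the two routes. Every number is an illustrative assumption of mine, not a latency measured by Leblois and colleagues; the sketch only shows why, other things being equal, the roundabout pathway should lag well behind the direct one.

```python
SYNAPTIC_DELAY_MS = 0.5   # assumed delay per synaptic connection

def pathway_latency_ms(conduction_delays_ms: list[float]) -> float:
    """Total latency of a pathway: one synaptic delay per connection plus the
    conduction delay along each axon segment (all values assumed)."""
    return sum(conduction_delays_ms) + SYNAPTIC_DELAY_MS * len(conduction_delays_ms)

direct = pathway_latency_ms([2.0])                      # HVC -> RA
roundabout = pathway_latency_ms([2.0, 1.0, 1.0, 1.0])   # HVC -> Area X -> DLM -> LMAN -> RA

print(f"Direct HVC->RA:             ~{direct:.1f} ms")
print(f"Anterior forebrain pathway: ~{roundabout:.1f} ms")
```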

To determine this, they electrically stimulated HVC and recorded from area X, DLM and LMAN, and were able to explore the mechanisms by which information travels around the anterior forebrain pathway as well as how long it takes to get from one point to another (latency).

How is transmission routed along the anterior forebrain pathway?

What they found is that low-intensity stimulation of HVC produces excitation of area X neurons, but that higher-intensity stimulation also produces a rapid inhibitory input from local area X circuits. One of the effects of this early inhibition is a lengthening of the time interval between consecutive action potentials in the neurons of area X that project to DLM (the pallidal neurons).

DLM is normally inhibited by pallidal neurons in area X. But if the time interval between action potentials in the pallidal neurons is increased, it releases the ‘veto’ signal on DLM neurons which can then fire action potentials (either in response to other excitatory inputs or as a result of ‘post inhibitory rebound’). Based on the results, DLM neurons will therefore become activated (and in turn activate LMAN) when the local inhibition in area X (in this case triggered by HVC stimulation at high intensity) lengthens the time period between action potentials in the pallidal neurons. This is consistent with the observation that responses in LMAN could only be elicited by high levels of stimulation in HVC.

Zebra Finch (male) by Peripitus (GNU documentation licence v1.2)

In this way, an input from HVC sufficient to elicit fast inhibition in area X, removes the veto signal on neurons in DLM, which are then able to discharge and excite LMAN, which can then send the appropriate signals to RA.

Does the timing work?

The short answer is yes. First, the authors showed that although the path lengths of the HVC-Area X and Area X-DLM projections are similar, the conduction times are much shorter in the latter. This, they suggest, is achieved both by an increase in the diameter of the axons projecting from Area X to DLM and by the fact that these axons are myelinated even within DLM. The population latency in DLM and LMAN following HVC stimulation is very similar, but the authors argue that perhaps the DLM neurons with the shortest latencies are the ones playing the key role.

The specialisations in axonal morphology and myelination of the pallidal neurons may be an evolutionary adaptation that contributes to a short latency pathway that can modulate fine temporal features of song production.

Citation:

Leblois, A., Bodor, A., Person, A., & Perkel, D. (2009). Millisecond Timescale Disinhibition Mediates Fast Information Transmission through an Avian Basal Ganglia Loop. Journal of Neuroscience, 29 (49), 15420-15433. DOI: 10.1523/JNEUROSCI.3060-09.2009

Hey, Calcium, show me the way!

Posted in Science by kubke on December 4, 2009

Most (if not all) questions about neuroscience can be answered with <blah blah blah> Calcium (or so it was rumoured at the Neural Systems and Behaviour Course at the MBL back in the ’90s). Humour aside, there is some truth to the statement, and Sheng Wang, Luis Polo-Parada and Lynn Landmesser examined the role of calcium changes in developing motoneurons.

Their work looked at how calcium changes may be associated with the process through which neurons in the spinal cord find their target muscles, and they did so in a well known system, one that Lynn Landmesser has dedicated most of her career to. The neurons in the spinal cord at the lumbosacral level are organized in longitudinal columns that span several vertebral segments. Neurons in each column will connect with a very specific leg muscle. This means that neurons at different spinal levels, but innervating the same muscle, will have their axons come out through different spinal nerves. All of the axons from different nerves come together at the plexus at the base of the limb where they sort out; axons that will connect with the same muscle become clustered. This has become a wonderful system in which to study how neurons know ‘who’s who’, and make sure they just ‘stick with their own kind’, an important process that avoids incorrect innervation patterns during development.

Also during development, the motoneurons become electrically active, producing bursts of rhythmic electrical activity. The patterns of activity are characteristic of each motoneuron pool (that is, the group of motoneurons innervating an individual muscle), and changing the normal rhythm produces errors in axon guidance. Because calcium is known to be involved in many cellular responses, and because electrical activity can increase the levels of calcium inside the cell, the group looked at how calcium in the cell was changing during the bursts of electrical activity.

They found that the rhythmic electrical activity produced calcium transients in early developing motoneurons, even in some that were still migrating towards their final position in the spinal cord. All motoneurons were initially quite synchronous with respect to the calcium changes, but the duration of the calcium transients was different in different motoneuron pools. These differences in the duration of the calcium transients could contribute to the downstream signaling that leads to the identity-specific behavior of the axons in the periphery.

One interesting finding is that blocking non-alpha-7 nicotinic receptors blocked the spontaneous bursting but did not prevent calcium transients from happening under electrical stimulation. Further, although these receptors underlie the bursting activity under normal conditions, the calcium transients were able to propagate across motoneurons while the receptors were still blocked. This suggests that although these receptors may normally be involved in the production of electrical bursts, other neurotransmitter systems may be able to operate to allow the propagation of calcium transients.

As the authors suggest, the next step will be to see whether the differences in the duration of calcium transients in different motoneuron pools are sufficient to produce the phenotypic differences that provide each motoneuron with its ability to recognize its ‘own kind’ and find its way to the correct target.

Wang, S., Polo-Parada, L., & Landmesser, L. (2009). Characterization of Rhythmic Ca2+ Transients in Early Embryonic Chick Motoneurons: Ca2+ Sources and Effects of Altered Activation of Transmitter Receptors. Journal of Neuroscience, 29 (48), 15232-15244. DOI: 10.1523/JNEUROSCI.3809-09.2009

Disclaimer: Lynn Landmesser was my PhD supervisor.