Building Blogs of Science

Getting up to speed with sound localisation

Posted in Science by kubke on February 25, 2010

Funny how we are really good, for the most part, at knowing where sounds are coming from. And it is funny, since the ear provides the brain with no direct information about where in space different sound sources actually are. Instead, the brain makes use of what happens to a sound as it reaches both ears by virtue of, well, being a sound wave, and of the fact that we have two ears separated in space.

Imagine a sound coming from the front: it will arrive at the two ears at the same time. But if it is coming from the right it will arrive at the right ear first, and at the left ear a wee bit later. This ‘time difference’ will depend on the speed of sound in air and on how far apart our ears are. What is more, as the sound source moves from the far right to the front of the head, those time differences become smaller and smaller, until they are zero at the front. If one could put a microphone in each ear, one could reliably predict where the sound comes from by measuring that time difference. And this is exactly what a group of neurons in the brain does.
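To get a feel for the numbers, here is a minimal sketch of that two-microphone thought experiment, assuming a simple two-point model of the head and an ear separation of about 18 cm (both assumptions are mine, not from the paper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_us(azimuth_deg, ear_separation_m=0.18):
    """Interaural time difference, in microseconds, for a bare
    two-point model of the head: the extra path to the far ear is
    ear separation times sin(azimuth). Real heads diffract sound,
    so measured ITDs are somewhat larger."""
    extra_path_m = ear_separation_m * math.sin(math.radians(azimuth_deg))
    return extra_path_m / SPEED_OF_SOUND * 1e6

for azimuth in (0, 30, 60, 90):
    print(f"{azimuth:3d} deg -> {itd_us(azimuth):6.1f} us")
# 0 deg (straight ahead) gives 0 us; 90 deg (far right) gives ~525 us.
```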

Easy enough? Not quite.

The way the brain works is that things on the left side of our body are mapped onto the right side of our brain, and things on the right side of our body are mapped onto the left side. So the ‘time comparison’ neurons on the right side of the brain deal mainly with sounds coming from the left (and the neurons dealing with sounds from the right are on the left side of the brain). But to do the time comparison these neurons need to get information from both ears, not just from one side!

Figure 1 (by Kubke CC-BY)

This raises a conundrum: the neural path that information from the left ear needs to travel to get to the same (left) side of the brain will inevitably be shorter than the path travelled by information coming from the other side of the head. So how does the brain overcome this mismatch?

And here is where having paid attention at school during the “two trains travelling at the same speed leave two different stations blah blah blah” math problem finally pays off. When a sound comes from the front, the information arrives at each of the ears at the same time. The information also arrives at the first station in the brain (nucleus magnocellularis) at the same time. But the time comparison neurons need information from both ears, and the path that the information needs to travel from the right side to the time comparison neurons in nucleus laminaris on the left side (red arrow in figure 1) is longer than the path from the same side (blue arrow in figure 1).

However, when you look at an actual brain, things are not so straightforward (sorry for the pun). The axons from nucleus magnocellularis that go to the time comparison neurons on the same side of the brain take a rather roundabout route (as in figure 2). And for a long time it was assumed that this roundabout route was enough to make signals from the left and right sides arrive at about the same time.

Figure 2 (by Kubke CC-BY)

Easy enough? Not quite.

When Seidl, Rubel and Harris actually measured the length of the axons (red and blue), they found that there was no way the information could arrive at about the same time: the system could not work within the biological range. But this problem could be overcome (back to the old school problem) by having the two trains (action potentials, rather) travel at different speeds. And this is something that neurons in the brain can do relatively easily in two ways. One is to change the girth, or diameter, of the axon. The other is to regulate how the axon is myelinated. Myelin forms a discontinuous insulating wrap around the axon, interrupted at points called the nodes of Ranvier. The closer together the nodes of Ranvier are, the slower the action potential travels down the axon.

What the group found was that both axon diameter and myelination pattern differed between the direct (blue) and crossed (red) axons. When they then calculated how long it would take for the action potentials from both sides to reach the time comparison neurons in nucleus laminaris, adjusting speed for the differences between the two axons, they found that, yup, that pretty much solved the problem.
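To see how different speeds can cancel out different path lengths, here is a back-of-the-envelope sketch; the lengths and conduction velocities below are made up for illustration and are not the values Seidl, Rubel and Harris measured:

```python
# Can different conduction velocities equalise arrival times over
# unequal axon lengths? All numbers are illustrative only.

DIRECT_LENGTH_MM = 2.0    # same-side (blue) axon: shorter path
CROSSED_LENGTH_MM = 6.0   # crossed (red) axon: longer path

DIRECT_VELOCITY_M_S = 5.0    # slower: thinner axon, closer nodes
CROSSED_VELOCITY_M_S = 15.0  # faster: thicker axon, longer internodes

def travel_time_us(length_mm, velocity_m_s):
    """Conduction time, in microseconds, along one axonal path."""
    return (length_mm / 1000.0) / velocity_m_s * 1e6

print(f"direct:  {travel_time_us(DIRECT_LENGTH_MM, DIRECT_VELOCITY_M_S):.0f} us")
print(f"crossed: {travel_time_us(CROSSED_LENGTH_MM, CROSSED_VELOCITY_M_S):.0f} us")
# Both print 400 us: a 3x longer path is compensated by 3x faster conduction.
```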

Easy enough? Quite.

Like the authors say:

The regulation of these axonal parameters within individual axons seems quite remarkable from a cell biological point of view, but it is not unprecedented.

But remarkable indeed, considering that this regulation needs to achieve a very high degree of temporal precision. I have always used the train analogy when I lecture about sound localisation, and always assumed equal speeds on both sides. Seidl, Rubel and Harris’ work means I will have to redo my slides to incorporate differences in speed. Hope my students don’t end up hating me!


Seidl, A., Rubel, E., & Harris, D. (2010). Mechanisms for Adjusting Interaural Time Differences to Achieve Binaural Coincidence Detection. Journal of Neuroscience, 30(1), 70-80. DOI: 10.1523/JNEUROSCI.3464-09.2010

Ed Rubel: The 21st century, a new era for hearing habilitation

Posted in Health and Medicine, Science, Science and Society by kubke on February 10, 2010

It wasn’t easy to get Professor Ed Rubel down to New Zealand due to his busy schedule, but finally, and thanks to support from the University of Auckland School of Medical Sciences, we did.

Yesterday, Prof Rubel delivered a public talk at the Med School: “The 21st Century: A New Era for Hearing Habilitation”.

Ed Rubel has had a long and distinguished career, and has contributed to many aspects of neuroscience, ranging from how brains are put together in the embryo, to how and when auditory processing is set up, to how the way neurons are connected determines how information is coded, and much more.

But among all his contributions one stands out: the discovery that the sensory hair cells in the inner ear of birds can regenerate after damage. His team found this, as he describes it, serendipitously in the mid-1980s. In mammals, once the sensory hair cells are damaged by noise exposure or chemical toxicity (such as exposure to certain antibiotics), the cells are not replaced, and as a result the hearing loss is permanent. Thus, the question is: what is different between birds and mammals that allows one, but not the other, to repair their damaged ears?

The answer has eluded us ever since. There are basically two possibilities. One is that damage induces the mechanisms of repair in birds, but not in mammals. The other is that the repair mechanisms are inhibited in mammals (and that damage removes this inhibition in birds). To get to the bottom of this, one would need to understand which cellular mechanisms are either inhibiting or inducing hair cell replacement.

Zebrafish labelled lateral line, from Owens et al PLoS Genetics 2008

And here is where the Rubel group in Seattle came up with a rather clever solution: let’s look at the zebrafish. One reason to do this is that zebrafish, like many other fishes, have hair cells along the lateral line on the surface of the body, in structures called neuromasts. And as in birds, fishes are able to replace these cells. There are two advantages to the zebrafish approach. First, because the neuromasts are on the surface of the body, it is possible to load the sensory hair cells and support cells with fluorescent molecules and monitor what happens over time with different treatments. Second, the genetics of zebrafish are well known, which makes it easier to identify and manipulate genes to see what their effects are on the ability to regenerate those cells.

Zebrafish neuromast, from Owens et al PLoS Genetics, 2008

Ed Rubel teamed up with David Raible’s group to examine genes that may be involved in differing susceptibilities to neomycin-induced hair cell death, as well as drugs that may confer protection on the hair cells. Their work was published in PLoS Genetics (doi:10.1371/journal.pgen.1000020) and you can go and read it thanks to the magic of Open Access.

Their ultimate goal is to use this approach to screen for genes and pharmaceutical compounds that protect hair cells from damage in the zebrafish and, once identified, determine whether their findings apply to mammals as well. As the authors state in their summary:

Variation in the genetic makeup between individuals plays a major role in establishing differences in susceptibility to environmental agents that damage the inner ear. […] The combination of chemical screening with traditional genetic approaches offers a new strategy for identifying drugs and drug targets to attenuate hearing and balance disorders.

You can also read more about the project in a feature by Shirley S Wang in the Wall Street Journal.

About Professor Ed Rubel: Professor Rubel is the Virginia Merrill Bloedel Professor of Hearing Science at the University of Washington. He was the founding Director of the Virginia Merrill Bloedel Hearing Research Center and is currently the Scientific Director. Professor Rubel’s work is geared towards understanding the development, plasticity, pathology and potential repair of the inner ear and auditory pathways in the brain. His work throughout the years has focused on the cellular processes underlying the development of the auditory system and how these processes are influenced by early experience.

Synapse #fail, Science #win

Posted in Science by kubke on November 25, 2009

The endbulb or calyx of Held is a very large synapse found in the auditory system. It consists of a very large ‘calyceal’ ending that literally wraps around the cell body of the postsynaptic neuron. It was first described by H Held in the late 1800s and has since been shown to be characteristically present in neuronal circuits that require very high temporal precision. (It is, by the way, my favourite synapse.)

Because the synapse is so large, it contains numerous sites of contact where neurotransmitter is released, which happens whenever an action potential reaches the synaptic terminal. Because of this, it has long been thought that these synapses never fail to produce a response (an action potential) in their target (postsynaptic) neuron; that is, that this is a fail-safe synapse: every time there is neurotransmitter release, the postsynaptic neuron produces an action potential.
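A quick back-of-the-envelope calculation shows why so many release sites made the fail-safe idea plausible; the site count and release probability below are illustrative guesses, not measured values:

```python
# Probability that a presynaptic spike triggers no release at all,
# assuming N independent release sites, each releasing a vesicle with
# probability p. Both numbers are illustrative only.
N_SITES = 500
P_RELEASE = 0.1

p_no_release = (1 - P_RELEASE) ** N_SITES
print(f"P(no release at any site) = {p_no_release:.1e}")  # ~1.3e-23
# With hundreds of sites, release itself essentially never fails outright.
# The question the studies below tackle is different: release can happen
# and still fail to trigger a postsynaptic action potential.
```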


Barn owl endbulb of Held (by Kubke)

But is this true?

Jeannette Lorteije, Silviu Rusu, Christopher Kushmerick and Gerard Borst examined precisely this question in a series of really elegant experiments in mice. They examined whether the discrepancies in the data regarding the degree of reliability at the endbulb or calyx of Held could be attributed to different methodological approaches or to differences in the interpretation of the raw data. To do this, they made a series of recordings from cells in the medial nucleus of the trapezoid body (MNTB), which is part of the mammalian auditory system. The authors conclude that there is a significant incidence of transmission failures at this level of the system.

This is in contrast with the results reported by Bernhard Englitz, Sandra Tolnai, Marei Typlt, Jürgen Jost and Rudolf Rübsamen, who recorded transmission failures at the endbulb of Held in the anteroventral cochlear nucleus (AVCN) and at the calyx of Held in the MNTB of Mongolian gerbils. They report that although failures of transmission were often found in the AVCN, this was not the case in the MNTB.

Synaptic structures analogous to the endbulb or calyx of Held are found in neuronal circuits that require high temporal precision. In the auditory system, high temporal resolution is necessary for the measurement of interaural time differences, which in mammals are used to localize low-frequency sounds in the horizontal plane. Benedikt Grothe has argued that low-frequency hearing appeared later in mammalian evolution, and that anatomical differences in a nucleus that receives inputs from the MNTB and is involved in the detection of interaural time differences (the MSO) reflect this evolution. He argues that although the MSO may have evolved to detect ITDs in low-frequency hearing mammals (such as gerbils), its function may be different in mammals that hear at higher frequencies. One therefore wonders whether the differences in the data between the two studies may be related to adaptations associated with different temporal processing requirements in mammals with different hearing ranges.

What did Lorteije and collaborators do?

In order to decide whether there are times when synaptic release fails to elicit an action potential in the target cell, one needs to simultaneously monitor the activity at the synapse and at the postsynaptic neuron. There are traditionally two ways of doing this. One is to record, extracellularly, the currents near the synapse that are produced by the electrical activity of the synapse and the cell; the endbulbs of Held are large enough to produce currents that can be detected this way. The other is to record the activity simultaneously from the cell and the synaptic terminal, which is usually done in an ‘in vitro’ preparation.
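As a toy illustration of the logic (not the authors’ analysis code), one can classify each presynaptic event by whether a postsynaptic spike follows it within a short window; the event times and the window below are hypothetical:

```python
# Hypothetical detected event times, in milliseconds.
pre_spikes = [10.0, 25.3, 40.1, 55.8, 71.2]  # presynaptic (terminal) events
post_spikes = [10.4, 40.6, 56.2]             # postsynaptic action potentials

WINDOW_MS = 1.0  # a postsynaptic spike within 1 ms counts as 'transmitted'

def failure_rate(pre, post, window=WINDOW_MS):
    """Fraction of presynaptic events not followed by a postsynaptic
    spike within the coincidence window."""
    failures = sum(
        1 for t in pre if not any(0 <= p - t <= window for p in post)
    )
    return failures / len(pre)

print(f"failure rate: {failure_rate(pre_spikes, post_spikes):.0%}")  # 40%
```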

Lorteije and colleagues produced a set of data that is simply amazing, and their findings explain many of the discrepancies that can be found in the literature. They answered some very straightforward questions:

  1. Are the extracellular recordings done in vivo representative of what is actually going on at a single endbulb-neuron contact? (The answer is yes.)
  2. Is there synaptic release that fails to produce an action potential in the postsynaptic neuron? (The answer is also yes.)
  3. Is the short-term synaptic depression seen in vitro also seen in the whole animal (in vivo)? (Short-term depression is a reduction in the effect of synaptic release on the postsynaptic cell.) (The answer is basically no.)

The authors recorded from cells in the MNTB, which receives inputs in the form of the large calyces of Held and is involved in auditory processing. They did this by recording spontaneous and auditory-evoked activity extracellularly (as most people do) as well as directly from the cells with a patch pipette in anaesthetized mice. They then repeated these experiments in vitro, this time simultaneously recording extracellularly and in whole-cell patch, which allowed them to confirm that the extracellular recordings in vivo did indeed represent the activities of the terminal and the cell, and that they could also provide information about the size of the synaptic potential. Their results include two important findings:

  1. In vivo, there is no observable short-term synaptic depression. The synaptic depression observed in vitro may be partly due to the concentration of calcium in the bathing solution, but other factors may be involved.
  2. The release of neurotransmitter at the synapse often failed to produce an action potential in the postsynaptic cell. A failure rate similar to that observed in vivo can be obtained in vitro by lowering the calcium concentration of the bathing solution.

The authors summarize their findings by saying:

“Due to its low release probability and large number of release sites, its average output can be kept constant, regardless of firing frequency. Its low quantal output thus allows it to be a tonic synapse, but the price it pays is an increase in jitter and synaptic latency and occasional postsynaptic failures.”

This is a carefully designed study, and despite my concerns as to whether the results generalize to other mammals, the authors provide data that will be welcomed by many auditory neurophysiologists. Their ability to record from a patch pipette in vivo is no small feat, and the correlation between intracellular and extracellular data is extremely useful. Further, there is a cautionary tale here about the way data obtained in vitro can be interpreted.

And if you think this post is long, try reading the paper! (There are heaps more gems in there.)

References

Lorteije, J., Rusu, S., Kushmerick, C., & Borst, J. (2009). Reliability and Precision of the Mouse Calyx of Held Synapse. Journal of Neuroscience, 29(44), 13770-13784. DOI: 10.1523/JNEUROSCI.3285-09.2009
Englitz, B., Tolnai, S., Typlt, M., Jost, J., & Rübsamen, R. (2009). Reliability of Synaptic Transmission at the Synapses of Held In Vivo under Acoustic Stimulation. PLoS ONE, 4(10). DOI: 10.1371/journal.pone.0007014
Grothe, B. (2000). The evolution of temporal processing in the medial superior olive, an auditory brainstem structure. Progress in Neurobiology, 61(6), 581-610. DOI: 10.1016/S0301-0082(99)00068-4

How is human noise affecting the environment?

Posted in Environment and Ecology, Science, Science and Society by kubke on October 29, 2009

There is no question that human activities create noise pollution, and that we humans find some of these noises rather stressful: there is nothing like a quiet afternoon at Snells Beach being interrupted by the blaring noise of motor boats on the water. But how this noise affects other animals is the issue taken up in a recent review by JR Barber, KR Crooks and KM Fristrup in Trends in Ecology and Evolution.

The presence of noise reduces our perception of sounds (including those that are biologically relevant), a phenomenon known as auditory masking. Masking reduces our ability to detect not only communication signals but also other sounds that may be important, such as an approaching predator. It is known that animals can change some aspects of their communication signals to overcome the effects of masking. For example, some birds in urban areas sing at a higher pitch than their counterparts in rural environments. Lyrebirds even incorporate some of these human-generated sounds into their song (and this is beautifully shown in an Attenborough video that can be found here).

Lyrebird Menura novaehollandiae (by Attis)

But we cannot blame only humans for loud environmental noise. Peter Narins, for example, describes the sounds at his field site in China as “so loud that you cannot hear yourself thinking”. And it was in this loud environment that he discovered that some local frogs have shifted their communication signals into the ultrasound range, probably to avoid the effects of auditory masking from the natural environment.

So the question is: is anthropogenic noise detrimental to animal species?

The answer appears not to be so simple. On the one hand, while masking may have a negative effect on vocal communication, if a predator is using those same communication signals to locate its prey, masking may lower the chances of being detected (and eaten!). The authors also argue that one of the problems in determining the impact of loud noises on the ecology of a species is that anthropogenic sounds do not come in isolation: they come with us, humans, as well as with everything we bring with us, including habitat fragmentation.

After reviewing the literature, the authors state that:

Taken individually, many of the papers cited here offer suggestive but inconclusive evidence that masking is substantially altering many ecosystems. Taken collectively, the preponderance of evidence argues for immediate action to manage noise in protected natural areas.

Something to think about as we look forward to the warm days of summer while oiling our beloved motorboats and motorcycles.

Barber, J.R., Crooks, K.R., & Fristrup, K.M. (2009). The costs of chronic noise exposure for terrestrial organisms. Trends in Ecology and Evolution. DOI: 10.1016/j.tree.2009.08.002

Deaf Awareness Week

Posted in Health and Medicine, Science and Society by kubke on September 22, 2009

Deaf Awareness Week runs from the 21st to the 27th of September this year. Deaf Awareness Week has been running since 2004, and this year’s theme is ‘Tender Ears’, focusing on the risks that can lead to hearing loss in childhood and throughout life.