A few days ago I got an email from a colleague of mine pointing me to a video about birds of paradise. I am happy I went and looked at it, because it is quite amazing. There is no question why this group of birds stands apart from others: they are not only beautiful to watch, but their behaviour, too, is quite amazing. Watch:
There are other birds that I find absolutely amazing. The lyrebird, for example, incorporates into its song sounds that it hears as it goes about life. There are two types of song-learning birds (songbirds). Some learn to imitate a song from an adult tutor as they grow up, and pretty much sing that song as adults. Others continue to incorporate new elements into their song as adults. The lyrebird falls into this last group. But what I find amazing about the lyrebird is not that it incorporates new song elements, but that some of those sounds are not “natural” sounds. Watch:
Another amazing bird is the New Caledonian crow. A while back Gavin Hunt (now at the University of Auckland) discovered that these birds are able to manufacture tools in the wild. They modify leaves and twigs from local plants to make different types of tools, which they then use to get food. This finding spurred a large body of work on bird intelligence. Watch:
And if you are interested in where these wonderful animals all came from, there is a fantastic blog by Ed Yong over at National Geographic. Read:
Imagine you drive into a motel in Gatlinburg, TN, and see, behind an open room door, two guys setting up cameras pointed at the beds while two young women peek from the parking lot. Well, if it was the mid-’90s, it might have been Drs Moiseff and Copeland setting up the equipment before venturing into Elkmont in the Smoky Mountains to study the local fireflies. (And one of the two women would have been me.)
Andy Moiseff and Jon Copeland started studying the population of fireflies in the Smoky Mountains National Park after learning from Lynn Faust, who had grown up in the area, that they produced their flashes in a synchronous pattern.
In the species they are studying (Photinus carolinus) the males produce a series of bursts of rhythmic flashes that are followed by a ‘quiet period’. But what is particularly interesting about this species is that nearby males do this in synchrony with each other. If you stand in the dark forest, what you see is groups of lightning bugs beating their lights together, pumping light into the dark night in one of nature’s most beautiful displays.
Females flash in a slightly different manner and, as far as I know, they don’t flash synchronously with either other females or the males. One interesting thing in Elkmont is that there are several species of fireflies, and you can pretty much tell them apart by their flashing patterns. But as useful as this is for us biologists (since it avoids having to go through extensive testing for species determination), the question remained whether the flashing patterns played a biological role.
And this is what Moiseff and Copeland addressed in their latest study, published in Science. They put females in a room where LEDs controlled by a computer simulated individual male fireflies. The LEDs were made to flash with different degrees of synchronisation, and they looked at the responses of the females. They found that while the females responded to synchronous flashes of the LEDs, they really didn’t seem to respond when the flashes were not synchronous. What’s more, they responded well to many LEDs but not much to a single one. What this means is that if you are a male Photinus carolinus, you’d better play nice with your mates if you want to get the girl.
What *I* want to know is how this behaviour is wired in the brain. At first glance, this seems like a rather complex behaviour, but in essence all it seems to require is a series of if/then computations, which should not be too hard to build (at least not from an ‘electronic circuit’ point of view). But Bjoern Brembs reminded me of a basic concept in neuroscience: brains are evolved circuits, not engineered circuits. So, Andy and Jon, how *do* they do it?
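For a taste of how simple if/then-style local rules can produce group synchrony, here is a minimal sketch of the classic pulse-coupled oscillator model of Mirollo and Strogatz (1990), which is often used as a toy model of firefly synchrony. To be clear, the parameter values and the concave ‘excitation’ function below are illustrative modelling choices, not anything measured from Photinus carolinus or from the Moiseff and Copeland study:

```python
import math
import random

def simulate_fireflies(n=10, b=3.0, eps=0.1, rounds=300, seed=42):
    """Pulse-coupled oscillators in the style of Mirollo & Strogatz (1990).

    Each firefly carries a phase in [0, 1); its internal 'excitation' is a
    concave function f of that phase. When a firefly reaches phase 1 it
    flashes and resets to 0, and every flash bumps the other fireflies'
    excitation up by eps. The concavity of f is what makes the nudges
    contract phase differences until the whole group flashes in unison.
    """
    rng = random.Random(seed)
    expb = math.exp(b)
    f = lambda phi: math.log(1.0 + (expb - 1.0) * phi) / b  # phase -> excitation
    g = lambda x: (math.exp(b * x) - 1.0) / (expb - 1.0)    # excitation -> phase
    phases = [rng.random() for _ in range(n)]

    for _ in range(rounds):
        # Jump ahead to the moment the most advanced firefly reaches phase 1.
        dt = 1.0 - max(phases)
        phases = [p + dt for p in phases]
        fired = {i for i, p in enumerate(phases) if p >= 1.0 - 1e-9}
        pulses = list(fired)
        # Each flash delivers one eps-sized bump to everyone else; a bump
        # can push a firefly over threshold, whose flash then cascades.
        while pulses:
            pulses.pop()
            for i in range(n):
                if i in fired:
                    continue
                x = f(phases[i]) + eps
                if x >= 1.0:
                    fired.add(i)
                    pulses.append(i)
                else:
                    phases[i] = g(x)
        for i in fired:          # everyone who flashed restarts together
            phases[i] = 0.0
    return phases

phases = simulate_fireflies()
print("phase spread after 300 cycles:", max(phases) - min(phases))
```

Each firefly follows only a local rule (flash when your internal clock hits the threshold, nudge your clock forward when you see a flash), and yet, starting from random phases, the group converges to flashing in unison, which is the engineering intuition behind the ‘not too hard to build’ remark above.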
Original article: Moiseff, A., & Copeland, J. (2010). Firefly synchrony: A behavioral strategy to minimize visual clutter. Science, 329(5988), 181. DOI: 10.1126/science.1190421
More Sunday sharing thanks to the people on the internet and Open Access …
ASCILITE is over, but it left me with a lot of work to do because of the great sessions at the conference. You can get a lot of the information covered there thanks to Grainne Conole on Cloudworks. I also posted some interesting resources on my FriendFeed page.
We’ve come a long way baby
The Wikipedia entry for Brain-Computer Interfaces describes a prototype built in 1978. It successfully allowed a man who had been blinded as an adult to perceive the sensation of light. But its operation required being “hooked up to a two-ton mainframe”. Well, things have changed, and a recent article by Frank H. Guenther, Jonathan S. Brumberg, E. Joseph Wright, Alfonso Nieto-Castanon, Jason A. Tourville, Mikhail Panko, Robert Law, Steven A. Siebert, Jess L. Bartels, Dinal S. Andreasen, Princewill Ehirim, Hui Mao, and Philip R. Kennedy published in PLoS One describes a wireless brain-machine interface that could be used to produce synthetic speech for individuals with speech impairments. You can read the article here, and Brandon Keim has a great take on it on Wired Science.
Great stories online
- Scientific American explains why egg-laying mammals exist
- National Geographic has a list of the top 10 videos of 2009 (my favourite is the Whale Fossil Found in Kitchen Counter)
- Daniel Hawes from Ingenious Monkey talks about parasites in the brain, and
- Ed Yong from “Not Exactly Rocket Science” has a great post on how we can use memory recall to reshape fearful memories.
My favourite tweet has to be one by @Mark_Changizi; read Ed Yong’s post and you will know why it made me laugh so much!
Oh, and congratulations to NeuroDojo for being named “blog of note”.
Tweeting my own horn
I was contacted by Jose Barbosa from 95bFM’s Sunday Breakfast, and we got to chatting about brains. You can find the recording of the radio segment here. And thanks to Jose, who found the link to Jeremy Corfield’s thesis on the kiwi brain.
Barn owls are the subject of many studies on auditory neuroscience because of their exquisite ability to localize sound. The auditory system is interesting from a neuronal computation point of view because the inner ear, where sound is detected, relays no information to the brain about the location of the sound source in space. It is then up to the neurons in the brain to extract other information transmitted from the ear to build an auditory space map that can be used for sound localization. The basic model is that the auditory system does this by comparing the differences in intensity of the sound at the two ears (interaural level differences) and the differences in the time of arrival of the sound at each ear (interaural time differences), and that this information is sufficient for sound localization.
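To make the interaural time difference cue concrete, here is a back-of-the-envelope sketch. For a distant source at azimuth θ, the sound travels an extra path of roughly d·sin(θ) to reach the far ear, where d is the separation between the ears. The ear separation used below is an illustrative round number, not a measured barn owl value:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def itd_seconds(azimuth_deg, ear_separation_m=0.05):
    """Extra travel time to the far ear for a distant source.

    Uses the simplest two-receiver model: the path-length difference is
    ear_separation * sin(azimuth), so the ITD is that distance divided
    by the speed of sound. Real heads (and the owl's facial ruff) shape
    the sound further, which is what the HRTF captures.
    """
    extra_path = ear_separation_m * math.sin(math.radians(azimuth_deg))
    return extra_path / SPEED_OF_SOUND

for az in (0, 30, 90):
    print(f"azimuth {az:>2} deg -> ITD {itd_seconds(az) * 1e6:6.1f} microseconds")
```

Note how small these numbers are: even for a source directly to one side, the time difference is on the order of a hundred microseconds, which is why the neural circuitry that compares arrival times at the two ears is such a remarkable piece of computation.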
But the sound that reaches the tympanum is not an exact replica of the sound emanating from the source, to a great extent due to the way that sound interacts with and is modified by the animal’s own head structures (for example, the external ear or pinna). These changes in the structure of the sound are described in the Head Related Transfer Function (HRTF). Because we are not identical clones of each other, each one of us has a slightly different head related transfer function.
In a recent study published in PLoS One, Laura Hausmann, Mark von Campenhausen, Frank Endler, Martin Singheiser, and Hermann Wagner examined whether the contribution of the facial ruff to the barn owl’s HRTF affected the owl’s ability to sound localize. They recorded the HRTF of each of the owls in the study, as well as the HRTF of a barn owl in which the facial ruff had been cut off. They used these functions to build a virtual auditory stimulus that simulated the presence or absence of the ruff, and assessed the ability of the barn owls to sound localize. (They can do this behaviourally, because barn owls turn their heads to the source of the stimulus with great precision.)
Their results show that the facial ruff contributes to the cues used for sound localization by increasing the effective interaural time difference range that barn owls use to localize sound in the horizontal plane, and by contributing to the interaural level differences that barn owls use to localize sound in elevation. But they also showed that there were no differences in localization ability when barn owls used interaural time differences, whether the virtual acoustic stimulus was built with their own HRTF or that of another owl. This was not true when the owls were using interaural level differences: the owls did not localize equally well with the different HRTFs. Removal of the ruff had, as expected, several effects on sound localization, one being that the owls lost the ability to discriminate between sounds coming from the front and from the back.
Barn owls learn to associate interaural time and level differences with the location of the sound source during their first two months of life, when their head is growing and the facial ruff develops. They do this using instructive signals derived from the visual system, by which they attribute specific combinations of interaural time and level differences to particular sound source locations, and that leads to the development of a ‘space map’ that is custom-built for each particular owl.
- Disclosure: Hermann Wagner is a former collaborator of mine.
Hausmann, L., von Campenhausen, M., Endler, F., Singheiser, M., & Wagner, H. (2009). Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal. PLoS ONE, 4(11). DOI: 10.1371/journal.pone.0007721
I just learned through “The Complete Guide to Google Wave” (HT @BoraZ) that Google Wave takes its name from the way that characters in the TV series Firefly/Serenity communicated with each other through ‘waves’. Which reminded me of the fireflies in Elkmont in the Smoky Mountains National Park.
When I was a post-doc in the US, I would get in my car once a year to become a bit of a field hand, but mostly a nuisance, in a research project led by Andy Moiseff and Jon Copeland in the Smoky Mountains in Tennessee. (You can find one of the first news stories about the project here.)
Jon had been studying synchrony in the flashing of fireflies in Malaysia, because it was thought that synchrony occurred only there. But a local from the Smokies (Lynn Faust) alerted him to a similar behaviour up in the park (and much closer to Jon’s hometown in Georgia).
Fireflies communicate using their light flashes, and in the Smokies they flash in synchrony.
A single male will produce a series of flashes in a sequence. When a male starts flashing, other males around it will start their own flashing sequence, in synchrony with the leader. The males around the second set of males then start doing the same, and so on, and what you see (if you are paying enough attention) is a sort of Mexican wave of fireflies flashing in synchrony. It is one of the most amazing light shows that nature has put together, and if you are ever in the area in the summer, I recommend visiting the park visitor centre and asking for a good viewing spot.
When I hear people use the word Firefly I immediately think of the Smoky Mountains (which usually brings me a couple of notches down the geek scale, since they mostly mean the TV show). I am sure I will be reminded of those summers in the Smokies every time I go into Google Wave.