Building Blogs of Science

Digital skills and scholarship for researchers 5 – getting funded

Posted in Environment and Ecology, Health and Medicine, Science, Science and Society by kubke on September 20, 2016

Almost a year had passed since that first conversation between Kaitlin Thaney, Nick Jones, Cameron McLean and myself where we asked:

‘what would Software Carpentry look like if it was delivered as a university course?’

A number of conversations and workshops kept indicating that the thirst and the need for this were there, that there was no clear solution in place, and that a solution would not be easy to produce. We knew what we wanted the house to look like, but we needed to find an architect. And, of course, money to pay them.

Enter Nat Torkington

Card sorting at Foo Camp. CC-BY-NC-SA Titine on Flickr (https://www.flickr.com/photos/titine/4363695110/sizes/o/)

Nat organises an unconference called KiwiFoo. He invites a bunch of people to a retreat north of Auckland and lets the awesome happen. In 2015 I was invited and, by pure luck, Kaitlin Thaney was invited too, as she was in Australia around that time for Software Carpentry instructor training around ResBaz Melbourne. Also invited were Nick Jones, director of NeSI, which had recently become the New Zealand institutional partner of Software Carpentry, and John Hosking, Dean of the Faculty of Science, University of Auckland.

The words that Kaitlin Thaney said at one of our meetings came back as if from a loudspeaker: You need to engage with the University Leadership. You need to think strategically.

And KiwiFoo gave us that opportunity.

Kaitlin, Nick and I brought John Hosking into the conversation, and his response was positive. We tried to exploit the convergence as much as we could over that weekend – there are not many chances to sit with this group of people in a relaxed environment, without interruptions or the need to run to another meeting. We had each other’s full attention. And exploit we did.

Back in Auckland, Nick suggested that I talk about the project to the Centre for eResearch Advisory Board. The Centre for eResearch at the University of Auckland helps researchers with exactly these kinds of issues. Next thing I knew, Cameron McLean and I were trying to distil everything we had learned through the workshops into something more concrete. I spoke to those details, and when the Board asked ‘how can we help you?’, I did not know what to say.

Dang.

Luckily, Nick Jones, as usual, came to the rescue. We had a chat, and he decided to work with me on the higher-level thinking. I was still missing the big picture that we could offer the leadership. Watching Nick’s thinking process was a humbling joy. I think I learned more from that session than I did in all the leadership programmes I have been part of. I also realised how far I was from getting to where we needed to be. What is the long-term vision? What are the gaps? Why do we need to fill them? How are we going to manage change?

At this meeting we saw that we needed to engage with CLeaR, the unit that provides professional development for staff and has a lot to offer in instructional design. We had already agreed that this training project should not focus solely on students but should have a broader scope. We produced an initial outline of what we were proposing, and invited Adam Blake from CLeaR to join the conversation and contribute to the document.

I was invited again to the eResearch Advisory Board, and this time I was better prepared. The timing was also perfect: the application window for the Vice Chancellor’s Strategic Development Fund was open, and I now knew what I needed – support to put an application through. We built a team of key project advisors, each of whom could contribute something quite specific: Adam Blake, to advise on course structure and to support the research on the course; Mark Gahegan, Director of the Centre for eResearch; Poul Nielsen, from the Auckland Bioengineering Institute; Nick Jones, from NeSI; and myself as Project Lead, with the intention of hiring Cameron McLean as project manager. We worked on the application and, backed by the eResearch Advisory Board, it went in.

Our proposal was to develop a training suite, based on Software and Data Carpentry, that could be delivered to students and staff in different formats; to support a ResBaz in Auckland in February 2016; and to run a pilot course for students about to enter the research lab in the second semester of 2016. We knew our bottleneck was time – people’s time to do the work. We asked for $150,000 in salaries.

In September we got the email: your application has been approved…

But…

The Vice Chancellor’s fund would initially give us a limited amount of money, with the rest contingent on the eResearch Advisory Board approving a needs analysis.

We accepted the offer and hired Cameron McLean as Project Manager (by now he was a trained Software Carpentry instructor, had submitted his PhD thesis, and was waiting for his viva). First order of business: a needs analysis.

Time to go to the library.

That pesky BRAIN

Posted in Health and Medicine, Science by kubke on May 4, 2013

When a President announces a scientific project as publicly as President Obama did, the world listens. The US is planning to put significant resources behind a huge effort to map the brain. A lot has been said about this BRAIN project [1], and I have been quietly reading, trying to make sense of the disparate reactions this ‘launch’ has had – and trying to escape the hype.

Sir Charles Bell (1774-1842). CC-BY-NC Wellcome Library, London

I can understand the appeal – the brain is a fascinating invention of nature. I fell in love with its mysteries as an undergraduate in Argentina, and I continue to be fascinated by every new finding. What fascinates me about the discipline is that, unlike trying to understand the kidney, for example, neuroscience consists of the brain trying to understand itself. That we can even ask the right questions, let alone design and perform the experiments to answer them, is what gets me out of bed in the morning.

Trying to understand the brain is definitely not a 21st-century thing. For centuries we have been asking what makes animals behave the way they do. And yet we still don’t really know what it is about our brains that makes us the only species able to ask the right questions, and to design and perform the experiments to answer them.

Many of us neuroscientists might agree that how we think about the brain came about from two major sets of findings. Towards the end of the 19th century it finally became accepted that the brain, like other parts of the body, is made up of cells. It was Santiago Ramón y Cajal’s tireless work (with the invaluable assistance of his brother Pedro) that was fundamental in this shift. This meant that we could apply the knowledge of cell biology to the brain. The second game changer was the demonstration that neurons actively produce electrical signals. In doing so, Hodgkin and Huxley beautifully put to rest the old argument between Volta and Galvani. This meant we had a grip on how information is coded in the brain.

CC-BY kubke

From this pioneering work, neuroscience evolved directing most of its attention to neurons and their electrical activity. After all, that is where the key to understanding the brain was supposed to be found. Most of what happened over the twentieth century was based on this premise. Neurons are units that integrate their inputs and assemble an appropriate output, passing the information on to another neuron or set of neurons down the line until you get to the end. In a way, this view of the brain is not too different from the wiring diagram of an electronic circuit.
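To make that textbook picture concrete, here is a minimal sketch (in Python, in the spirit of the Software Carpentry lessons mentioned elsewhere on this blog) of a neuron as an integrate-and-fire unit. This is illustrative only: the function name and every value (threshold, leak, weights, input pattern) are invented for the example, not taken from any real neuron or published model.

```python
# A toy integrate-and-fire 'neuron': sum weighted inputs over time,
# leak a little charge each step, and emit a spike when a threshold
# is crossed. All numbers here are arbitrary illustrative values.

def integrate_and_fire(inputs, weights, threshold=1.0, leak=0.1):
    """Return the time steps at which the unit 'fires'."""
    potential = 0.0
    spikes = []
    for t, x in enumerate(inputs):
        potential += sum(w * xi for w, xi in zip(weights, x))  # integrate this step's inputs
        potential -= leak * potential                          # passive leak toward rest
        if potential >= threshold:
            spikes.append(t)   # pass a spike on to the next neuron down the line
            potential = 0.0    # reset after firing
    return spikes

# Three input lines arriving over five time steps (made-up activity pattern).
inputs = [(1, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 0), (1, 0, 1)]
print(integrate_and_fire(inputs, weights=(0.4, 0.3, 0.5)))  # -> [2]
```

Even this caricature shows the core idea: the unit’s output depends on the weighted combination of its inputs, just like a component in an electronic circuit.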

Trying to understand the wiring of the brain, however, is not easy. There are thousands and thousands of neurons, each with a multitude of inputs and outputs. You can quickly run out of ink trying to draw the wiring diagram. It is because of this complexity that neuroscientists (just like scientists in many other disciplines) turn to simpler models. We have come to know some of the secrets of learning from studying the sea slug Aplysia, how the brain gets put together from flies and frogs, and even how neurons are born in adult brains from singing canaries. What all these models have in common is that we can tie a very specific aspect of brain function to a circuit we can define rather well. And we have learned, and keep learning, heaps from these models. The main thing we learn (and the reason these models continue to be so useful and fundamental for progress) is that the ‘basics’ of brains are quite universal – and once we know those basics well, it is a lot easier to work out the specifics in more complex brains.

Hagmann P, Cammoun L, Gigandet X, Meuli R, Honey CJ, Wedeen VJ, Sporns O (2008) Mapping the structural core of human cerebral cortex. PLoS Biology Vol. 6, No. 7, e159 (CC-BY)

Trying to understand the architecture of circuits has proven to be of major value (and this is what the connectome is about). But building the connections is not just about drawing the wires – you need to build in some variability: some connections excite while others inhibit, some neurons respond in predictable linear ways, others don’t. And when you are done with that, you will still need to start thinking about the things we have not spent much time thinking about: the other cells (glia) and the material that exists between cells (the extracellular matrix). More and more, we are being reminded that glia and the extracellular matrix do more than just support the neurons.
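A similarly hedged sketch of the ‘wires plus variability’ point: a connectome can be thought of as a signed, weighted connection matrix, where positive weights excite and negative weights inhibit. The three-neuron circuit and all its weights below are entirely made up for illustration.

```python
# A toy 'wiring diagram': weights[i][j] is the strength of the
# connection from neuron i to neuron j. The sign encodes the kind of
# connection: positive = excitatory, negative = inhibitory.

neurons = ["A", "B", "C"]
weights = [
    [0.0,  0.8, -0.5],   # A excites B, inhibits C
    [0.0,  0.0,  0.6],   # B excites C
    [0.3,  0.0,  0.0],   # C feeds back weakly onto A
]

for i, src in enumerate(neurons):
    for j, w in enumerate(weights[i]):
        if w != 0.0:
            kind = "excites" if w > 0 else "inhibits"
            print(f"{src} {kind} {neurons[j]} (weight {w:+.1f})")
```

Scaling this table up to billions of neurons, each with thousands of connections – and then adding nonlinear responses, glia, and the extracellular matrix – is what makes the mapping problem so hard.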

So it is not surprising to find some skepticism around these large brain projects. Over at Scientific American, John Horgan raises some valid criticisms about how realistic the ambitions of these projects are given the current state of neuroscience (read him here and here). Other lines of skepticism center around the involvement of DARPA in the BRAIN project (read Peter Freed’s views on that here or Luke Dittrich’s views here). Others criticize the lack of a clear roadmap (read Erin McKiernan’s views here). Still others have expressed concern that placing too strong an expectation on advancing our knowledge of the human brain will overlook the importance of exploring simpler circuits, something that was stated clearly in the original proposal [2].

Is now the right time?

Back in the 1990s, the Decade of the Brain had insinuated it would solve many of these problems; I don’t think it did. Despite the neuroscience revolution of about a century ago and the work that followed, we still have not been able to solve the mysteries of the brain.

But this decade is somewhat different. I am reading more and more work that has to do with the emergent properties of the brain – not just the properties of the neurons. And for the first time since I started my road as a neuroscientist, I am able to ask slightly different questions. I did not think that successful brain-machine interfaces would be something I’d get to see in my lifetime. And I was wrong. Even less did I think I would get to see brain-to-brain interfaces. But the work is moving forward there too.

The BRAIN project is not alone. In Europe, the Human Brain Project has received similar attention. We all expect that such boosts in funding for multidisciplinary research will go a long way toward moving things forward.

It is inevitable to draw parallels between the approach of these big brain projects and the National Science Challenges – parallels wonderfully expressed by John Pickering here.

I think that Erin McKiernan’s cautionary words about the BRAIN project might be quite appropriate for both:

Investing in neuroscience is a great idea, but this is not a general boost in funding for neuroscience research. This is concentrating funds on one project, putting many eggs in one basket.

[1] Brain Research through Advancing Innovative Neurotechnologies.
[2] Alivisatos, A. P., Chun, M., Church, G. M., Greenspan, R. J., Roukes, M. L., & Yuste, R. (2012). The Brain Activity Map Project and the Challenge of Functional Connectomics. Neuron, 74(6), 970–974. doi:10.1016/j.neuron.2012.06.006

#SciFund is up at RocketHub!

Posted in Science, Science and Society by kubke on November 2, 2011

What if scientists were to crowd-source funding for their research?

Yup, you heard right. Many scientists are asking the question: what would people, rather than established funding agencies, put their pennies on…

And the answer will be found through the SciFund challenge. 49 projects, 44 days left to raise money on RocketHub.

From athlete’s foot to climate change, crayfish, cancer and – yes – zombies, you can find an array of projects looking for a donation. So here is an invitation for you to head on over to RocketHub and look at what scientists are asking for help to fund.

And yes, here is my plug:

All the credit to Daniel Mietchen for putting this together! You can fuel this project here: Beethoven’s open repository of research.

So, come on, start clicking!

Changing my mind at Webstock

Posted in Science, Science and Society by kubke on February 20, 2011

I got into science because I love knowledge. But even more, I deeply love the process of creating new knowledge.

by Nasose3

Over the years I have become increasingly disenchanted and frustrated with the scientific system, because it seems less about that process and more about business.

I don’t like business. I think of business as a means to get money for money’s sake. And money does not motivate me. So over the past few years, I have found it difficult to align my personal motivations with those of science.

To me science should not be about money. It should be about discovery, about knowledge, and about transmitting that knowledge. (Students and post-docs should contribute to a lab, but they should also receive something from the lab to take with them.) I cringe when I hear scientists choosing post-docs based on what they are going to ‘bring’ to the lab without considering what the lab is going to do for ‘their’ careers. And I have been hearing this position much too often.

Don’t get me wrong. I don’t have an intrinsic problem with science being commercialized. It is the ‘for money’s sake’ that does not engage me.

A bit over two years ago I met Nat Torkington. One of the many things I learned from him is to step outside of my comfort zone, do things that I would normally not do, and see what happens. This year, my walkabout took me to Webstock.

Webstock is, well, as the name suggests, a kind of web conference: a mixture of TED and a rock concert, with a unique energy about it. And it ticked my three boxes:

I had fun

I learned something new

I changed my mind about something.

by h.koppdelaney

I changed my mind about business.

Business can be fun, can be creative, and can be motivated by things other than just the money. Hearing the speakers made me realize that it is not business that I am not into: it is the traditional business models that I don’t like and don’t fit into. I heard and talked to many young, creative people working on successful open source and not-so-open-source projects and startups, and every time I thought ‘Wow, it must have been special to be a part of that’.

So what is [one of] my take-homes?

Why can’t I run my science this way? Why not think of my lab as an Instapaper, Twitter, Catalyst IT, or Silverstripe? And what would I have to change to make that happen? What is it about the teams that made these projects successful? How did these projects decide which features to include or leave out? When to release? What to release?

So I had fun, and I learned. But I also changed my mind. I can like business. Just the right kind of business.

Which takes me back to the drawing board.

In the meantime, the room collectively took notes on Google Docs – take a look if you want to see some great messages.

Thoughts on Hunter’s statement on Science, Climate Change and Integrity

Professor Keith A Hunter has published a statement on ‘Science, Climate Change and Integrity‘ on the Royal Society of New Zealand website. His position with respect to the controversies surrounding climate change is made clear, as is his call for a re-examination of attitudes on both sides of the argument.

The controversies surrounding the science of climate change underscore the need for a more open approach to the reporting of scientific data, and Professor Hunter’s statement makes a strong argument for moving in that direction. A while back, Cameron Neylon [1] wrote in his blog, in reference to the CRU email leaks, that

[…] scandal has exposed the shambolic way that we deal with collecting, archiving, and making available both data and analysis in science, as well as the endemic issues around the hoarding of data by those who have collected it.

There are many arguments in favour of making scientific data openly available. In a recent commentary in Science, Jean-Claude Bradley is quoted as saying:

“It’s sort of going away from a culture of trust to one of proof,” Bradley says. “Everybody makes mistakes. And if you don’t expose your raw data, nobody will find your mistakes.”

Along the same lines, Hunter argues that

“Science is a rational endeavour that is based on logical and critical analysis of scientific theories in the light of actual evidence. It follows that scientific information, including a transparent description of how the data has been processed and tested against hypotheses, must be publically available, especially when it has been publicly funded […]”

Without this open approach, the validity of scientific information has to be entrusted to the peer-review system. But even Professor Hunter echoes the concerns of the scientific community at large, that

“while we place great faith in the peer review process to weed out ideas that are wrong, peer review is not perfect and can be abused by both sides.”

He further argues that:

“If the intensity of the personal attacks on climate scientists over recent months are to have any positive effect, it will be the adoption of a more transparent approach to the dissemination of information.”

Two sides of the coin: Public/Open Access vs Open Data

Although the issues surrounding open data and open access can be seen to sit under the same umbrella, they really deal with two slightly different aspects of the dissemination of information. While public or open access to published work is now required by many public funding agencies, and is important for the public dissemination of information, it does not in itself solve issues such as those raised around Climategate.

I am a strong supporter of Open Access publishing – whereby the public has access to the published information – but unfortunately it shares some of the same shortcomings with toll-access publishing when it comes to the review process: the process is not fail-proof. The Public Library of Science is to be commended for opening up post-publication discussion of the work it publishes, making it possible to highlight both shortcomings and strengths in the published material, and in this way allowing errors associated with the peer-review process to be corrected. Yet even within this open model, criticisms are raised about the published material itself, and this solution falls short of solving the kinds of issues raised by the opponents of climate change: the raw data are not published by default (although they can be requested, and it is PLoS policy that authors make them available).

Open Data, on the other hand, makes the raw data available: the analysis can be checked, rechecked, rehashed and reanalysed by other people. And, as Jonathan Eisen is quoted as saying in the Science article, people do find mistakes.

Opening the data allows those mistakes to be found and corrected, and that can only be good, since the ultimate goal of science is not to defend one’s pet theory but to keep one’s mind open and find the answer that is most consistent with the data. As Lawrence Krauss said in his lecture:

“I would argue that the definition of open-mindedness is forcing our beliefs to conform to reality, and not the other way around.”

Opening the data will inevitably lead to agreed-upon interpretations that conform to reality: it allows the conversation to centre on scientific facts rather than on personal attacks on the scientific community or on the groups of vocal skeptics. And, ultimately, finding the best answer is the one common ground shared by both groups.

So what next?

Hunter states that:

[…] it is only fair to expect the critics of the mainstream scientific views […] to adopt an equally transparent approach with their own information, and with their own use and re-analysis of data entrusted to the public domain. They should also subject their findings to rigorous peer review. Opinion, however forthrightly expressed, will not change the laws of basic science.

As far as I know, New Zealand’s public research funding agencies have yet to adopt an Open Access mandate, let alone one on Open Data. So it was not without surprise that I read Hunter’s statement that

“In this regard, the Royal Society of New Zealand intends to play its part by developing a Code of Practice for Public Dissemination of Information that it hopes will assist the various New Zealand science organisations in improving their practices.”

This approach is needed if common ground is to be found between scientists, and between scientists, society, and policy makers. Disagreement is perhaps the strongest force that keeps the interpretation of scientific data within the bounds of the most likely explanation. But disagreement can only move things in a positive direction when all parties involved have equal access to information. In science, that is called the data.

[1] Cameron Neylon’s blog has now moved here.

[2] Disclaimer: I receive and have received research funding from the Royal Society of New Zealand and am an Academic Editor of PLoS One.