To tell a bird by its song

Issue: Network News Spring 2014, Vol. 27 No. 1
Section: Site News

Scientists at the Andrews Forest (AND) Long Term Ecological Research (LTER) site have developed new computer algorithms that can identify birds from their recorded songs. The technology will address questions about bird phenology, such as the timing of spring arrival and the daily time of first singing at a site, and ultimately, scientists hope, the birds will serve as the proverbial canaries that warn us of the onset and effects of climate change.

About five years ago, ecologists at AND began collaborating with computer scientist colleagues at Oregon State University (OSU) on a project to automatically identify bird songs recorded in the wild. Though computers had successfully identified very clean recordings of individual bird songs in the lab, no one had done this in the wild, where many bird species sing at once amid background noise from many sources.

The approach taken by AND and OSU scientists is important because each year ecologists spend thousands of person-hours in the field counting birds the old-fashioned way: sitting or standing still for 5-10 minutes at a time and noting all bird species heard. This approach is time consuming, prone to observer bias, and allows only very limited sampling. In contrast, a microphone and recorder can be placed at a location and left to record bird songs year-round.

We currently have recorders permanently placed at 14 sites across an elevation gradient at the Andrews Forest, so we can test how birds are responding to climate change over both short and long time periods. We use a widely used recorder, the Songmeter, to capture bird songs in the wild. The algorithm, currently implemented in C++, is based on “computer vision,” a machine learning (artificial intelligence) approach: recordings are converted into spectrograms, visual representations of sound, which the algorithm can analyze much as it would an image. Though such techniques have been used before to automatically identify images, they had not previously been applied to sound. We now have more than 12 TB of data, representing four years of recordings, but we believe we have only begun to scratch the surface in terms of mining these data for ecological patterns.
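
To make the spectrogram-as-image idea concrete, here is a minimal sketch in Python (chosen for illustration; the actual system is the team’s C++ code) that converts a recording into a spectrogram, thresholds it above the background noise, and pulls out candidate song segments. The file layout, threshold, and output format are our assumptions, not details of the published system.

    # A minimal sketch of treating a spectrogram as an image; this is
    # illustrative Python, not the Briggs et al. C++ implementation.
    import numpy as np
    from scipy.io import wavfile
    from scipy.ndimage import find_objects, label
    from scipy.signal import spectrogram

    def song_segments(wav_path, db_above_noise=20.0):
        """Find candidate song segments as bright blobs in the spectrogram."""
        rate, audio = wavfile.read(wav_path)
        if audio.ndim > 1:                       # mix stereo down to mono
            audio = audio.mean(axis=1)

        # Rows of the spectrogram are frequencies, columns are times,
        # so it can be processed exactly like a grayscale image.
        freqs, times, power = spectrogram(audio, fs=rate, nperseg=512)
        img = 10 * np.log10(power + 1e-10)       # power in decibels

        # Threshold relative to the median (background noise), then find
        # connected bright regions; each region is a candidate syllable.
        mask = img > np.median(img) + db_above_noise
        regions, _ = label(mask)

        segments = []
        for f_slice, t_slice in find_objects(regions):
            segments.append({
                "t_start": times[t_slice.start],
                "t_end": times[t_slice.stop - 1],
                "f_low": freqs[f_slice.start],
                "f_high": freqs[f_slice.stop - 1],
            })
        return segments   # shape features of these blobs feed a classifier

In the published system, each segment is described by a richer set of features, and a multi-instance multi-label classifier assigns a set of species to each recording (Briggs et al., 2012).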

Our team plans to use the auto-ID approach to collect a variety of interesting ecological data. For example, we can detect when a migratory bird first arrives at a site and starts to sing in the spring, as the sketch below illustrates. Doing so will enable us, over time, to test whether spring arrival dates track the weather in a particular year, and whether arrival dates are shifting in response to climate change.
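
As a simple illustration of the arrival-date idea (a sketch under assumed data, not our production analysis), one can scan a per-day table of song detections for the first sustained run of singing; the counts and thresholds below are hypothetical.

    from datetime import date, timedelta

    def first_arrival(daily_counts, min_songs=5, run_days=3):
        """Return the first date that begins `run_days` consecutive days
        with at least `min_songs` detections, so that a single
        misidentified song does not count as an arrival."""
        for d in sorted(daily_counts):
            window = (d + timedelta(days=i) for i in range(run_days))
            if all(daily_counts.get(w, 0) >= min_songs for w in window):
                return d
        return None

    # Hypothetical detections for one migratory species at one site:
    counts = {date(2013, 4, 10): 1, date(2013, 4, 14): 7,
              date(2013, 4, 15): 9, date(2013, 4, 16): 12}
    print(first_arrival(counts))   # -> 2013-04-14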

We will also be asking (and answering) more theoretical questions about how birds partition “sound space” (song niches) by altering the timing or pitch of their songs. The effort gives us an unprecedentedly high-resolution look at song rates across species, every day, for the entire breeding season. For instance, our most recent data show, at 1-minute resolution, the time of day at which each species’ singing peaks at every one of our sites across the Andrews LTER. From these data we have learned, for example, that the Varied Thrush’s singing peaks twice a day, once at 5:30 a.m. and again at 7 p.m.
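
To show how such a song-rate curve can be built (a sketch with made-up timestamps, not the real pipeline over the full season of recordings), each detection is simply binned by its minute of the day:

    from collections import Counter
    from datetime import datetime

    def song_rate_curve(detection_times):
        """Count detections in each of the 1440 one-minute bins of a day."""
        bins = Counter(t.hour * 60 + t.minute for t in detection_times)
        return [bins.get(m, 0) for m in range(24 * 60)]

    # Made-up Varied Thrush detections; the real data show twin peaks
    # near 5:30 a.m. and 7 p.m.
    detections = [datetime(2013, 5, 1, 5, 30), datetime(2013, 5, 2, 5, 30),
                  datetime(2013, 5, 1, 19, 0), datetime(2013, 5, 2, 19, 1)]
    curve = song_rate_curve(detections)
    peak = max(range(len(curve)), key=curve.__getitem__)
    print(f"busiest minute: {peak // 60:02d}:{peak % 60:02d}")  # 05:30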

This technology has only recently become available (see Briggs et al., 2012; http://www.fsl.orst.edu/flel/pdfs/Briggs_2012_JASA.pdf). It was developed by OSU computer scientists (Forrest Briggs, Xiaoli Fern, Raviv Raich) and AND ecologists (Matt Betts, Sarah Hadley, Adam Hadley), and we expect to offer it free of charge through an interface such as the R statistical software. The next challenge will be to make the software user friendly, and perhaps even to develop a smartphone app that lets ordinary users auto-ID bird song. We believe this could lead to extensive and exciting citizen science efforts.

For more information, see http://www.audubonmagazine.org/articles/birds/what-do-birds-do-us?page=4 (scroll down to where Matt Betts is mentioned) and http://www.kgw.com/news/local/OSU-researchers-create-high-tech-way-to-monitor-birdsongs-156085895.html; and the original paper at http://www.fsl.orst.edu/flel/pdfs/Briggs_2012_JASA.pdf

Reference

Briggs, F., Lakshminarayanan, B., Neal, L., Fern, X., Raich, R., Frey, S.K., Hadley, A.S., and Betts, M.G. 2012. Acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach. Journal of the Acoustical Society of America 131: 4640-4650.