As new technologies supercharge the field of bioacoustics, researchers can better listen to environmental changes — and use the information to guide conservation efforts.


After Hurricane Maria tore through Puerto Rico in 2017, photos showed downed trees, flooded communities, collapsed homes and buckled roads. But what did the aftermath sound like?

Ben Gottesman, now a member of the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab, was part of a team of researchers from Purdue University’s Center for Global Soundscapes and the National Oceanic and Atmospheric Administration that monitored changes in the soundscape on land and in the water to better understand how birds, bugs, shrimp, fish and other animals responded to the disturbance.

The work is part of the growing field of bioacoustics, which combines biology and acoustics to gain insight into the world around us by listening. It’s become a potent tool for research and conservation as recording devices have improved and gotten cheaper — and as machine learning can crunch massive amounts of data. That’s helped researchers from the Yang Center and other institutions better understand everything from right whales in the North Atlantic to tiny katydids in the canopies of tropical forests.

The Revelator spoke to Gottesman about which animals bioacoustics can help us study, how researchers sort through millions of hours of recordings, and why new technologies aren’t just for experts.

You did your Ph.D. in soundscape ecology. What is that?

It’s the study of sound in our environment and trying to understand places through how they sound. That can be learning about biodiversity through recording and analyzing the sounds from different ecosystems or doing a more comparative approach where you’re trying to understand what makes this tropical forest sound the way it does. What are all the sounds in a given place? How do they vary over space and time?

I think a lot of places have something to tell us about either environmental issues or interesting behaviors.

What can sound tell us about a changing world?

It can tell us a lot. You can look over decades and see ocean noise levels doubling every 10 years, which corresponds with the increase in shipping. A lot of these long-term anthropogenic stressors are tied to acoustic changes.

Likewise, long-term changes in biodiversity also show up acoustically. There was a big study led by the Cornell Lab that found 3 billion birds have been lost [in North America] since the 1970s. I imagine that's led to a desaturation of dawn choruses, which are a peak period of biophony, or sounds produced by animals. Biodiversity loss carries an acoustic signature in many places, with a desaturating soundscape or a loss of dynamics.

Then over shorter time scales, you have impacts such as logging or mining that can also have a large effect on the soundscape. Through that we can learn about changes to the animal communities.

In my work I studied the impact of Hurricane Maria. It’s not a direct human-caused disturbance, but there was a marked reduction in dawn chorus periods where usually you have a whole vibrant mix of birds that are singing. That declined sharply after the storm, likely signifying the initial damage wrought by this intense hurricane. The insect choruses were depleted as well.

But then we had these hydrophones recording just a few miles away, and there was very little change. The fish choruses were present during the night just like before. The snapping shrimp were still snapping away at very similar levels. That was one example early on that gave me this firsthand experience learning about how soundscapes can convey ecological changes.

People are trying to use passive acoustic monitoring as a tool to understand the degree to which places are being affected by all kinds of different stressors. But it can also [be used to understand] restoration. Acoustics is a really great way to understand what species are profiting from such restoration, how long it takes for places to bounce back, and what restoration methods are most effective given your management goals. My colleague Vijay Ramesh just published a paper about understanding the effectiveness of restoration using passive acoustic monitoring.

What kinds of tools are used for this?

There are passive recording technologies, and those are typically recorders with battery and storage that you can leave outdoors. The SwiftOnes that we make at the Yang Center can record for more than a month continuously. Underwater, the tech is more advanced. We’ve developed underwater recorders called Rockhoppers that can record for more than a year straight as deep as 1,000 meters.

There are a lot of next steps, or frontier areas. We're working toward real-time detection and streaming of sounds. Say you're interested in some sort of human stressor like illegal logging or poaching: the ability to record, but also to analyze and then ping out what's going on in real time, is an area that people are actively working on. We have some units in Hawai‘i that are doing just that, which is quite exciting.

One shortcoming of these fixed sensors is that they're robust through time, but they don't cover a big area. So to complement that, there are also acoustic gliders in marine environments. Some just drift, and others you can program to follow routes. We're thinking about how that can take shape terrestrially, potentially using drones. That's one area people are thinking about in order to increase the spatial resolution of acoustic sampling.

How do you analyze all this data?

That’s one of the big challenges. When I think of even just a few months of data from a few sites, I get the image of a glacier of sound. How are you able to break that up into more manageable units or get some insights and analyze this mountain of data? Especially as projects scale, it’s increasingly important to have automated tools that can go through and find signals that you’re interested in or make automated measurements of the soundscape.

A bioacoustic monitoring project in the Sierra Nevada led by Connor Wood, for example, collects more than a million hours of sound each year from more than 1,000 sites. He's worked with Stefan Kahl to create BirdNET, a very powerful algorithm for classifying bird sounds within an audio data set. That's just one example of the machine learning tools that are changing the game and making it possible for us to analyze these enormous soundscape data sets.
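To make that kind of automated classification concrete, here is a minimal sketch of how a researcher might run a BirdNET-style analysis over a single field recording using the open-source birdnetlib Python wrapper. The file name, coordinates, and confidence threshold are assumptions for illustration; the interview doesn't prescribe any particular code, and exact class names and options may differ by package version.

```python
# Minimal sketch: run BirdNET species classification over one field recording,
# assuming the `birdnetlib` package (pip install birdnetlib) is available.
from datetime import datetime

from birdnetlib import Recording
from birdnetlib.analyzer import Analyzer

# Load the BirdNET analyzer model once; it can be reused across many recordings.
analyzer = Analyzer()

# Location and date help narrow the candidate species list for the recording.
recording = Recording(
    analyzer,
    "swift_unit_042_dawn.wav",   # hypothetical file from a passive recorder
    lat=18.2,                    # hypothetical deployment coordinates
    lon=-66.5,
    date=datetime(2023, 6, 14),
    min_conf=0.5,                # keep only detections scored above 50% confidence
)
recording.analyze()

# Each detection carries a species label, a confidence score, and the
# start/end time (in seconds) of the audio window it was found in.
for det in recording.detections:
    print(det["common_name"], det["confidence"], det["start_time"], det["end_time"])
```

Scaled up, the same loop runs over thousands of hours of audio, which is how a project like the Sierra Nevada monitoring effort can turn raw soundscape recordings into species detections.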

You mentioned birds, which are what I often think of with acoustic monitoring. What other species can we learn about now with acoustic tools that we've been missing before?

I can think of Laurel Symes’ work. She uses passive acoustics to understand the biodiversity and relative abundance of katydids in tropical forests. Most of these katydids live high up in the canopy, and their sounds are ultrasonic. So even if you’re there trying to survey them, you won’t be able to hear them.

A round-headed katydid. Photo: Terry Priest (CC BY-SA 2.0)

But with passive acoustic monitoring, you can get a sense of their behavioral patterns and phenology, which is how their vocal activity changes over the course of the year. And then ultimately, the golden goose is to get a sense of how many of these insects are in these forests because they’re such a critical food source in these tropical forest communities.

That's one example, but there are so many. A large part of the work at the Yang Center is dedicated to the marine environment in places like Antarctica, off the coast of California, or Cape Cod Bay, which is home to the endangered North Atlantic right whale. These animals would be extremely difficult to survey if not for passive acoustic tools.

Especially in aquatic environments, whether it’s freshwater or marine, acoustics is really giving us a window into studying creatures that otherwise are really logistically difficult to survey.

Can this kind of technology be used by regular people?

Yes, and there's a tremendous power in making tools accessible to a wider audience. That's happening with these acoustic technologies. BirdNET, which I spoke about earlier, is an app that anybody can download and use to identify the birds singing in their environment. Merlin is another app from the Cornell Lab that has a similar goal of recording and detecting different bird species on your cell phone.

As citizen science continues to be on the rise with eBird and iNaturalist, I think sound will become an increasingly large part of those efforts. You can go out and record the sounds of different species and archive them in this huge, publicly accessible library, called the Macaulay Library at Cornell.

Equipping people with the tools to do this automated classification of different sounds around them is actively happening now — mainly with birds — but that will expand.

Once you have names for things, it makes you appreciate them more and it’s a real portal toward facilitating more learning and engagement. I’m a big believer that sound has the power to do that.

Sound has captivated me, and it can really spark something in people. It could be sounds from Antarctica or it could be your backyard pond with water beetles clicking and bugs doing these rhythmic whirrs. The surprise and mystery can just captivate you and make you want to go outside — and hopefully do some recordings yourself.



Tara Lohan worked as The Revelator's deputy editor from 2018 to 2024. She is the editor of two books on the global water crisis and is working on a book about dam removal.