Season 5: Episode 15
Whynotamus
New technologies like artificial intelligence have helped to accelerate and open up the entire world of bioacoustics, launching us into a new era of communication with the more-than-human world. In this episode, we explore the promise and perils of using AI in bioacoustics.
Guests
Aza Raskin
Aza Raskin is a National Geographic Explorer and co-founder of Earth Species Project, an international nonprofit dedicated to using AI to decode animal communication and transforming how human beings relate to the rest of nature.
Taylor Hersh
Taylor Hersh is a behavioral biologist broadly interested in the interplay among vocal complexity, social complexity, and culture in animals. She is currently a postdoctoral researcher in the Cetacean Communication and Cognition Group at the University of Bristol, where she studies bottlenose dolphin communication.
Sara Keen
Sara Keen is a behavioral ecologist and electrical engineer specializing in acoustics, signal processing, machine learning, and the ecology of animal signals and societies. She received a PhD in Neurobiology and Behavior from Cornell University in 2020. Prior to joining Earth Species Project she worked in acoustic research labs at the National University of Singapore and the Cornell Lab of Ornithology and was a postdoctoral fellow at Stanford.
Ellen Garland
Dr. Garland completed her Ph.D. in Bioacoustics at the University of Queensland, Australia (2011) followed by a National Academy of Sciences (NRC) postdoctoral fellowship at the Marine Mammal Lab at NOAA (Seattle, USA) and a Royal Society Newton International Fellowship at the University of St Andrews (UK). In 2017, she started a Royal Society University Research Fellowship investigating the cultural transmission, vocal learning, and function of humpback whale song. Her broad research interests include animal culture, social learning, bioacoustics, and behavioural ecology.
Credits
Special thanks to the following people and groups for the use of their whale recordings: Ellen Garland, Taylor Hersh, the Dominica Sperm Whale Project, the Hal Whitehead Lab at Dalhousie University, Valeria Vergara, the Glacier Bay National Park & Preserve, the Ships, Whales & Acoustics in Gitga’at Territory Project, the Scripps Whale Acoustics Lab, and the Lofoten-Vesterålen Ocean Observatory. Thanks also to Marc Anderson for the elephant sounds which he shared on the website Xeno Canto. Threshold is made by Auricle Productions, a non-profit organization powered by listener donations. You can find out more about our show and support our work at thresholdpodcast.org.
Transcript
[00:00] INTRODUCTION
AMY: Listening to animals' voices can save their lives.
HUMPBACK WHALES SINGING
AMY: That's not an opinion, it happened in 1970, when a group of biologists released the album Songs of the Humpback Whale.
HUMPBACK WHALES SINGING
AMY: For people with long cultural ties to humpbacks, like Aboriginal and Torres Strait Islander people in Australia, and Inuit people in the Arctic, it was no surprise that they sang. But millions of other people around the world had never heard these sounds, and they were utterly transfixed. The album went multi-platinum. National Geographic sent it out to all subscribers. Some tracks were even blasted into space on the Voyager Golden Record—a message to any future alien listeners that there are multiple forms of intelligent life on Earth.
HUMPBACK WHALES SINGING
AMY: Songs of the Humpback Whale helped to bring an end to commercial whaling which was still decimating many species at that time. Listening to humpback voices changed people, and that changed the future of life on Earth for many whales.
HUMPBACKS
AMY: The album had the impact it did because whale songs are inherently powerful, but also because a new combination of technologies made it possible to record that power and share it widely. A marriage of hydrophones, portable tape recorders, and the blossoming recording industry allowed Songs of the Humpback Whale to break through the noise, and change peoples’ hearts.
AMY: Welcome to Threshold, I'm Amy Martin, and the question we're going to tackle in this episode is: can we do this again? Can listening to animals with today’s emerging technologies startle us awake, and inspire us to change their lives for the better? Artificial intelligence is rapidly accelerating our ability to decode animal sounds—and we could be heading into an entirely new era of communication with the more-than-human world. Maybe, in the future, we won't have to guess what our dogs and cats are saying. We'll just run their barks and meows through an app on our phones, and it'll spit out a translation.
AZA: Our ability to understand is limited by our ability to perceive. What AI does is it throws open our ability to perceive and hence our ability to understand.
AMY: Aza Raskin is a cofounder of Earth Species Project, a nonprofit organization focused on using artificial intelligence for animal communication. The possibility of using these new tools to understand what more of our fellow Earthlings are saying feels incredibly exciting. But just like any tool, AI can be used to build and to destroy. In this episode, we're going to explore the promise and perils of using AI in bioacoustics through conversations with Earth Species Project leaders and two scientists who are studying some of our oceans' most mysterious talkers.
THEME MUSIC
[03:51] A SEGMENT
AZA: We are using AI to learn how to listen to and communicate with the rest of nature.
AMY: Again, this is Aza Raskin. One of the co-founders of Earth Species Project.
AZA: And then people normally'll be like, OK, what does that mean? I'm like, OK, talking with animals. It means we're learning how to communicate with like whales and orangutans, Egyptian fruit bats, zebra finch. And actually, even when I say that, I didn't get it quite right because it's not really about learning how to talk to. It's about learning how to listen to and learn from.
AMY: Aza was born with an inside track to the tech industry. His father, Jeff Raskin, led the development of the Macintosh for Apple, so one might expect to see Aza inhabiting the C-suite of a company like Meta or Google. But instead, he's become something of a tech watchdog. You might have seen him in the film The Social Dilemma, about the corrosive impacts of social media on our mental health and democratic institutions. And in addition to Earth Species Project, Aza co-founded the Center for Humane Technology, a think tank of sorts, dedicated to educating the public about the impact of technology and shifting policy around tech to serve the common good. Given all this, it is in some ways surprising that Aza has founded an organization which is championing the use of what some are saying is the most dangerous technology humans have ever created—artificial intelligence. But he believes if AI is harnessed to the right sort of incentives, it could help us build bridges with the rest of life that we urgently need.
AZA: What Earth Species is doing is we are building these, you know, large scale models to try to decode and understand the communication systems of other species. And our hope is that by doing that, we can shift fundamentally the perspective that humanity has about its role on Earth and in the universe.
AMY: I talked to Aza in the fall of 2023, which in terms of the pace of AI development feels like a couple of centuries ago. But my goal wasn't to get breaking news from him; I wanted to hear about the ideas that led him to found Earth Species Project in the first place—to understand his incentives.
AZA: The biggest hope that I personally have for Earth Species is that it's part of whatever needs to happen so that we can recognize when we are disconnected from ourselves, from each other, from the natural world.
AMY: Aza envisions a future in which our ability to decode the communication of whales or elephants or maybe naked mole-rats could blow open boundaries in our minds that we're not even aware we have.
AZA: There was this moment in 1995, when the Hubble telescope got pointed at an empty patch of the sky. And it was actually a big risk for the astronomer that did that, because you're like, why would you point the telescope at a place where there is nothing? But what he discovered was everything. It was the most galaxies that had ever been seen in one spot. We're going to take this new tool and point it at places that science and Western society have thought are empty. And what we're going to discover is everything.
AMY: So these are very big ideas and they bring up a lot of questions which we’re going to come back to throughout this episode. But before we go any further into the philosophical realms here, I want to explore how this actually works—how AI is being used in bioacoustics, by researchers on the ground. Or in the water.
SPERM WHALE CODAS
TAYLOR: So those are the vocalizations they make when they're socializing.
AMY: I'm sitting next to bioacoustics expert Taylor Hersh, listening to the clicks of a sperm whale. She tells me each group of clicks is what scientists call a coda.
TAYLOR: So it's these sort of patterned series of clicks. And the closest human equivalent is Morse code.
SPERM WHALE CODAS
TAYLOR: All sperm whales all over the world make codas. But it's the types of codas or the rhythmic patterning of the codas that differs.
AMY: Taylor's not affiliated with Earth Species Project. She was based at a university in the Netherlands when we met; now she's a research fellow at the University of Bristol, in the UK. Earth Species is just one of many organizations and academic institutions working on using AI to decode animal communication. Taylor says she got interested in this work because she's always loved animals, and because…
TAYLOR: I think I've always liked to eavesdrop.
MUSIC
AMY: So put yourself in the shoes of a researcher eavesdropping on these whales, trying to make sense of these sounds. What do they mean to the whales? How can we be sure that they mean anything at all? How on Earth do you begin to decode a coda? Well, Taylor says the first step in answering these questions is not sitting down at the computer. You have to get out in the field, and spend time with the animals, observing how they live their lives. She got to know sperm whales through the Dominica Sperm Whale Project, a long-running research project based in the eastern Caribbean.
TAYLOR: Sperm whales, I think, are one of the weirdest looking whales. A third of their body is basically nose.
AMY: They live all over the world, and they're the biggest of the toothed whales—they can be up to 20 meters, or 65 feet long—and they spend the bulk of their time deep underwater. They can hold their breath for more than an hour.
TAYLOR: They have these really bulbous heads and that is sort of their superpower.
AMY: Those enormous heads house a complex sound production and receiving system which allows them to see with sound in the lightless depths of the ocean. And although these clicks sound quiet here, they're actually the loudest sounds produced by any animal on the planet. If you were in the water close to a whale, these clicks would hit you like intense beats of pressure getting pumped out of a huge speaker, rattling your bones.
SPERM WHALE
AMY: Sperm whales click to find squid and other food, but also to communicate with each other. They're very social—they live a lot like elephants do, in female-led pods of ten to twenty tightly connected individuals.
TAYLOR: Family is really important to sperm whales. Daughters will stay with their mothers for life, and they form these things called social units, which are like family groups.
AMY: Those family groups are then nested within larger clans that might include tens of thousands of whales. And one of the ways they define clan membership is through these patterns of pulses, or codas. They're made primarily by the females, Taylor says, and they seem to have something to do with reinforcing bonds.
TAYLOR: You rarely have just one animal making one coda one time. It's more like a conversation, but the conversation is kind of going like, “we're friends,” “we're friends,” “we're friends,” “we're friends.”
AMY: She says some codas are universal. They're used by sperm whales around the world, while others are extremely clan-specific. For example, let’s take two groups of sperm whales that live near the Galapagos Islands, known as the Regular Clan and the Plus One Clan. Here are some codas from the Regular Clan…
SPERM WHALE CODAS: REGULAR CLAN
AMY: And the Plus One Clan…
SPERM WHALE CODAS: PLUS ONE CLAN
AMY: Did you hear the difference?
TAYLOR: Regular clan whales make a four click coda: click, click, click, click. Or they could make a six click coda: click, click, click, click, click, click.
AMY: The Plus One clan might also make a four-click coda.
TAYLOR: But it sounds like click, click, click...click. Or they'll make a coda with six clicks, but it sounds like: click, click, click, click, click...click.
SPERM WHALES
TAYLOR: Only whales from that clan will make that coda type, and whales from other clans pretty much never will.
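For listeners who think in code, the rhythmic difference Taylor describes can be sketched as a toy classifier. To be clear, this is purely illustrative: the function name, the timing values, and the threshold are all invented here, and real coda analysis is far more involved.

```python
def coda_style(click_times, pause_ratio=1.8):
    """Label a coda from its click timestamps (in seconds):
    'plus_one' if the final inter-click gap is much longer than the
    earlier gaps, 'regular' if the clicks are evenly spaced."""
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    if len(gaps) < 2:
        return "unknown"
    typical = sum(gaps[:-1]) / len(gaps[:-1])  # average of the earlier gaps
    return "plus_one" if gaps[-1] > pause_ratio * typical else "regular"

print(coda_style([0.0, 0.2, 0.4, 0.6]))  # evenly spaced clicks → regular
print(coda_style([0.0, 0.2, 0.4, 0.9]))  # long pause before the last click → plus_one
```

In practice researchers work from recordings, not clean timestamps, and clan repertoires involve many coda types, but the underlying idea is the same: the information is carried by the rhythm of the clicks.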
AMY: In other words, a sperm whale clicks with its clique. And that led Taylor and other researchers to wonder if they might be using these sounds as a type of symbolic marker—an arbitrary symbol that denotes membership in a cultural group. We humans use these all the time; we might pin a religious symbol onto our clothing, or tattoo a gang sign onto an arm. And symbolic markings can also be embedded in how we talk.
TAYLOR: If we're in the U.S. and I say, y'all, you have a guess as to where I'm from, or if I were to say yinz, you might not know, but that means you're from Pittsburgh, Pennsylvania. (laughter)
AMY: I did not know.
TAYLOR: Yeah. So there's like these tricks that we can use to figure out what cultural groups people are from or identify with. So the question has been for a while whether animals also do that.
AMY: Many sperm whale experts suspected the answer was yes, but the only way to know for sure was to compare a lot of recordings of codas from lots of different clans. So for her PhD, Taylor decided to pool as much data as she could to try to find out if sperm whales were using codas as symbolic markers.
TAYLOR: So we were able to get sperm whale researchers from across the Pacific Ocean. You know, researchers off of Japan, researchers who had recordings off of Tonga, who had recordings off of Canada and Chile. And we were able to compare the codas being made in all of those different regions.
AMY: And as she analyzed the data, some really interesting patterns started to emerge. The sperm whales used their clan-specific codas more often when other clans were close by. And that indicates that these whales are using these sounds kind of like the way we use flags—as symbolic markers of group identity. Taylor and her collaborators published the results of their study in 2022, and she believes it was the first quantitative evidence of a non-human animal using a symbolic marking to denote culture.
TAYLOR: This was exciting because as far as we know, it's one of the first really conclusive pieces of evidence that at least some animal that isn't a human can do this as well.
AMY: To us, these codas sound like fairly nondescript drumbeats, but they apparently contain core information about belonging and relationship for the whales: who you fit in with, where you live. And again, in order to figure this out, Taylor needed lots and lots of data. But she also needed tons of time to sort through all the recordings.
TAYLOR: Yeah, the problem that bioacoustics as a field is facing right now is that we've been able to develop microphones and hydrophones and ways to record massive amounts of sound. You can put a hydrophone into the ocean and leave it there for a year, recording every day. And when you get that back, though, how do you begin to make sense of all this data you've collected?
AMY: I heard some version of this from almost every bioacoustics researcher I interviewed for this season. Our capacity to collect sound has exceeded our ability to categorize and organize it, let alone understand it. Lauren Hawkins talked about this with her fish choruses, Joyce Poole with her decades of elephant sounds, Gabriel Jorgewich-Cohen and his turtles.
TAYLOR: I've listened to probably thousands of hours of recordings of sperm whales making codas and then manually— through a software, but still manually—gone in pulled each coda out of those files. So that's a huge time investment and it's also just not feasible anymore with the amounts of data we're collecting.
AMY: So this is where artificial intelligence starts to come into the picture. One relatively basic way it can be used in bioacoustics is by sifting through recordings, and telling scientists where the good bits are.
TAYLOR: So there's been really beautiful partnerships forming between these different research communities, you know, hardcore acoustics people, but also machine learning and AI people to especially find out if there are ways that we can take hundreds of hours of recordings, plug it through a system, tell it we're looking for sperm whale coda clicks, and have it tell us, “OK, go to this time point in this file, this time point in this.”
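The "go to this time point in this file" step Taylor describes is, at its simplest, an event-detection problem: find the moments where impulsive energy rises above the background noise. Here's a minimal, hypothetical sketch, assuming a mono recording as a list of samples; the function name, window size, and threshold are invented, and real detectors use far more sophisticated models than a simple energy threshold.

```python
import math
import random

def detect_clicks(samples, rate, window=0.01, threshold_ratio=5.0):
    """Return start times (in seconds) of windows whose RMS energy exceeds
    threshold_ratio times the median window RMS (a rough noise floor)."""
    n = max(1, int(window * rate))
    rms = []
    for i in range(0, len(samples) - n + 1, n):
        chunk = samples[i:i + n]
        rms.append((sum(s * s for s in chunk) / n) ** 0.5)
    noise_floor = sorted(rms)[len(rms) // 2]  # median window energy
    return [i * n / rate for i, e in enumerate(rms)
            if e > threshold_ratio * noise_floor]

# Toy example: one second of quiet noise with two loud impulsive "clicks"
random.seed(0)
rate = 1000
sig = [random.gauss(0, 0.01) for _ in range(rate)]
for start in (200, 700):                   # bursts at 0.2 s and 0.7 s
    for k in range(10):
        sig[start + k] += math.sin(k + 1)  # loud ~10-sample burst
times = detect_clicks(sig, rate)           # → [0.2, 0.7]
```

The payoff is exactly what Taylor describes: instead of listening to hundreds of hours of recordings, a researcher jumps straight to the flagged time points.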
AMY: Earth Species Project is one of these emerging partnerships. Another is the Cetacean Translation Initiative, known as Project CETI, with a “C.” This team of biologists, linguists, machine learning and robotics researchers, has made some incredible breakthroughs with sperm whale communication. They developed an AI model that mimics how human babies learn to talk— it just listens to sounds and has to discover language patterns on its own. No a priori information about what’s meaningful or important. And when they fed the AI recordings of sperm whale codas, it did identify the rhythmic patterns that leap out to our human ears:
SPERM WHALES
AMY: But it also picked up something really unexpected: varying patterns of pitch. Tonal changes embedded in the clicks. They’re hard for us to hear, but after the AI model detected them, the CETI researchers dug deeper, and sure enough, they were there. And when they started analyzing them, they realized that they’re very similar to our vowels.
MUSIC
AMY: The implications of this are pretty mind-boggling, on multiple levels. First: sperm whales might have vowels? That means these codas may be conveying information not only through rhythm, like Morse code, but also through things that are a lot more like our words. Maybe for the whales, the variations in the codas are kind of like the differences between you, y’all, and yinz.
AMY: So the content of this discovery is fascinating, but so is the process behind it. Project CETI didn’t tell their AI model to find something that we’ve decided is important in another animal's communication—they told it to discover what was important about it. And it did, revealing a layer of potential meaning in sperm whale communication that was previously unknown. What other intriguing information is waiting for us in the recordings of sperm whales, and other whales, and all vocal animals? What else are we missing that artificial intelligence might be able to help us find?
SARA: Yeah. I think it's one of those cases where you learn a little and your mind is blown open by like, what other possibilities might exist.
AMY: Sara Keen is a senior research scientist at Earth Species Project. I asked her to project 25 or 50 years into the future, and guess what might be possible.
SARA: On a really tangible level, I think it's realistic that in a few decades we'll be able to hold up a phone and know what elephants are communicating about. If they're telling their family that it's time to move to the next watering hole, for example, if they're afraid of poachers nearby. I think we're going to be able to listen to ecosystems and catalog every species that's vocalizing. I think it also makes people want to listen more, when you learn that even one species has a highly evolved and complex way of communicating, you start to think, well, what others do? And so that could just encourage humans to be better listeners overall. I am hopeful. I really... I think that it's going to expand our understanding and our empathy.
AMY: But she says one important thing to keep in mind here is that AI is not going to solve anything on its own.
SARA: We really value working with professionals who have been in the savanna watching elephants for decades of their life, and that's still where the source of knowledge will live. And that won't change once we have AI models analyzing the data they collect.
MUSIC
AMY: So the goal of Earth Species isn't to replace field research, or researchers. It's to help them make use of their knowledge and data.
SARA: They would have to hire and train a huge team if they were ever to do it manually. So hopefully we're helping people get to these answers faster.
AMY: The specific work Earth Species Project does varies quite a bit from researcher to researcher, Sara says. It's all based around the particular problems they're trying to solve.
SARA: We've been approached by several researchers and they say we have terabytes of data. I would love to answer these questions. Can you help me start to sort through it?
AMY: One of those people is Joyce Poole, the elephant researcher we met earlier this season.
SARA: She has an intuition that elephants use different signals, to convey different messages to others nearby, so almost like some kind of dictionary that she has in her mind of the different elephant vocabulary. And we are trying to train a model to see if that's true.
ELEPHANT SOUNDS
AMY: When we hear about AI, we're really hearing about all sorts of different things: large language models, neural networks, machine learning. But a central concept in all of them is pattern recognition. That’s what Sara means by “training a model”: finding a meaningful pattern in one little nugget of elephant communication, and then extrapolating out from there.
SARA: I think it's kind of like a decoding problem. In the elephants, we think that they use a call that says to the others, let's go, let's get out of here. And the whole group starts to move. If we have that one translation, can we build out others, using the same relationship?
AMY: But the really tricky thing is getting that first nugget. And knowing if you're getting it right.
SARA: It's a hard problem because there's no ground truth. We don't know what a word is for an elephant.
AMY: Right, yeah.
SARA: So this is one of the big problems, and one of the reasons we don't have these benchmarks that have helped machine learning take off in so many other areas.
AMY: So that's part of what Earth Species is trying to do, is create these benchmarks for a bunch of different species?
SARA: Mmm-hhm, yeah, it's supporting researchers who are doing that.
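One way to picture Sara's "if we have that one translation, can we build out others" idea is a toy nearest-neighbor sketch. Everything here is hypothetical: the feature vectors, the label, and the distance threshold are invented for illustration, and real systems work with learned embeddings rather than hand-picked numbers.

```python
def nearest_label(query, labeled_calls, max_dist=1.0):
    """Return the label of the closest labeled call, or None if nothing
    is within max_dist (Euclidean distance in feature space)."""
    best_label, best_dist = None, float("inf")
    for features, label in labeled_calls:
        dist = sum((a - b) ** 2 for a, b in zip(query, features)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_dist else None

# One known "translation" (toy features: duration, pitch ratio)
labeled_calls = [((1.2, 0.8), "lets_go")]

print(nearest_label((1.1, 0.9), labeled_calls))  # close match → lets_go
print(nearest_label((5.0, 3.0), labeled_calls))  # too far from anything known → None
```

The `None` case is exactly Sara's "no ground truth" problem in miniature: with only one labeled example, most calls fall outside anything the model can confidently name.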
AMY: And that support could make a pivotal difference in the lives of many animals, Sara says. Earth Species is also working with Valeria Vergara, one of the scientists we met in our last episode. She came to them with her recordings of beluga whales from the St. Lawrence River.
SARA: This is a very endangered population of belugas. And they have managed to get recordings from the majority of the whales that live in this region. And they want to see if we can identify the different whales' voices from audio alone.
BELUGAS
SARA: If we could do that, we might someday be able to do a survey of a region and count the number of individuals that are there, and build a new conservation plan based on that estimate. We could also do acoustic analyses to look for differences in the voices of the whales and see if they are in different social groups.
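Counting individuals from audio alone, as Sara describes, is essentially a clustering problem: group similar "voices" and count the groups. Here's a minimal, hypothetical sketch, assuming each call has already been reduced to a small feature vector; the vectors and the radius below are invented, and real work would use learned voice embeddings and more robust clustering.

```python
def count_voices(calls, radius=0.5):
    """Greedy clustering: each call joins the first cluster whose running
    centroid is within `radius`, otherwise it starts a new cluster.
    Returns the number of clusters (estimated individuals)."""
    centroids, sizes = [], []
    for v in calls:
        for i, c in enumerate(centroids):
            dist = sum((a - b) ** 2 for a, b in zip(v, c)) ** 0.5
            if dist <= radius:
                n = sizes[i]  # update the running centroid with this call
                centroids[i] = tuple((cj * n + vj) / (n + 1)
                                     for cj, vj in zip(c, v))
                sizes[i] += 1
                break
        else:
            centroids.append(tuple(v))
            sizes.append(1)
    return len(centroids)

# Toy data: three tight groups of call features → three "individuals"
calls = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0), (2.1, 1.9), (5.0, 0.0)]
print(count_voices(calls))  # → 3
```

That cluster count is the raw material for the kind of population survey Sara mentions: how many distinct voices, and therefore roughly how many whales, are in a region.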
AMY: These aren't just fun facts—Sara’s hopeful that this kind of information could really make a difference for this population. It could help scientists get a better understanding of how much space these animals need, how the population numbers are rising or falling, and how our noise is affecting them. But if we want to make a difference, we have to act fast.
SARA: I feel an urgency, especially with these belugas in the St. Lawrence where the population is dwindling, or forest elephants in western Africa that are continuing to suffer from poaching. And their population size is really small and unsustainable.
AMY: We're in an extinction emergency right now. The people who study wild animals, and have devoted themselves to protecting them, are feeling enormous pressure. And many of them are seeking whatever tool they can find to wake us up to what we're losing, and inspire more people to do something about it. Maybe AI can be one of those tools. But even as I hold that hope, I'm also aware that in general, the development of AI has wildly outpaced society's ability to contain its potential harms. So I asked Sara for her take on the possible dangers of AI in this context, with animal communication.
SARA: There are risks that I see also of using AI models. If we put too much trust in models, we could misinterpret the data. We could think that animals are saying something they're actually not.
AMY: And that could lead to high-consequence mistakes in our responses—conservation plans based on the wrong information. People could also use AI to coerce or control animal movements, Sara said. Another issue is the massive amount of electricity needed to power the data centers that are running all of these complex processes, and the water needed to keep them cool. Some data centers consume millions of gallons every day.
AMY: And as much as I hate to do it, I can think of other damaging uses of AI too. If we can communicate with animals, we can lie to them—trick them into doing what we want instead of what they want. Steal their agency and autonomy in all sorts of ugly ways. And we can love them to death—overwhelm them with our desire for contact. Like…what if we could hold up a phone and eavesdrop on elephant communication? And what if that leads to millions of people descending on their habitat, phones out, hungry to acquire their few moments of elephant conversation to post on Instagram? As much as communicating with animals sounds thrilling to us, will it actually be good for them?
We'll have more after this short break.
Break
[27:40] B SEGMENT
SONG: “Talk to the Animals” from 1967 film, Dr. Doolittle
If I spoke slang to orangutans
The advantages any fool on Earth could plainly see
AMY: Welcome back to Threshold, I'm Amy Martin, and this is Rex Harrison as Dr. Doolittle, contemplating the possibilities of achieving one of humanity's longest held dreams: talking with the animals.
If I were asked to sing in hippopotamus
I’d say whynotamus, and would!
AMY: In the story, Dr. Doolittle does learn how to speak hundreds of animal languages with the help of his parrot, Polynesia.
A man who walks with the animals
And talks with the animals
Grunts and squeaks and squawks with the animals…
This is the most exciting thing that’s ever happened to me, Polynesia
I can’t wait to start!
AMY: So what if we could sing in hippopotamus, should we? Why, or whynotamus? I've been thinking about this a lot as I put this season of our show together, and all I've been able to land on so far is that this is really complicated. I think there are really powerful positive and negative possible outcomes here. And one of the animals that embodies them both is the humpback whale.
HUMPBACK
AMY: We’ve taken big strides toward understanding what humpbacks are saying lately, in part because of AI. But if we use it carelessly, AI could also do incredible damage to humpback songs, and to the animals themselves. And that is a future that Ellen Garland is determined to prevent, because listening to whales is at the center of her life.
ELLEN: Yeah, I mean it's a passion. It's an absolute passion. And I never feel so calm or so happy as being out on the boat with the animals. Actually being there and watching them interact and those about, but also listening to them at the same time, it's so satisfying.
HUMPBACK
AMY: Ellen is a cetacean researcher based at the University of St. Andrews in Scotland. Just as a reminder: cetaceans are whales, dolphins, and porpoises.
ELLEN: Mostly I focus on humpback whales and the vocal display and the cultural transmission of their song.
AMY: She’s originally from New Zealand, but, like the humpbacks themselves, she migrates all over the world, spending parts of the year in front of the computer analyzing whale song, other times out in the ocean recording them.
ELLEN: There are so many unanswered questions which I find fascinating and exciting that we still don't know the answers to these.
AMY: Humpbacks live in all the world's oceans, generally moving between feeding waters at the poles, and breeding areas closer to the equator. Their songs can be anywhere from five minutes to half an hour long.
HUMPBACKS
ELLEN: Most people have heard all the beautiful melodic moans, but they also make some horrifying screeching sounds and like really strange things you wouldn't think a humpback would make. But that can be part of their song.
AMY: The songs are divided into different parts, which scientists refer to as units, phrases, and themes.
ELLEN: So think of it as movements in a symphony to make these big songs.
AMY: The whales sing these different elements in a particular order, over and over. And although both male and female humpbacks communicate acoustically, the songs appear to be sung exclusively by the males.
ELLEN: So there will always be a little bit of variability on the theme within each individual and between. It's not perfect carbon copies, but in general they'll sing the same themes, usually in the same order.
HUMPBACKS
ELLEN: But what we know is within a population, all males will sing roughly the same song. So they're matching each other's song within a big population. And then the song changes through time. And all the males make these similar changes to maintain that conformity.
AMY: This is an extremely anthropomorphic way of putting it, but it's almost like a slight change to a song becomes cool, and then all the other guys want to sing it. So I asked Ellen the obvious question: why?
ELLEN: Exactly! Why on earth does your neighbor's song matter to you, and why would you take that up? Well, it appears that humpbacks have this huge drive for novelty. They like novel songs. And the underlying driver of that, you know, we hypothesize, would be female choice for novel songs. Maybe that conveys information to the female, but we don't know yet.
AMY: And Ellen says there are even more fundamental questions still to be answered. It's been more than 50 years since Songs of the Humpback Whale was released, and as we've been discussing, the field of bioacoustics has grown and changed dramatically. But we still can't answer the one question everyone wants to know when they hear these sounds: what are these animals saying?
ELLEN: Why are males singing? What is the function of song? I think that's the huge question right now.
AMY: Since creating and singing these songs is a sex-specific behavior, Ellen and many other researchers believe it must have something to do with courtship—that the males are displaying something about their fitness to potential mates. But…
ELLEN: How does this work? If you're wanting to display to a female, you want to stand out. So you want to be different. So we have a display that is both the same and different. And that's kind of perplexing.
AMY: And to add to the mystery, acoustic communication doesn't have to serve just one purpose. The whales could be aiming their songs at a variety of listeners—potential mates, rivals, or friends, or all of the above, simultaneously. I mean, think about the Rolling Stones. You could describe them as a group of guys gyrating for the ladies, competing with each other for attention, or bonding as a group. But probably, it's all of those things, at once. All these unsolved mysteries are far from daunting to Ellen though. On the contrary, they’re a big part of what drives her.
ELLEN: Different aspects of technology have come forward to be able to start asking these really intense, deep, interesting questions about why they're communicating, what's being communicated, and why they're doing the things they do. So I find this a really exciting time, where we're kind of at the cusp, things are starting to really open up, and we're all contributing to this wider understanding of complex communication in animals.
HUMPBACKS
AMY: She's made a lot of contributions herself. In 2011, she published research showing that whale songs move from population to population, west to east, kind of like the way hit songs move between groups of people around the world. A follow-up study showed that the songs move all the way across the Pacific, from the east side of Australia to the western coast of South America.
ELLEN: So we've got the song and it's changing and evolving through time, say, in the east Australian population. And then what we found during my PhD is that the song was then passed to the next population, which is New Caledonia. And then over to the next, which is Tonga, and then into the Cook Islands and into French Polynesia in this big network of song changes. So the whole song was just taken up and whales would abandon their current display for this brand new one. And it's insane.
HUMPBACKS
ELLEN: The males are singing, this complex, ever-changing, intricate display. And it's this huge cultural network that is only rivaled by what we do as humans. The closest we can come in analogies is human culture. Rapid pop song changes, rapid fashion changes. No other animals are doing these rapid cultural changes at this vast scale.
HUMPBACKS
ELLEN: The song is also continually evolving so it will be transmitted, to the next population, change a little bit and then the next one. And so you've got this whole trajectory of song change.
AMY: So many questions follow from this. Does the song keep getting passed eastward? Does it eventually circle the whole world? Why is it getting passed this way from group to group? And the recurring mystery at the heart of it all: what are they saying?
AMY: Some of Ellen's latest research gets us closer to answering that. With the help of an international, multi-disciplinary team, she fed eight years of field recordings of humpback songs into a computer and told it to listen for component parts, kind of like how human babies listen and learn what words are. In our languages, one of the signature patterns that emerges from this process is that the most frequently used word appears twice as often as the next most frequent word. So in English, “the” is the most common word, and “of” is second, which means “the” is used twice as much as “of.” Weirdly, this pattern repeats across all known human languages. And when the AI listened to all that humpback whale song and returned the data to Ellen’s research group, their jaws dropped. The exact same pattern showed up there, too.
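[For curious readers: the rank-frequency pattern Amy describes is known in linguistics as Zipf's law. Here's a minimal Python sketch of how one might check for it in a word list. The tiny corpus and the `rank_frequencies` helper are invented for illustration; this is not the research team's actual analysis pipeline.]

```python
from collections import Counter

def rank_frequencies(tokens):
    """Return word counts sorted from most to least frequent (rank order)."""
    return [count for _, count in Counter(tokens).most_common()]

# Toy word list, invented for illustration. Under Zipf's law, the word of
# rank r appears roughly 1/r as often as the top-ranked word, so the most
# common word shows up about twice as often as the second most common.
tokens = ["the"] * 8 + ["of"] * 4 + ["and"] * 3 + ["a"] * 2 + ["whale"] * 2 + ["song"]

freqs = rank_frequencies(tokens)
print(freqs)                 # [8, 4, 3, 2, 2, 1] -- counts in rank order
print(freqs[0] / freqs[1])   # 2.0 -- rank 1 is twice as frequent as rank 2
```

On real corpora, the same check applied to millions of words yields approximately this 2-to-1 ratio between the top two ranks, which is the pattern that reappeared in the humpback song data.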
AMY: Language and culture are two things that western science has historically tried to claim as uniquely human inventions. But through this combination of field work, large data sets, and AI, evidence is growing that both humpback and sperm whales have them too. And these are just two species of whales, out of more than 90. And whales are just one of thousands of kinds of animals using acoustic communication. What will we learn in the future about the conversations among dolphins? Elephants? Chimpanzees? Ravens? It seems increasingly likely that we don't just share the planet with other species, but with other civilizations. Complex, more-than-human cultures that we barely understand.
ELLEN: There is huge value in just listening to these animals. They have this culture, this vast cultural transmission network at an ocean basin scale. We have no idea what discoveries are going to come in the future.
AMY: Which brings me back to the question of potential harms of our interventions. Whales were singing and coda-ing in the ocean for tens of millions of years before our species even existed. Their communication is precious—we don’t have to be able to decode it to know that. But when we look at the history of colonization within our own species we can see how quickly languages and cultural knowledge can be obliterated.
AZA: Humpback whale song goes viral.
AMY: Again, Aza Raskin.
AZA: Right? Like for whatever reason, if we just create a synthetic whale that sings, we may well create like a viral meme, like a new song that takes over, disrupting a 34-million-year-old wisdom tradition. We should not do that.
AMY: Of course he's right. And we shouldn't do a lot of other things, too, like overheat their waters with our fossil fuel emissions, or fill the ocean with plastic and noise. But we are. Maybe AI-assisted communication with these animals will spark another moment of reckoning, like Songs of the Humpback Whale did in the 1970s. But it could also become another item on a long list of ways that we are failing these animals.
AZA: That ability for AI to totally mess up an animal's culture, I think we have to take very seriously. For whatever reason, we don't take as seriously the ability for AI to mess up our culture. But put that aside for a moment. And I think what this means is that we're going to need, I mean, I'm just sort of speaking off the top of my head, but like a Geneva Convention for cross-species communication. We're going to need, you know, Star Trek, the prime directive. We're going to need a way of talking about how do we show up to these newfound powers with new responsibilities. And honestly, I'm really excited for those conversations, because to have those conversations is to begin the shift of our relationship to the rest of nature.
AMY: Those conversations are happening. People in this field are working on building guardrails. One of the leading hubs is the More-Than-Human Life program at New York University, where legal scholars and scientists are collaborating to think deeply about the ethics of using AI in animal communication. But powerful new tools tend to attract attention from all sorts of people, including those who couldn't care less about ethics or law. Talking through the threats of AI with Aza was interesting, and not at all reassuring. He sees a lot of things to be worried about, and it's all in the motivations driving development, he says. Like, take a corporation that uses AI simply to make as much money as quickly as possible.
AZA: Their incentive is to increase their capability. To increase their power. So they can gather more resources and make more money in a recursive loop. They're just going to do it again and again and again. And it's just going to take all of the bad incentives that our current system runs on and superpower them. And that to me is like the scariest thing about AI, is that we've built this machine called civilization, and when you pedal the pedals of this machine, like, yeah, we get skyscrapers and great health care and a lot of amazing things, but we also get like water that we can't drink, and we get polluted and decimated oceans, and we get a biodiversity crisis. Those things aren't actually side effects. Those are primary effects of the system. And if we make the pedals go faster with AI, what do we expect to happen? They're just going to blow past the whole of the planetary boundaries. So I want to reaffirm your fear.
AMY: Thanks.
AZA: I'm great at cocktail parties.
AMY: But that's not a reason not to use AI, he says. It's a reason to infuse it with different incentives. That's what Earth Species Project is all about. He believes if we deploy AI with different incentives, we'll see—and hear—different results.
AZA: There’s this core pattern of disconnection. Like, that's the core thing that, if we could shift about how human beings operated in the world, I think would solve a lot of our problems. And again, I just want to say it's not like I think our one project can possibly do all of that. But I do think that shift in perspective can unlock a whole bunch.
AMY: At its core, the whole study of animal communication, with or without AI, is rooted in this willingness to shift perspectives. Just think about everything we've learned this season. Treehoppers can pass secret messages through plant stems. Flowers can be ears, carefully attuned to the sound of buzzing bee wings. Elephants are consulting each other from a mile away, through rumbles too low for us to hear. An essential part of all of these discoveries was humility—a surrender of our human expectations of how communication happens, and why. Aza thinks AI could be used to supercharge that ability to shift perspectives this way, and learn things that don’t fit neatly into our conceptual frameworks. Including getting a new view of ourselves.
AZA: There's a kind of rite of passage that I think humanity has to go through. When we wake up to all the places that we have not wanted to send our attention, where we can really listen to the harm that we're causing, and what is a rite of passage? A rite of passage is generally a time that as an adolescent you face death and in so doing you come to a new maturity. And I think Earth Species, my hope is like some part of that rite of passage of letting go, of some of our ego, of being able to hear some of the harm so that we may make a different choice.
AZA: AI is sort of like the telescope of our era. When the telescope itself was invented, we looked at the patterns of the universe and what did we discover? We discovered that Earth was not the center. And now I think we're going to point our new tools, AI, out at the patterns of the universe. And what are we going to discover? That humanity is not the center.
AMY: I do think AI could help us to decenter ourselves. But so can swimming in the ocean, or going for a hike, or just sitting in a park, watching and listening to birds. And those things have the added benefit of posing no risks to the lives around us. I’m not saying this to try to throw cold water on the project of decoding animal communication. I'm just trying to flag that we don't have to rely on technology to connect with the rest of nature. If we want to be awed into reverence and respect for our planet-mates, all we really need to do is give them more of our attention. Maybe, hopefully, we'll use these new tools to do that.
AZA: There's already a rich soil for people wanting the world to be a different way. And if we just have a couple sparks at the right time, maybe those moments really can become a movement which lets us shift the fundamental incentives and the race that we're all trapped within.
Credits
This episode of Threshold was written, reported, and produced by me, Amy Martin, with help from Erika Janik and Sam Moore. Music by Todd Sickafoose. Post-production by Alan Douches. Fact checking by Sam Moore. Special thanks to the following people and groups for the use of their whale recordings: Ellen Garland, Taylor Hersh, the Dominica Sperm Whale Project, the Hal Whitehead Lab at Dalhousie University, Valeria Vergara, the Glacier Bay National Park & Preserve, the Ships, Whales & Acoustics in Gitga’at Territory Project, the Scripps Whale Acoustics Lab, and the Lofoten-Vesterålen Ocean Observatory. Thanks also to Marc Anderson for the elephant sounds which he shared on the website Xeno Canto. Threshold is made by Auricle Productions, a non-profit organization powered by listener donations. You can find out more about our show and support our work at thresholdpodcast.org.
Threshold Newsletter
Sign up to learn about what we're working on and stay connected to us between seasons.