Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate future interactions between humans and these intelligent marine mammals.
Google has embarked on an ambitious project to decode the complex communication of dolphins using AI, with the ultimate goal of enabling humans to converse with these intelligent creatures.
Dolphins have long been celebrated for their remarkable intelligence, emotional depth, and social interactions with humans. In collaboration with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP), a Florida-based non-profit dedicated to studying dolphin sounds for over 40 years, Google is developing a new AI model named DolphinGemma.
The WDP has spent decades correlating specific dolphin sounds with various behavioral contexts. For example, signature whistles are often used by mothers to locate their calves, while burst pulse “squawks” are typically associated with aggressive encounters among dolphins. Additionally, “click” sounds are frequently observed during courtship or when dolphins are pursuing sharks.
Using the extensive data collected by the WDP, Google has created DolphinGemma, which builds on its existing lightweight AI model, Gemma. The new model is designed to analyze a vast library of dolphin vocalizations, identifying patterns, structures, and potential meanings behind these communications.
DolphinGemma aims to categorize dolphin sounds much as words, sentences, or expressions are categorized in human language. By recognizing recurring sound patterns and reliable sequences, the model can help researchers uncover hidden structure and meaning in dolphin communication, a task that previously required painstaking human effort.
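To make the idea of "recurring sound patterns and reliable sequences" concrete, here is a minimal sketch in Python. It counts adjacent pairs of labeled vocalizations in a sequence, a crude stand-in for the far richer sequence modeling DolphinGemma performs; the sound labels are purely illustrative, not real WDP annotations.

```python
from collections import Counter

# Hypothetical sequence of labeled dolphin vocalizations
# (labels are illustrative, not real WDP annotations).
sounds = ["whistle_A", "click", "click", "squawk",
          "whistle_A", "click", "click", "whistle_B"]

# Count recurring adjacent pairs (bigrams). Pairs that occur
# repeatedly hint at a reliable sequence worth investigating.
bigrams = Counter(zip(sounds, sounds[1:]))

for pair, count in bigrams.most_common(3):
    print(pair, count)
```

In this toy sequence, the pairs ("whistle_A", "click") and ("click", "click") each occur twice, so they surface at the top of the count. A language model generalizes this idea from fixed-length pairs to long, flexible contexts.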
According to a blog post from Google, “Eventually, these patterns, augmented with synthetic sounds created by the researchers to refer to objects with which the dolphins like to play, may establish a shared vocabulary with the dolphins for interactive communication.”
DolphinGemma relies on the audio-recording capabilities of Google's Pixel phones, which capture high-quality recordings of dolphin vocalizations by isolating clicks and whistles from background noise such as waves, boat engines, and underwater static. Clean audio is essential for AI models like DolphinGemma, because noisy data hinders the model's learning.
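The noise-isolation step can be illustrated with a toy example. The sketch below (an assumption for illustration, not Google's actual pipeline) mixes a low-frequency "engine" hum with a high-frequency dolphin-band tone, then applies a crude FFT high-pass filter: dolphin clicks and whistles sit well above the frequencies of boat engines and wave noise, so zeroing out the low bins removes most of the interference.

```python
import numpy as np

fs = 48_000                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.1, 1 / fs)    # 0.1 s of audio

# Synthetic mix: low-frequency "engine" hum + high-frequency tone
# standing in for a dolphin click/whistle.
engine = np.sin(2 * np.pi * 100 * t)           # 100 Hz background noise
click = 0.5 * np.sin(2 * np.pi * 12_000 * t)   # 12 kHz dolphin-band tone
mix = engine + click

# Crude FFT high-pass: zero every frequency bin below 1 kHz.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
spectrum[freqs < 1_000] = 0
cleaned = np.fft.irfft(spectrum, n=len(mix))
```

After filtering, `cleaned` is almost identical to the high-frequency tone alone: the hum is gone. Real systems use more careful filtering and noise suppression, but the principle of separating signal from noise by frequency is the same.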
Google plans to release DolphinGemma as an open model this summer, making it accessible for researchers worldwide to utilize and adapt for their own studies. Although the model is currently trained on Atlantic spotted dolphins, it has the potential to be fine-tuned for studying other species, such as bottlenose or spinner dolphins.
By providing tools like DolphinGemma, Google aims to empower researchers globally to explore their own acoustic datasets, accelerate the search for communication patterns, and collectively enhance our understanding of these intelligent marine mammals, according to the company’s blog.