Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate human interaction with these intelligent marine mammals in the future.
The project harnesses artificial intelligence (AI) to explore how dolphins communicate with one another, with the ultimate goal of enabling humans to converse with these intelligent creatures.
Dolphins are celebrated for their remarkable intelligence, emotional depth, and social interactions with humans. For thousands of years, they have fascinated people, and now Google is collaborating with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP), a Florida-based non-profit organization that has dedicated over 40 years to studying and recording dolphin sounds.
The initiative has led to the development of a new AI model named DolphinGemma. This model aims to decode the complex sounds dolphins use to communicate with one another. WDP has long correlated specific sound types with behavioral contexts. For example, signature whistles are commonly used by mothers and their calves to reunite, while burst pulse “squawks” tend to occur during confrontations among dolphins. Additionally, “click” sounds are frequently observed during courtship or when dolphins are chasing sharks.
Using the extensive data collected by WDP, Google has built DolphinGemma on Gemma, its family of lightweight open AI models. DolphinGemma is designed to analyze a vast library of dolphin recordings, identifying patterns, structures, and potential meanings behind the vocalizations.
Over time, the aim is for DolphinGemma to categorize dolphin sounds into units analogous to the words, sentences, and expressions of human language. By recognizing recurring sound patterns and sequences, the model can help researchers uncover hidden structure and meaning in the dolphins’ natural communication, a task that previously required significant human effort.
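Pattern discovery of this kind can be pictured as searching for subsequences of sounds that recur across recordings. The sketch below is not Google’s method; it is a minimal illustration, assuming vocalizations have already been labeled, of how repeated n-grams in a sequence of sound tokens might be surfaced for a researcher to inspect.

```python
from collections import Counter

# Hypothetical labels for consecutive vocalizations in one recording session;
# in practice these would come from audio analysis, not be written by hand.
sound_sequence = [
    "whistle_A", "click", "click", "squawk",
    "whistle_A", "click", "click", "buzz",
    "whistle_B", "squawk", "whistle_A", "click", "click", "squawk",
]

def recurring_ngrams(tokens, n=3, min_count=2):
    """Count every n-token subsequence and keep those that repeat."""
    counts = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return {gram: c for gram, c in counts.items() if c >= min_count}

print(recurring_ngrams(sound_sequence))
# {('whistle_A', 'click', 'click'): 3, ('click', 'click', 'squawk'): 2,
#  ('squawk', 'whistle_A', 'click'): 2}
```

A sequence model like DolphinGemma goes far beyond counting, but the output is similar in spirit: candidate recurring structures that researchers can check against observed behavior.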
According to a blog post from Google, “Eventually, these patterns, augmented with synthetic sounds created by the researchers to refer to objects with which the dolphins like to play, may establish a shared vocabulary with the dolphins for interactive communication.”
The project relies on audio recording technology from Google’s Pixel phones to capture high-quality recordings of dolphin vocalizations. This technology helps filter out background noise, such as waves, boat engines, or underwater static, ensuring that the AI model receives clean audio data. Researchers emphasize that clear recordings are essential, as noisy data can hinder the model’s ability to learn.
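Google has not published the details of its audio pipeline, but a rough sense of what “filtering out background noise” can mean is given by a simple band-pass filter that suppresses low-frequency wave and engine rumble before audio reaches a model. The cutoff frequencies below are illustrative assumptions, not values from the project.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(audio, sample_rate, low_hz=3_000, high_hz=20_000, order=6):
    """Attenuate energy outside a band where many dolphin whistles sit.
    The cutoffs are illustrative; real pipelines are tuned per hydrophone."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)

# One second of synthetic "hydrophone" audio at 48 kHz:
# a 9 kHz whistle-like tone plus low-frequency engine rumble.
sample_rate = 48_000
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
noisy = np.sin(2 * np.pi * 9_000 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)
clean = bandpass(noisy, sample_rate)  # rumble largely removed, tone preserved
```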
Google plans to release DolphinGemma as an open model this summer, enabling researchers worldwide to utilize and adapt it for their own studies. While the model has been trained primarily on Atlantic spotted dolphins, it has the potential to be fine-tuned for studying other species, such as bottlenose or spinner dolphins.
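Because neither DolphinGemma’s released checkpoint nor its training interface has been published yet, any fine-tuning code is necessarily speculative. As a rough sketch only, species-specific adaptation of a Gemma-style sequence model might look something like the following, where the model id, dataset file, and field names are all placeholders.

```python
# Placeholder sketch: the real model id, the tokenization of dolphin audio,
# and the dataset format are not public at the time of writing.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "google/dolphingemma"  # hypothetical model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Hypothetical JSONL file of tokenized bottlenose-dolphin sound sequences.
dataset = load_dataset("json", data_files="bottlenose_sequences.jsonl")["train"]
tokenized = dataset.map(
    lambda ex: tokenizer(ex["sequence"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dolphingemma-bottlenose", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```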
In the words of Google, “By providing tools like DolphinGemma, we hope to give researchers worldwide the tools to mine their own acoustic datasets, accelerate the search for patterns, and collectively deepen our understanding of these intelligent marine mammals.”
This project represents a significant step toward bridging the communication gap between humans and dolphins. According to Google, DolphinGemma could transform our understanding of dolphin communication and open new avenues for research and interaction with these remarkable marine mammals.

