Humans May Be Shockingly Close to Decoding the Language of Animals

By pagewriter

February 5, 2023

Earth Species Project (ESP) is an organization dedicated to decoding the communication of non-human animals and ultimately communicating with them. CEO Katie Zacarian believes that artificial intelligence* can help make this goal a reality.

The first step toward this goal is to recognize patterns in animal communication and then use machine learning systems to analyze that data. To determine the potential meaning behind the patterns, scientists need to match the communication with corresponding behavior. ESP is applying this approach to birds, dolphins, primates, elephants, and honeybees.

The organization believes that marine mammals may be the key to the first breakthrough, since much of their communication is acoustic. If successful, this could lead to two-way conversations with animals…just like Doctor Dolittle! It could also open up new ways of thinking about our relationship with other species on Earth. For example, should we ask whales to dive out of the way of boats when doing so changes their feeding habits? Or should boats change course?

These questions may have answers sooner than we think thanks to Earth Species Project’s pioneering research. By understanding what animals say, and being able to communicate back, we are entering a new era of relationship building between humans and animals which has huge implications for us all. It’s an exciting time for ESP as they continue their mission and bring us closer than ever before to understanding the languages of other species that inhabit our planet.

* During the early years of the Cold War, an array of underwater microphones monitoring for sounds of Russian submarines captured something otherworldly in the depths of the North Atlantic.

The haunting sounds came not from enemy craft, nor aliens, but humpback whales, a species that, at the time, humans had hunted almost to the brink of extinction. Years later, when environmentalist Roger Payne obtained the recordings from U.S. Navy storage and listened to them, he was deeply moved. The whale songs seemed to reveal majestic creatures that could communicate with one another in complex ways. If only the world could hear these sounds, Payne reasoned, the humpback whale might just be saved from extinction.

When Payne released the recordings in 1970 as the album Songs of the Humpback Whale, he was proved right. The album went multi-platinum. It was played at the U.N. General Assembly, and it inspired Congress to pass the 1973 Endangered Species Act. By 1986, commercial whaling was banned under international law. Global humpback whale populations have risen from a low of around 5,000 individuals in the 1960s to 135,000 today.

For Aza Raskin, the story is a sign of just how much can change when humanity experiences a moment of connection with the natural world. “It’s this powerful moment that can wake us up and power a movement,” Raskin tells TIME.

Raskin’s focus on animals comes from a very human place. A former Silicon Valley wunderkind himself, in 2006 he invented infinite scroll, the feature that became a mainstay of so many social media apps. He founded a streaming startup called Songza that was eventually acquired by Google. But Raskin gradually soured on the industry after realizing that technology, which had such capacity to influence human behavior for the better, was mostly being leveraged to keep people addicted to their devices and spending money on unnecessary products. In 2018, he co-founded the Center for Humane Technology with his friend and former Google engineer Tristan Harris, as part of an effort to ensure tech companies were shaped to benefit humanity, rather than the other way around. He is perhaps best known for, alongside scholar Renée DiResta, coining the phrase “freedom of speech is not freedom of reach.” The phrase became a helpful way for responsible technologists, lawmakers and political commentators to distinguish between the constitutional freedom for users to say whatever they like, and the privilege of having it amplified by social media megaphones.

Raskin is talking about whale song because he is also the co-founder and President of the Earth Species Project, an artificial intelligence (AI) nonprofit that is attempting to decode the speech of animals—from humpback whales to great apes to crows. The jury is out on whether it would ever truly be possible to accurately “translate” animal communication into anything resembling human language. Meaning is socially constructed, and animal societies are very different to ours.

Despite the seemingly insurmountable challenges the group is facing, the project has made at least some progress, including an experimental algorithm that can purportedly detect which individual in a noisy group of animals is "speaking."

A second algorithm reportedly can generate mimicked animal calls to "talk" directly to them.

"It is having the AI speak the language," Raskin told The Guardian, "even though we don’t know what it means yet."

AI-powered analysis of animal communication includes data sets of both bioacoustics, the recording of individual organisms, and ecoacoustics, the recording of entire ecosystems, according to experts. In October 2022, ESP published the first publicly-available benchmark for measuring the performance of machine learning algorithms in bioacoustics research. The system—known as BEANS (the BEnchmark of ANimal Sounds)—uses 10 datasets of various animal communications and establishes a baseline for machine learning classification and detection performance.
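To make the idea of a benchmark concrete, here is a minimal sketch, in Python with synthetic data, of the kind of classification baseline a benchmark like BEANS measures. The random "features", the random-forest classifier, and the metrics are illustrative assumptions, not ESP's published benchmark code.

```python
# Minimal sketch of a bioacoustics classification baseline in the spirit of
# BEANS. Everything here (synthetic features, classifier choice, metrics) is
# illustrative -- it is not ESP's actual benchmark implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-clip acoustic features (e.g. averaged MFCCs) and species
# labels; a real benchmark would load labelled recordings instead.
n_clips, n_features, n_classes = 600, 40, 5
X = rng.normal(size=(n_clips, n_features))
y = rng.integers(0, n_classes, size=n_clips)
X += y[:, None] * 0.5  # give each class a weak acoustic "signature"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```

The value of a shared benchmark is that many such baselines can be scored on the same held-out data, so improvements can be compared across labs.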

The datasets being studied in various efforts to decode animal communication include recordings from a range of species like birds, amphibians, primates, elephants and insects like honeybees. Communication from domesticated cats and dogs is being studied, too. Yet experts note that communication among cetaceans—whales, dolphins and other marine mammals—is especially promising.

“Cetaceans are particularly interesting because of their long history—34 million years as a socially learning, cultural species,” Zacarian explained. “And because—as light does not propagate well underwater—more of their communication is forced through the acoustic channel.”

Researchers maintain that bioacoustics and AI-powered analysis of animal communication can significantly advance ecological research and conservation efforts.

For instance, in 2021, researchers used audio recordings to identify a previously unknown population of blue whales in the Indian Ocean. “Each blue whale population has a distinct vocal signature, which can be used to distinguish and monitor different ‘acoustic populations’ or ‘acoustic groups’,” the research team explained in a Nature article detailing the discovery.

Moreover, listening to ecosystems and decoding animal communication can help ecologists gauge the health of the natural environment, experts say. This includes, for instance, developing a better understanding of how disruptive human activity like noise pollution or logging affects animal populations. In Costa Rica, for example, audio recordings were recently used to evaluate the development and health of reforested areas of the rainforest.
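For a sense of how "listening to an ecosystem" can be quantified, the sketch below computes one common soundscape measure, spectral entropy, on a synthetic recording. The signal, sample rate, and choice of index are assumptions for illustration; the Costa Rica work cited above may have used different indices entirely.

```python
# Illustrative sketch of one common ecoacoustic index (spectral entropy),
# computed on a synthetic "field recording". Real soundscape studies use a
# suite of indices over many hours of audio.
import numpy as np
from scipy.signal import welch

fs = 22050  # sample rate in Hz
t = np.arange(0, 10, 1 / fs)

# Stand-in for a 10-second recording: a few tonal "calls" plus background noise.
audio = (0.3 * np.sin(2 * np.pi * 2000 * t)
         + 0.2 * np.sin(2 * np.pi * 3500 * t)
         + 0.1 * np.random.default_rng(0).normal(size=t.size))

# Power spectral density, normalised into a probability distribution.
freqs, psd = welch(audio, fs=fs, nperseg=2048)
p = psd / psd.sum()

# Spectral entropy in [0, 1]: higher values suggest energy spread across many
# bands (a "fuller" soundscape), lower values a few dominant tones.
spectral_entropy = -(p * np.log2(p + 1e-12)).sum() / np.log2(p.size)
print(f"spectral entropy: {spectral_entropy:.3f}")
```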

“By monitoring the sounds that are coming from nature, we can look for changes in social structure, transmission of cultural information or physiological stress,” Zacarian stated.

AI analysis of animal communication has also been used to help establish marine animal protection zones. Off the West Coast of the United States, for example, researchers have used AI to analyse marine communication recordings as well as shipping route data to create “mobile marine protected areas” and predict potential collisions between animals and ships.

“Understanding what animals say is the first step to giving other species on the planet ‘a voice’ in conversations on our environment,” said Kay Firth-Butterfield, the World Economic Forum’s head of AI and machine learning.

“For example, should whales be asked to dive out of the way of boats when this fundamentally changes their feeding or should boats change course?”

There are ethical concerns that researchers are confronting, too. This includes, most notably, the possibility of doing harm by establishing two-way communication channels between humans and animals—or animals and machines.


“We’re not quite sure what the effect will be on the animals and whether they even want to engage in some conversations,” said Karen Bakker, a University of British Columbia professor who studies digital bioacoustics. “Maybe if they could talk to us, they would tell us to go away.”

Researchers are taking steps to address and mitigate the concerns about harm and animal exploitation. ESP, for instance, is working with its partners to develop a set of principles to guide its research and ensure it always supports conservation and animal wellbeing.

“We are not yet sure what all the real-world applications of this technology will be,” Zacarian stated. “But we think that unlocking an understanding of the communications of another species will be very significant as we work to change the way human beings see our role, and as we figure out how to co-exist on the planet.”

See: https://pagetraveler.com/humans-may-be-shockingly-close-to-decoding-the-language-of-animals/

See: https://bigthink.com/life/artificial-intelligence-animal-languages/

Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.

Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. “People are starting to use it,” says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. “But we don’t really understand yet how much we can do.”

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative – Project CETI (which stands for the Cetacean Translation Initiative) – plans to use machine learning to translate the communication of sperm whales.

Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals – for example primates, whales and dolphins – the goal is to develop tools that could be applied to the entire animal kingdom. “We’re species agnostic,” says Raskin. “The tools we develop… can work across all of biology, from worms to whales.”

The “motivating intuition” for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages – without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, “king” has a relationship to “man” with the same distance and direction that “woman” has to “queen”. (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)
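A toy illustration of that geometry, using hand-made two-dimensional vectors rather than learned embeddings (which would have hundreds of dimensions and be learned from co-occurrence statistics), shows how the offset arithmetic works:

```python
# Toy illustration of the "distance and direction" idea. Axis 1 stands for
# "royalty", axis 2 for "gender"; the vectors are made up, not learned.
import numpy as np

vec = {
    "king":   np.array([0.9, 0.9]),
    "queen":  np.array([0.9, 0.1]),
    "man":    np.array([0.1, 0.9]),
    "woman":  np.array([0.1, 0.1]),
    "person": np.array([0.1, 0.5]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman should land nearest to queen.
target = vec["king"] - vec["man"] + vec["woman"]
best = max((w for w in vec if w not in ("king", "man", "woman")),
           key=lambda w: cosine(target, vec[w]))
print("king - man + woman ≈", best)  # -> queen
```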

It was later noticed that these “shapes” are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word’s point in English. “You can translate most words decently well,” says Raskin.
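The alignment step can be illustrated with orthogonal Procrustes, one standard way of rotating one embedding space onto another. The 2017 methods managed this without any known word pairs; the sketch below cheats by generating the "target language" as a rotated copy of the source and using the known correspondence, purely to keep the example short.

```python
# Sketch of embedding-space alignment via orthogonal Procrustes (SVD).
# The data is synthetic: Y is a rotated, slightly noisy copy of X, standing
# in for "same shape, different language".
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200

X = rng.normal(size=(n, d))                      # "source language" embeddings
true_rotation, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ true_rotation + 0.01 * rng.normal(size=(n, d))  # "target language"

# Orthogonal Procrustes solution: W = U V^T, where U S V^T = svd(X^T Y).
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# After alignment, each source vector should sit next to its "translation".
aligned = X @ W
nearest = np.argmax(aligned @ Y.T, axis=1)
print("retrieval accuracy:", float((nearest == np.arange(n)).mean()))
```

With clean data the retrieval accuracy is essentially 1.0; the hard part of the 2017 work was getting a usable alignment with no dictionary at all.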

ESP’s aspiration is to create these kinds of representations of animal communication – working on both individual species and many species at once – and then explore questions such as whether there is overlap with the universal human shape. We don’t know how animals experience the world, says Raskin, but there are emotions, for example grief and joy, that it seems some share with us and may well communicate about with others in their species. “I don’t know which will be the more incredible – the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can’t.”

[Image: Dolphins use clicks, whistles and other sounds to communicate. But what are they saying? Photograph: ALesik/Getty Images/iStockphoto]

He adds that animals don’t only communicate vocally. Bees, for example, let others know of a flower’s location via a “waggle dance”. There will be a need to translate across different modes of communication too.

The goal is “like going to the moon”, acknowledges Raskin, but the idea also isn’t to get there all at once. Rather, ESP’s roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so-called “cocktail party problem” in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.

“To our knowledge, no one has done this end-to-end detangling [of animal sound] before,” says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals that the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.
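ESP's neural model is not reproduced here, but the classical multi-channel analogue below, FastICA unmixing two synthetic "callers" recorded on two channels, shows what disentangling overlapping vocalisations means in practice. The signals, mixing matrix, and algorithm are illustrative assumptions; the published ESP work tackles the harder single-channel case with a learned model.

```python
# Classical baseline for "who is vocalising": blind source separation with
# FastICA on two mixed channels of two synthetic calls. Not ESP's method.
import numpy as np
from sklearn.decomposition import FastICA

fs = 8000
t = np.arange(0, 2, 1 / fs)

# Two synthetic calls: a frequency sweep and an on/off 800 Hz tone.
call_a = np.sin(2 * np.pi * (300 + 200 * t) * t)
call_b = np.sign(np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 800 * t)
sources = np.c_[call_a, call_b]

# Two "hydrophones" hear different mixtures of the two callers.
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
observed = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)

# Correlate recovered components with the true calls to check the separation.
corr = np.abs(np.corrcoef(recovered.T, sources.T))[:2, 2:]
print(np.round(corr, 2))  # one near-1 value per row/column = clean split
```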
Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls – made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to “speak” something whale-like – can then be played back to the animals to see how they respond. If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication, explains Raskin. “It is having the AI speak the language, even though we don’t know what it means yet.”
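The general recipe Raskin describes, discretise audio into small units, model sequences of those units, then sample new sequences, can be sketched with very simple stand-ins: k-means "units" and a bigram model in place of ESP's neural components and real humpback recordings.

```python
# Hedged sketch of the unit-and-language-model recipe described above.
# The features are random stand-ins and the sequence model is a bigram
# sampler; ESP's actual system is not shown here.
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for per-frame spectral features extracted from vocalisations.
frames = rng.normal(size=(2000, 16))

# Step 1: discretise frames into a small inventory of "micro-phoneme" units.
units = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(frames)

# Step 2: fit a bigram model over the unit sequence.
counts = defaultdict(lambda: np.zeros(12))
for prev, nxt in zip(units[:-1], units[1:]):
    counts[prev][nxt] += 1

# Step 3: sample a novel unit sequence (which would later be rendered back
# into audio and played to the animals).
seq = [int(units[0])]
for _ in range(50):
    probs = counts[seq[-1]] + 1.0  # add-one smoothing
    probs /= probs.sum()
    seq.append(int(rng.choice(12, p=probs)))
print(seq)
```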

[Image: Hawaiian crows are well known for their use of tools but are also believed to have a particularly complex set of vocalisations. Photograph: Minden Pictures/Alamy]

A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow – a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.
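One common way to estimate the size of a vocal repertoire from unlabelled data is to embed each call, cluster the embeddings, and see which cluster count fits best. The sketch below does this with random stand-in embeddings and a silhouette score; ESP's actual self-supervised pipeline and the Hawaiian crow recordings are not represented here.

```python
# Sketch of repertoire-size estimation by clustering call embeddings and
# scoring candidate cluster counts. Embeddings are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic embeddings of individual calls drawn from 4 latent call types.
true_k, n_per, dim = 4, 80, 32
X = np.vstack([rng.normal(loc=3 * i, scale=1.0, size=(n_per, dim))
               for i in range(true_k)])

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print("estimated number of call types:", best_k)  # recovers 4 on this data
```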

Rutz is particularly excited about the project’s conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species’s call repertoire is being eroded in captivity – specific alarm calls may have been lost, for example – which could have consequences for its reintroduction; that loss might be addressed with intervention. “It could produce a step change in our ability to help these birds come back from the brink,” says Rutz, adding that detecting and classifying the calls manually would be labour intensive and error prone.

Meanwhile, another project seeks to understand automatically the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater and runs one of the world’s largest tagging programmes. Small electronic “biologging” devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially – the lab has tagged several animals in the same group so it is possible to see how signals are given and received. Friedlaender says he was “hitting the ceiling” in terms of what currently available tools could tease out of the data. “Our hope is that the work ESP can do will provide new insights,” he says.
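As a rough illustration of that first step, classifying behavioural state from tag data, here is a supervised stand-in using synthetic depth, speed, and movement features. The real project uses self-supervised learning on genuine biologging data; the features, labels, and classifier here are assumptions made purely for the example.

```python
# Illustrative sketch, not the Friedlaender/ESP pipeline: classify coarse
# behavioural states from simple movement features a biologging tag might
# provide, so that calls can later be cross-referenced with behaviour.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
behaviours = ["feeding", "resting", "travelling", "socialising"]

# Stand-in tag features per time window: depth, speed, body-movement intensity.
n = 800
y = rng.integers(0, len(behaviours), size=n)
X = np.c_[rng.normal(loc=10 * y, scale=3.0),   # depth differs by state
          rng.normal(loc=0.5 * y, scale=0.2),  # speed differs by state
          rng.normal(loc=y % 2, scale=0.3)]    # movement intensity

clf = GradientBoostingClassifier(random_state=0)
print("cross-validated accuracy:",
      cross_val_score(clf, X, y, cv=5).mean().round(3))
```

Once behaviour labels exist for each time window, the audio recorded in the same windows can be searched for call types that co-occur with particular behaviours, which is the "functional meaning" step described above.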

But not everyone is as gung ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal’s vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.

The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts and it is only by studying the context – who the individual calling is, how are they related to others, where they fall in the hierarchy, who they have interacted with – that meaning can hope to be established. “I just think these AI methods are insufficient,” says Seyfarth. “You’ve got to go out there and watch the animals.”

[Image: A map of animal communication will need to incorporate non-vocal phenomena such as the “waggle dances” of honey bees. Photograph: Ben Birchall/PA]

There is also doubt about the concept – that the shape of animal communication will overlap in a meaningful way with human communication. Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing, says Seyfarth. But it can be “quite different” doing it to other species. “It is an exciting idea, but it is a big stretch,” says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways “more complex than humans have ever imagined”. The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. “These are the tools that let us take off the human glasses and understand entire communication systems,” he says.

See: https://www.theguardian.com/science...telligence-really-help-us-talk-to-the-animals

Animals have developed their own ways of communicating over millions of years, while human speech—and, therefore, language—could not have evolved until the arrival of anatomically modern Homo sapiens about 200,000 years ago (or, per a fossil discovery from 2017, about 300,000 years ago). The idea that speech had to wait for modern human anatomy became known as laryngeal descent theory**, or LDT.

A review paper published in 2019 in Science Advances (https://www.science.org/doi/10.1126/sciadv.aaw3916) aims to tear down the LDT completely. Its authors argue that the anatomical ingredients for speech were present in our ancestors much earlier than 200,000 years ago. They propose that the necessary equipment—specifically, the throat shape and motor control that produce distinguishable vowels—has been around for as long as 27 million years, since humans and Old World monkeys (baboons, mandrills, and the like) last shared a common ancestor.

In any case, decoding and ultimately communicating with non-human species is extremely difficult, and we may need to wait for the advent of the quantum computer before we can have a chat with our dog, cat or horse, let alone a honey bee or a blue whale.
Hartmann352

** Laryngeal descent refers to a movement of the larynx away from the oral and nasal cavities in humans or other mammals, either temporarily during vocalization (dynamic descent) or permanently during development (permanent descent); laryngeal descent theory holds that this permanent descent was a prerequisite for human speech.

It has been known since the nineteenth century that adult humans are unusual in having a descended larynx. In most mammals, the resting position of the larynx is directly beneath the palate, at the back of the oral cavity, and the epiglottis (a flap of cartilage at the top of the larynx) can be inserted into the nasal passage to form a sealed respiratory passage from the nostrils to the lungs. In humans, in contrast, the larynx descends away from the palate during infancy, and adults can no longer engage the larynx into the nasal passages. This trait was once thought to be unique to humans and to play a central role in our ability to speak.

See: https://link.springer.com/referenceworkentry/10.1007/978-3-319-16999-6_3348-1
 
The above mentions "an array of underwater microphones." Let me add some additional information:

[Image: Sonobuoy being loaded into an aircraft. (U.S. Navy imagery used on this website without endorsement expressed or implied.)]

Navies around the world need to perform a variety of missions, from Maritime Security Operations in territorial waters to Power Projection, including interventions in external theaters of operations, and from coastal environments to the deep sea.

Maritime Patrol Aircraft (MPA) have been used intensively since WW2 to detect and prosecute submarines. These platforms are extremely useful as they can rally very rapidly to areas of operational interest and deploy their sonobuoys at sea, operating covertly in passive mode or creating a surprise effect for a submarine. The latest technological advances have introduced spectacular breakthroughs in this field, through both the variety of buoys used and the performance of the processing developed.

Leveraging state-of-the-art developments in sonobuoy processing performed for the UK Royal Navy and the French Navy in the framework of the “AW101 MERLIN CSP” and “ATLANTIQUE 2 STAN” projects respectively, Thales offers the BlueTracker Sonobuoy Processing System product range.



[Image © Thales]

Thales offers two variants of the BlueTracker product:

“BlueTracker MK1”: dedicated to multi-purpose aircraft, dimensioned to process up to 16 buoys simultaneously.

“BlueTracker MK2”: dedicated to aircraft specialized in ASW, dimensioned to process up to 64 buoys simultaneously, in particular SONOFLASH.

Beyond the various types of sonobuoys that are available on the market, BlueTracker can process the new SONOFLASH Active/Passive Low Frequency sonobuoy developed by Thales. SONOFLASH features impressive detection capabilities that can be multiplied when used in multistatic mode with other collaborative buoys or with the FLASH dipping sonar as their operating frequency bands are consistent.

In a world where navies are facing growing and sometimes unexpected threats and challenges, Anti-Submarine Warfare is resurging as a key discipline for the 21st century, and the Thales BlueTracker product range offers the best solutions to ensure navies’ effectiveness and safety at sea.

See: https://www.thalesgroup.com/en/mark...water-warfare/bluescan/sonobuoys-and-sonobuoy

66 Years of Undersea Surveillance

By Captain Brian Taddiken, U.S. Navy and Lieutenant Kirsten Krock, U.S. Navy
February 2021

Naval History Magazine

Just over 66 years ago, one of the Navy’s most secretive communities began. Its members went by the code word SOSUS, which means “Sound Surveillance System.” A new front line in the Cold War, they had one mission: FIND SUBMARINES.

Lack of knowledge and information concerning oceanographic and acoustic conditions off the continental coasts hampered the U.S. Navy’s efforts against the submarine threat during World War II. It was apparent the German Navy had better information and a better understanding of how to use the Atlantic Ocean. Consequently, since the war, the U.S. Navy has maintained a continuous program of oceanographic surveys designed to provide more detailed information on currents, temperature, salinity, and other factors that comprise the oceanic environment and affect the transmission of sound in saltwater. The U.S. Navy was determined never to again lag behind others in its knowledge of this vital battlespace.

In early 1950, on the recommendation of the Committee on Undersea Warfare to the Assistant Chief of Naval Operations, Project Jezebel was born—a long-range program dedicated to the detection, classification, and localization of enemy submarines.

A meeting between admirals, Bell Telephone Laboratories, and a Massachusetts Institute of Technology representative resulted in Project Hartwell, a research group authorized to study the long-range aspects of antisubmarine warfare. Experiments conducted during the spring of 1950 revealed that submarines radiate strong sounds in the low-frequency spectrum. Project Hartwell discovered significant details of low-frequency sound, which would aid in developing a method for detection of submarines at great distances.

[Image: Low-Frequency (LOFAR) paper gram. (Courtesy of the Authors)]

The Navy’s decision to pursue and further fund the research of low-frequency radiated noise resulted in the creation of Project Jezebel. In December 1950, the Office of Naval Research awarded a contract to the Western Electric Company (WECO) to continue research in detecting and identifying the low-frequency sounds radiated by submarines and proceed with development work aimed at the manufacture and installation of equipment for detecting and classifying submarines at long ranges.

Under the supervision of Navy Lieutenant Joe Kelly, Project Jezebel received authorization for six experimental stations. In 1951, a six-element array was installed at Eleuthera Island in the Bahamas. Lieutenant Kelly later came to be known as “the Father of SOSUS.” The experimental stations would be referred to as Caesar stations. The six original Caesar stations eventually grew to nine, tasked with providing surveillance off the U.S. East Coast.

[Image: Eleuthera Laboratory test array. (Courtesy of the Authors)]

To operate the equipment in these stations, the Navy developed Sound Search Course 572, taught out of Key West, Florida. The course was classified; even the service records of the sailors who originally manned these stations were classified. When the first sonar operators arrived at their stations, they had no experience, no reference material, no publication library, no experts to call on, and no previous low-frequency (LOFAR) grams to compare. One of the original analysts said, “We had nothing, we knew nothing, we did not know what made the lines, spacing, etc., how they got to the paper, etc. We were taught harmonics in reference to music, we had to think about every line, what could be making it? And develop a theory about its origin.”

In 1954, the Navy, WECO, and the Seabees were authorized to develop and build ten additional Caesar stations: three in the Atlantic, seven in the Pacific. On 18 September 1954, Naval Facility (NAVFAC) Ramey Air Force Base, Puerto Rico, was the first commissioned; this became known as the birth of SOSUS. The worldwide expansion of U.S. Navy SOSUS coverage had begun.

Caesar stations were windowless buildings with rows of recorders displaying the sound received by hydrophones located hundreds to thousands of miles away on the ocean floor. Each recorder displayed a sound from a separate beam, correlating to a direction. Analysts would walk up and down the rows of machines analyzing the signals on each recorder; this became known as “walking the beams.” Raw data received by the hydrophones was fed through processors, converted to a varying electrical current, and displayed on recorders referred to as gram writers. Each recorder was equipped with large spools of paper that would scroll bottom to top on a metal plate as styli scrolled in unison horizontally across the chemically coated paper, burning the paper at varying intensities depending on how loud the detected noise was, thus displaying the acoustics received by the hydrophones. Even though analysts no longer physically perform these functions, the term “walking the beams” is still used on watch floors today.
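A modern equivalent of those paper grams is a low-frequency spectrogram. The sketch below builds one from a synthetic hydrophone signal and flags narrowband "lines"; the frequencies, amplitudes, and threshold are illustrative assumptions, not operational parameters.

```python
# A LOFARgram is essentially a low-frequency spectrogram. This sketch builds
# one from a synthetic hydrophone signal; real systems add beamforming,
# normalisation, and much longer integration times.
import numpy as np
from scipy.signal import spectrogram

fs = 1000  # Hz; LOFAR analysis concentrates on low frequencies
t = np.arange(0, 60, 1 / fs)

# Synthetic broadband noise plus narrowband "tonals" such as machinery lines.
rng = np.random.default_rng(0)
signal = (rng.normal(scale=1.0, size=t.size)
          + 0.2 * np.sin(2 * np.pi * 60 * t)     # e.g. electrical machinery
          + 0.1 * np.sin(2 * np.pi * 137 * t))   # e.g. a propulsion line

freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=4096, noverlap=2048)
power_db = 10 * np.log10(Sxx + 1e-12)

# Frequencies whose time-averaged level stands well above the background are
# the "lines" an analyst would have looked for on the paper gram.
background = np.median(power_db)
lines = freqs[power_db.mean(axis=1) > background + 10]
print("detected tonal frequencies (Hz):", np.round(lines, 1))
```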

[Image: 1972 – NAVFAC Centerville Beach “gram writers.” (U.S. Navy Museum)]

The 1960s witnessed the development and growth of undersea surveillance. New NAVFACs opened as the system had success after success. To manage the NAVFACs located throughout the world required two separate commanders—Commander, Oceanographic Atlantic, and Commander, Oceanographic Pacific. Some of the decade’s successes include: tracking the ballistic-missile submarine USS George Washington (SSBN-598) from the waters off the continental United States to the United Kingdom; tracking multiple Soviet diesel and nuclear submarines; and the first positive correlation of a SOSUS contact and fixed-wing patrol contact, made from a Soviet Foxtrot-class submarine during the Cuban Missile Crisis.

[Image: USS Thresher (SSN-593). (Naval History and Heritage Command)]

SOSUS was designed as an early-warning surveillance system; however, a tragic naval accident highlighted yet another application for which SOSUS became useful. In 1963, SOSUS analysts detected the sinking of the nuclear-powered attack submarine USS Thresher (SSN-593). Analysis of the LOFARgrams pinpointed the exact location of the incident and the wreckage. Later in the decade, SOSUS played similar roles after two more equally horrific incidents, involving the Skipjack-class submarine USS Scorpion (SSN-589) and the Soviet K-129. Both suffered catastrophic casualties and sank.

With the 1970s came technological upgrades in both shore processing and underwater systems. Major advancements in cable technology stimulated the consolidation of NAVFACs into “super” NAVFACs. In addition, U.S. and Canadian forces combined operations at NAVFAC Argentia, Newfoundland. While the smaller, older facilities were decommissioned, the first super NAVFAC was commissioned in Brawdy, Wales. Super NAVFACs ultimately became Naval Ocean Processing Facilities (NOPFs). These facilities were manned by sonar operators and electronic technicians until ocean systems technician analysts and ocean systems technician maintainers replaced both rates.

In 1972, the concept of towing a long line of hydrophones from a surface ship was proving to be a highly effective method of detecting submarines at long distances. The Navy recognized its value both as a mobile augmentation to the SOSUS fixed arrays and as a means of extending sonar coverage of combatant ships. Ocean surveillance ships (T-AGOS) were specifically designed to tow long-line hydrophone arrays at slow speeds, in areas where fixed arrays lacked acoustic coverage.

By the 1980s SOSUS had grown to a community of thousands of sailors and multiple fixed and mobile systems; SOSUS and the surveillance towed array sensor system (SURTASS) consolidated under a new name, the Integrated Undersea Surveillance System (IUSS). The system saw widespread consolidation of shore assets, a new test array named the “Fixed Distribution System,” arrival of the first operational SURTASS ships, and delivery of the cable repair ship USNS Zeus (T-ARC-7).

The first SURTASS ship, the USNS Stalwart (T-AGOS-1), was commissioned in 1984, followed by the Contender (T-AGOS-2) and Vindicator (T-AGOS-3). Over the next seven years, 18 SURTASS ships were commissioned, necessitating a ship and array repair facility; thus, the IUSS Operations Support Center was stood up in Little Creek, Virginia.

As technology rapidly advanced, facilities were consolidated while coverage remained in place. The ability to transmit acoustic data to one centralized facility reduced manning requirements and led to smaller and older facilities closing. Naval Ocean Processing Facility (NOPF) Dam Neck and NOPF Whidbey Island were both commissioned in the ’80s; of the 31 NAVFACs/NOPFs, they are the only two still in operation.

In 1991, as the Cold War came to a close with the dissolution of the Soviet Union, the IUSS mission was declassified after 41 years of secrecy. The SURTASS fleet took a step forward with the commissioning of the USNS Victorious (T-AGOS-19), the first Small Waterplane Area Twin-Hull (SWATH) SURTASS vessel, in 1992. The SWATH hull gave the vessel a high degree of stability in high seas while conducting operations at slow speeds. Civilian mariners operate the SURTASS vessels; however, they do not conduct acoustic analysis. Acoustics are analyzed on board or transmitted to a shore facility for analysis.

[Image: SWATH-hull SURTASS USNS Impeccable (T-AGOS-23). (U.S. Navy)]

In 1994, the Commanders Undersea Surveillance, Atlantic and Pacific, decommissioned and consolidated into one central command—Commander, Undersea Surveillance, located in Norfolk, Virginia, later relocating to Dam Neck, Virginia.

With the realization that submarines were becoming quieter and detection ranges were decreasing, the Navy looked to active sonar. It approved research into a new system called low-frequency active (LFA) sonar. Passive sonar relies on the source being loud enough to be detected by a receiver. Active sonar transmits its own signal; the transmission travels to an object’s surface, bounces off, and returns to the receiver. LFA uses a much lower frequency than standard active sonars, so it travels greater distances. LFA testing would continue over the next 12 years. In 2003, the ocean surveillance ship MV Cory Chouest became the first LFA operational asset.
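The basic geometry behind active sonar ranging is simple enough to show in a few lines. The numbers below are illustrative only and have nothing to do with actual LFA parameters.

```python
# Back-of-the-envelope sketch of why echo timing gives range in active sonar.
# Real LFA processing involves matched filtering, Doppler analysis, and
# propagation modelling far beyond this.
SOUND_SPEED_SEAWATER = 1500.0  # m/s, a typical average value

def echo_range_km(two_way_travel_time_s: float) -> float:
    """Range to a reflector from the round-trip travel time of a ping."""
    return SOUND_SPEED_SEAWATER * two_way_travel_time_s / 2 / 1000.0

for t in (10.0, 60.0, 120.0):
    print(f"echo after {t:5.1f} s  ->  target at ~{echo_range_km(t):6.1f} km")
```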

The ocean systems technician (OT) rating was disestablished in 1997, and the sonar technician (surface) rating filled the majority of the IUSS billets and duties. Submarine sonar technicians and aviation warfare systems operators filled the remaining billets. With the rating conversion, the throughput for IUSS formal training dwindled, culminating in the disestablishment of the formal training pipeline at the Submarine Learning Center in 2006.

With the emerging Chinese threat in the Pacific and its large search area, all SURTASS vessels were transferred to the Pacific Fleet. USNS Impeccable (T-AGOS-23) and Able (T-AGOS-20) transferred from the Atlantic Fleet to the Pacific Fleet in 2003. By 2015, three of the four Victorious-class vessels were outfitted with a compact version of LFA (CLFA), and the Impeccable is equipped with LFA. Each SURTASS vessel is outfitted with the TL-29A twin-line array, a variation on the submarine TB-29 array.

SURTASS vessels are unique platforms as their sailors (MILDET) only have access to the sonar equipment while on board in a forward-deployed naval force capacity. The MILDETs live and train out of NOPF Whidbey Island, Washington, deploy to meet the ship, and immediately deploy for three to five months of underway operations. With no place to train when not forward-deployed, the Pacific Submarine Force invested in the first high-fidelity trainer installed at Whidbey Island in 2012. Over the years, updates have included two more trainers (three in total), control monitoring stations, and improved simulation capability.

While SOSUS arrays’ primary function is detecting submarines, they also have environmental applications, such as tracking whales and their migration patterns, and seismic events. Alternate or dual-use partnerships exist with a number of agencies and institutions. The National Oceanic and Atmospheric Administration (NOAA) Vents program at its Pacific Marine Environmental Laboratory was granted access to sanitized SOSUS data at Whidbey Island in October 1990. This study combined raw analog data from specific hydrophones with NOAA systems for continuous monitoring of the northeast Pacific Ocean for low-level seismic activity and detection of volcanic activity along the northeast Pacific spreading centers.

A push to reinvigorate IUSS-specific training occurred in 2014. Submarine Learning Center re-established IUSS pipeline training with submarine learning facility detachments at schoolhouses co-located with NOPF Whidbey Island and NOPF Dam Neck. With 11 formal courses being taught and three new courses in development, IUSS training is healthy and will continue to meet the mission.

Development of future capabilities is moving forward swiftly, with a new SURTASS vessel (TAGOS X) under development. Also, a deployable version of the SURTASS system, called Expeditionary SURTASS (SURTASS E), has deployed twice. SURTASS E provides a passive surveillance system in containerized boxes. Modular in design, the system can be installed on the back of nearly any flat-decked ship, providing tremendous flexibility in when and where to operate. Additionally, numerous small systems, easily deployed from any manner of ship, are under development and comprise the Deployable Family of Systems (DSS). The Maritime Surveillance Systems Program Office is developing the DSS as unmanned deployable systems for rapid installation worldwide. These systems provide an asymmetrical response to the proliferation of quiet submarines. DSS includes deep-water passive, deep-water active, and mobile passive and active systems. Some current projects include gliders, the Transformational Reliable Acoustic Path System, the Deep-Water Active Distributed System, and sensor-hosted autonomous remote crafts.

Lastly, efforts are underway to upgrade and replace the existing underwater cables to bring them up to the standards needed to detect quiet, modern submarines.

The new millennium arrived and ushered in a new era of challenges for IUSS. Demonstrating the adaptability and flexibility that has characterized the community over the last 60-plus years, IUSS continues to meet the nation’s maritime surveillance requirements.

CAPT Taddiken, originally from Tacoma, Washington, graduated with merit from the U.S. Naval Academy in 1996 with a bachelor of science degree in systems engineering. He earned a master’s degree in national security and strategic studies from the U.S. Naval War College and a master’s in engineering management from Old Dominion University, a Diploma de Mestre em Ciências Navais from the Brazilian Naval War College, and a master's in business administration from the Federal University of Brazil in Rio de Janeiro. He is currently serving as the IUSS Commodore at Commander Undersea Surveillance.

See: https://www.usni.org/magazines/naval-history-magazine/2021/february/66-years-undersea-surveillance

It is interesting to note the use of Sonobuoys, SOSUS and SURTASS to monitor the sounds found in the deep oceans. Today it's exciting to see that the underwater cables supporting these deep sea listening systems are being upgraded.

Today, China is building an underwater “Great Wall” that reaches out to the island chain stretching from Japan through Taiwan and the Philippines to Indonesia. It is composed of China’s own sound surveillance (SOSUS-style) nets and anti-submarine warfare (ASW) forces, designed to deny the area above all to the U.S. 7th (Pacific) Fleet, but also to other allied navies such as the Japan Maritime Self-Defense Force and the Royal Australian Navy.
Hartmann352
 
  • Like
Reactions: Miles
Feb 22, 2023
5
0
30
Visit site
By pagewriter

February 5, 2023

Earth Species Project (ESP) is an organization dedicated to decoding and ultimately communicating with non-human species, such as animals. CEO Katie Zacarian believes that artificial intelligence* can help make this goal a reality.

The first step in achieving this goal is to recognize patterns in animal language, and then use machine learning systems to analyze that data in order to understand it. In order to determine the potential meaning behind the patterns, scientists need to match the communication with corresponding behavior. ESP is utilizing this approach by studying birds, dolphins, primates, elephants, and honeybees.

The organization believes that marine mammals may be the key for the first breakthrough since much of their communication is done acoustically. If successful, this could lead to two-way conversations with animals…just like Doctor Dolittle! This could also open up new opportunities for considering our relationship with other species on earth. For example, should we ask whales to dive out of the way of boats when it changes their feeding habits? Or should boats change course?

These questions may have answers sooner than we think thanks to Earth Species Project’s pioneering research. By understanding what animals say, and being able to communicate back, we are entering a new era of relationship building between humans and animals which has huge implications for us all. It’s an exciting time for ESP as they continue their mission and bring us closer than ever before to understanding the languages of other species that inhabit our planet.

* During the early years of the Cold War, an array of underwater microphones monitoring for sounds of Russian submarines captured something otherworldly in the depths of the North Atlantic.

The haunting sounds came not from enemy craft, nor aliens, but humpback whales, a species that, at the time, humans had hunted almost to the brink of extinction. Years later, when environmentalist Roger Payne obtained the recordings from U.S. Navy storage and listened to them, he was deeply moved. The whale songs seemed to reveal majestic creatures that could communicate with one another in complex ways. If only the world could hear these sounds, Payne reasoned, the humpback whale might just be saved from extinction.

When Payne released the recordings in 1970 as the album Songs of the Humpback Whale, he was proved right. The album went multi-platinum. It was played at the U.N. general assembly, and it inspired Congress to pass the 1973 endangered species act. By 1986, commercial whaling was banned under international law. Global humpback whale populations have risen from a low of around 5,000 individuals in the 1960s to 135,000 today.

For Aza Raskin, the story is a sign of just how much can change when humanity experiences a moment of connection with the natural world. “It’s this powerful moment that can wake us up and power a movement,” Raskin tells TIME.

Raskin’s focus on animals comes from a very human place. A former Silicon Valley wunderkind himself, in 2006 he was first to invent the infinite scroll, the feature that became a mainstay of so many social media apps. He founded a streaming startup called Songza that was eventually acquired by Google. But Raskin gradually soured on the industry after realizing that technology, which had such capacity to influence human behavior for the better, was mostly being leveraged to keep people addicted to their devices and spending money on unnecessary products. In 2018, he co-founded the Center for Humane Technology with his friend and former Google engineer Tristan Harris, as part of an effort to ensure tech companies were shaped to benefit humanity, rather than the other way around. He is perhaps best known for, alongside scholar Renée DiResta, coining the phrase “freedom of speech is not freedom of reach.” The phrase became a helpful way for responsible technologists, lawmakers and political commentators to distinguish between the constitutional freedom for users to say whatever they like, and the privilege of having it amplified by social media megaphones.

Raskin is talking about whale song because he is also the co-founder and President of the Earth Species Project, an artificial intelligence (AI) nonprofit that is attempting to decode the speech of animals —from humpback whales, to great apes, to crows. The jury is out on whether it would ever truly be possible to accurately “translate” animal communication into anything resembling human language. Meaning is socially constructed, and animal societies are very different to ours.

Despite the seemingly insurmountable challenges the group is facing, the project has made at least some progress, including an experimental algorithm that can purportedly detect which individual in a noisy group of animals is "speaking."

A second algorithm reportedly can generate mimicked animal calls to "talk" directly to them.

"It is having the AI speak the language," Raskin told The Guardian, "even though we don’t know what it means yet."

AI-powered analysis of animal communication includes data sets of both bioacoustics, the recording of individual organisms, and ecoacoustics, the recording of entire ecosystems, according to experts. In October 2022, ESP published the first publicly-available benchmark for measuring the performance of machine learning algorithms in bioacoustics research. The system—known as BEANS (the BEnchmark of ANimal Sounds)—uses 10 datasets of various animal communications and establishes a baseline for machine learning classification and detection performance.

The datasets being studied in various efforts to decode animal communication include recordings from a range of species like birds, amphibians, primates, elephants and insects like honeybees. Communication from domesticated cats and dogs is being studied, too. Yet experts note that communication among cetaceans—whales, dolphins and other marine mammals—is especially promising.

“Cetaceans are particularly interesting because of their long history—34 million years as a socially learning, cultural species,” Zacarian explained. “And because—as light does not propagate well underwater—more of their communication is forced through the acoustic channel.”

Researchers maintain that bioacoustics and AI-powered analysis of animal communication can significantly advance ecological research and conservation efforts.

For instance, in 2021, researchers used audio recordings to identify a new species of blue whales in the Indian Ocean. “Each blue whale population has a distinct vocal signature, which can be used to distinguish and monitor different ‘acoustic populations’ or ‘acoustic groups”, the research team explained in a Nature article detailing the discovery.

Moreover, listening to ecosystems and decoding animal communication can help ecologists gauge the health of the natural environment, experts say. This includes, for instance, developing a better understanding of how distributive human activity like noise population or logging affects animal populations. In Costa Rica, for example, audio recordings were used recently to evaluate the development and health of reforested areas of the rainforest.

“By monitoring the sounds that are coming from nature, we can look for changes in social structure, transmission of cultural information or physiological stress,” Zacarian stated.

AI analysis of animal communication has also been used to help establish marine animal protection zones. Off the West Coast of the United States, for example, researchers have used AI to analyse marine communication recordings as well as shipping route data to create “mobile marine protected areas” and predict potential coalitions between animals and ships.

“Understanding what animals say is the first step to giving other species on the planet ‘a voice’ in conversations on our environment,” said Kay Firth-Butterfield, the World Economic Forum’s head of AI and machine learning.

“For example, should whales be asked to dive out of the way of boats when this fundamentally changes their feeding or should boats change course?”

There are ethical concerns that researchers are confronting, too. This includes, most notably, the possibility of doing harm by establishing two-way communication channels between humans and animals—or animals and machines.


“We’re not quite sure what the effect will be on the animals and whether they even want to engage in some conversations,” Bakker stated. “Maybe if they could talk to us, they would tell us to go away.”

Researchers are taking steps to address and mitigate the concerns about harm and animal exploitation. ESP, for instance, is working with its partners to develop a set of principles to guide its research and ensure it always supports conservation and animal wellbeing.

“We are not yet sure what all the real-world applications of this technology will be,” Zacarian stated. “But we think that unlocking an understanding of the communications of another species will be very significant as we work to change the way human beings see our role, and as we figure out how to co-exist on the planet.”

See: https://pagetraveler.com/humans-may-be-shockingly-close-to-decoding-the-language-of-animals/

See: https://bigthink.com/life/artificial-intelligence-animal-languages/

Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.

Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. “People are starting to use it,” says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. “But we don’t really understand yet how much we can do.”

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative – Project CETI (which stands for the Cetacean Translation Initiative) – plans to use machine learning to translate the communication of sperm whales.

Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals – for example primates, whales and dolphins – the goal is to develop tools that could be applied to the entire animal kingdom. “We’re species agnostic,” says Raskin. “The tools we develop… can work across all of biology, from worms to whales.”

The “motivating intuition” for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages – without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, “king” has a relationship to “man” with the same distance and direction that “woman’ has to “queen”. (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)

It was later noticed that these “shapes” are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word’s point in English. “You can translate most words decently well,” says Raskin.

ESP’s aspiration is to create these kinds of representations of animal communication – working on both individual species and many species at once – and then explore questions such as whether there is overlap with the universal human shape. We don’t know how animals experience the world, says Raskin, but there are emotions, for example grief and joy, it seems some share with us and may well communicate about with others in their species. “I don’t know which will be the more incredible – the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can’t.”

View attachment 2593
Dolphins use clicks, whistles and other sounds to communicate. But what are they saying? Photograph: ALesik/Getty Images/iStockphoto

He adds that animals don’t only communicate vocally. Bees, for example, let others know of a flower’s location via a “waggle dance”. There will be a need to translate across different modes of communication too.

The goal is “like going to the moon”, acknowledges Raskin, but the idea also isn’t to get there all at once. Rather, ESP’s roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so called “cocktail party problem” in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.

“To our knowledge, no one has done this end-to-end detangling [of animal sound] before,” says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals that the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.

Christian Rutz
Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls – made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to “speak” something whale-like – can then be played back to the animals to see how they respond. If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication, explains Raskin. “It is having the AI speak the language, even though we don’t know what it means yet.”

View attachment 2594
Hawaiian crows are well known for their use of tools but are also believed to have a particularly complex set of vocalisations. Photograph: Minden Pictures/Alamy

A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow – a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.

Rutz is particularly excited about the project’s conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species’s call repertoire is being eroded in captivity – specific alarm calls may have been lost, for example – which could have consequences for its reintroduction; that loss might be addressed with intervention. “It could produce a step change in our ability to help these birds come back from the brink,” says Rutz, adding that detecting and classifying the calls manually would be labour intensive and error prone.

Meanwhile, another project seeks to understand automatically the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater and runs one of the world’s largest tagging programmes. Small electronic “biologging” devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially – the lab has tagged several animals in the same group so it is possible to see how signals are given and received. Friedlaender says he was “hitting the ceiling” in terms of what currently available tools could tease out of the data. “Our hope is that the work ESP can do will provide new insights,” he says.
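
A hedged sketch of that two-step recipe: infer behavioural states from tag features without labels, then cross-tabulate those states with the call types detected in the matching audio windows. The feature dimensions, the number of states and the random data are all assumptions; the real analysis would use actual biologging measurements and acoustic detections.

```python
# Rough sketch of the two-step idea: (1) infer behavioural states from tag
# data without labels, (2) cross-tabulate those states with call types heard
# in the same time windows. All values below are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_windows = 600                                  # e.g. one window per minute of tag data

# Step 1: unsupervised behavioural states from tag features
# (depth, pitch, acceleration variance, ...).
tag_features = rng.standard_normal((n_windows, 6))
tag_features[:200, 0] += 4                       # pretend one block is "feeding-like"
states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(tag_features)

# Step 2: call types detected in the audio for the same time windows
# (random stand-ins here for real detections).
call_types = rng.integers(0, 4, size=n_windows)

# Cross-tabulate: does a call type occur disproportionately in one state?
table = np.zeros((3, 4), dtype=int)
for s, c in zip(states, call_types):
    table[s, c] += 1
print("rows = behavioural state, columns = call type")
print(table)
```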

But not everyone is as gung ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at the University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal’s vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.

The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts and it is only by studying the context – who the individual calling is, how they are related to others, where they fall in the hierarchy, who they have interacted with – that meaning can hope to be established. “I just think these AI methods are insufficient,” says Seyfarth. “You’ve got to go out there and watch the animals.”

A map of animal communication will need to incorporate non-vocal phenomena such as the “waggle dances” of honey bees. Photograph: Ben Birchall/PA

There is also doubt about the concept – that the shape of animal communication will overlap in a meaningful way with human communication. Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing, says Seyfarth. But it can be “quite different” doing it to other species. “It is an exciting idea, but it is a big stretch,” says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways “more complex than humans have ever imagined”. The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. “These are the tools that let us take off the human glasses and understand entire communication systems,” he says.

See: https://www.theguardian.com/science...telligence-really-help-us-talk-to-the-animals

Animals have developed their own ways of communicating over millions of years, while human speech – and, therefore, language – supposedly could not have evolved until the arrival of anatomically modern Homo sapiens about 200,000 years ago (or, per a fossil discovery from 2017, about 300,000 years ago), because only they were thought to possess the descended larynx needed to produce the full range of speech sounds. This line of thinking became known as laryngeal descent theory**, or LDT.

A review paper published in 2019 in Science Advances (https://www.science.org/doi/10.1126/sciadv.aaw3916) aims to tear down the LDT completely. Its authors argue that the anatomical ingredients for speech were present in our ancestors much earlier than 200,000 years ago. They propose that the necessary equipment – specifically, the throat shape and motor control that produce distinguishable vowels – has been around for as long as 27 million years, dating to when humans and Old World monkeys (baboons, mandrills, and the like) last shared a common ancestor.

In any case, decoding and ultimately communicating with non-human species is extremely difficult, and it may have to wait for the advent of the quantum computer before we can have a chat with our dog, cat or horse, let alone a honeybee or a blue whale.
Hartmann352

** Laryngeal descent theory takes its name from laryngeal descent, the movement of the larynx away from the oral and nasal cavities in humans and other mammals, either temporarily during vocalization (dynamic descent) or permanently during development (permanent descent).

It has been known since the nineteenth century that adult humans are unusual in having a descended larynx. In most mammals, the resting position of the larynx is directly beneath the palate, at the back of the oral cavity, and the epiglottis (a flap of cartilage at the top of the larynx) can be inserted into the nasal passage to form a sealed respiratory passage from the nostrils to the lungs. In humans, in contrast, the larynx descends away from the palate during infancy, and adults can no longer engage the larynx into the nasal passages. This trait was once thought to be unique to humans and to play a central role in our ability to speak.

See: https://link.springer.com/referenceworkentry/10.1007/978-3-319-16999-6_3348-1
People will believe anything that sounds exciting or cool or new, or “why did it take so long?” type stuff. It's unbelievable how gullible the human population is. Suckers, straight-up suckers. All they're doing is making money off the people who feed into their hype: the more views they get, the more advertisers will pay for ads, so they'll say whatever it takes to get people's attention. As if the animals are telling each other, “Hey, let's go for a run,” or “Hey, let's get something to drink.” What they're really saying is, “Don't kill us for food; that's not what we're for in the first place.” Yes, I eat meat, but at least I know we're not supposed to, hence why red meat is on the carcinogenic list put out by the WHO, I believe. Trust your science, right? Lol, unbelievable.
 
Apparently you haven't heard that even bacteria talk to each other.

Look up "quorum sensing", the evolutionary beginning of all intraspecies and interspecies communication.
 
They communicate like everything does – plants, trees, mushrooms, bugs, and obviously bacteria, since that's what everything is made of, bacteria and enzymes. But they don't talk to each other in the sense that humans do. They don't tell each other jokes, or ask if they want to go get coffee, or ask to use the bathroom. Lol
 
Bacterial quorum sensing tells all the bacteria to act in unison.
When two honeybees come back with news of good food-rich spots for a new hive, their waggle dances are weighed by the rest of the hive and only one of the spots is selected by quorum vote.

Abstract sophistication of meaning may be lacking, but utilitarian meanings are clearly understood. Survival in nature only requires sufficient ability to survive.

A. Different Ways to Use Language
A proposition is a description of some state of affairs that is either true or false. One way of determining whether a proposition is true is to actually observe the state of affairs it describes and see if that state of affairs corresponds to the description given.
For instance, if someone declares that it is raining outside, one way of determining if this is actually true is to look outside and see if rain is falling. If rain is falling, then the proposition, "It is raining outside" is true. If rain is not falling, then the proposition is false.
However, we do not always use direct observation in order to decide whether a proposition is true or false. Often, we infer the truth or falsity of a proposition.
Although we might not be able to see or hear what is going on outside, if someone enters from outside wearing a wet raincoat and is carrying a wet umbrella, we would normally conclude that the proposition, "It is raining outside" is true.
Logic is concerned with how we reason from certain propositions accepted as true (e.g., Jones has just entered from outside wearing a wet raincoat and carrying a wet umbrella) to different propositions not otherwise known to be true (e.g., It is raining outside.) In everyday life and in formal systems, logic is the study of the forms of correct inference.
more.....
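
The raincoat example can be written out as a tiny forward-chaining inference: start from facts accepted as true, apply an if-then rule, and derive a proposition that was never observed directly. The rule and facts below simply restate the passage above.

```python
# A tiny forward-chaining illustration of the raincoat example: from facts
# we accept as true and an if-then rule, infer a proposition we did not
# observe directly.
facts = {"Jones entered wearing a wet raincoat", "Jones is carrying a wet umbrella"}

rules = [
    ({"Jones entered wearing a wet raincoat", "Jones is carrying a wet umbrella"},
     "It is raining outside"),
]

# Apply rules repeatedly until no new propositions can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("It is raining outside" in facts)   # True: inferred, not observed
```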

It is similar to mathematics. Humans can use abstract mathematical concepts and mathematical theories, but almost all animals can make a rudimentary assessment that one quantity appears larger than another.

Rhesus monkeys were tested against college students for instant quantity cognition (without counting) and performed as well as the students.

The rest is forced by natural selection.

I read an article on how humans acquired their large brains.

Human Chromosome 2 is a fusion of two ancestral chromosomes

Alec MacAndrew
Introduction
All great apes apart from man have 24 pairs of chromosomes. There is therefore a hypothesis that the common ancestor of all great apes had 24 pairs of chromosomes and that the fusion of two of the ancestor's chromosomes created chromosome 2 in humans. The evidence for this hypothesis is very strong.

......
Let us reiterate what we find on human chromosome 2. Its centromere is at the same place as the chimpanzee chromosome 2p centromere, as determined by sequence similarity. Even more telling is the fact that on the 2q arm of human chromosome 2 are the unmistakable remains of the original centromere of the common ancestor's 2q chromosome, at the same position as the chimp 2q centromere (this structure in humans no longer acts as a centromere for chromosome 2).
Conclusion
The evidence that human chromosome 2 is a fusion of two of the common ancestor's chromosomes is overwhelming.
http://www.evolutionpages.com/chromosome_2.htm
 
This study is exciting. It's nice to think that we might be on the verge of a breakthrough.
I think this work can help us better understand and protect the natural world around us. Looking forward to following it.
 
I used to have 50 Golden Comet hens and 4 roosters and harvested several dozen eggs per day.
It is not well known that chickens are great "talkers", and mother hens teach their chicks many useful and life-saving habits by talking to them.
The roosters are constantly looking for predators: when a threat is flying overhead they make a high-pitched squeal, and when it is in the surrounding bushes they lower their voices to indicate a threat on the ground.

I could sit for hours watching their delightful social interactions.

How Do Chickens Communicate With Each Other?

All animals have their own unique way of communicating with each other. For chickens, that communication is mostly vocal. Hens and roosters like to make themselves heard, and they use a variety of sounds to indicate different thoughts, feelings, and behaviors. If you spend enough time around your flock, you’ll start recognizing a few of the sounds your birds make. To better understand them, read our guide for how chickens communicate with each other.
CLUCKING
Clucking is one of the most common chicken noises. Hens and roosters both cluck—or chuck, as some people describe it. It’s a conversational sound that chickens make among themselves. Hens will also cluck to their chicks to call them over when they find something interesting to eat or play with.
CACKLING
This is a loud calling noise that hens make after laying eggs. Other hens sometimes join in the call, which might last for a few minutes. Some chicken-keepers say the cackle is a yell of relief after laying the eggs, while others believe it to be a shout of pride.
GROWLING
Like many animals, chickens growl when they feel threatened. Hens commonly growl when sitting on their eggs. It’s a way of warning anyone or anything that disturbs them or gets too close when they’re nesting. Hens often follow this up with an angry peck, so it’s best to heed a chicken’s growl whenever you hear it.
SQUAWKING
Chickens squawk when something startles or scares them. You’ll probably hear this sound when you grab your chickens. Roosters and hens both squawk. Other chickens will react to the noise, but whether they run to or away from the source depends on what’s going on.
ROOSTER SOUNDS
Part of understanding how chickens communicate with each other is understanding the noises that roosters make. For example, the iconic early morning crow is a rooster sound. Roosters crow as a way of announcing and defending their own territories.
They might also make a soft clucking or perp-perp noise to call hens over when they find a good food supply. Roosters might also make fighting sounds when they feel aggressive or threatened.
Chicken sounds are fascinating and endlessly entertaining. Take some time to sit around and listen to them talk to each other—you might learn a thing or two.
The more you know about your chickens, the better you’ll be at taking care of them. You can learn more about caring for your birds at Stromberg’s.
 

Quorum sensing: cell-to-cell communication in bacteria

By Christopher M Waters, Bonnie L Bassler

DOI: 10.1146/annurev.cellbio.21.012704.131001

Abstract

Bacteria communicate with one another using chemical signal molecules. As in higher organisms, the information supplied by these molecules is critical for synchronizing the activities of large groups of cells.

In bacteria, chemical communication involves producing, releasing, detecting, and responding to small hormone-like molecules termed autoinducers. This process, termed quorum sensing, allows bacteria to monitor the environment for other bacteria and to alter behavior on a population-wide scale in response to changes in the number and/or species present in a community. Most quorum-sensing-controlled processes are unproductive when undertaken by an individual bacterium acting alone but become beneficial when carried out simultaneously by a large number of cells.

Thus, quorum sensing confuses the distinction between prokaryotes and eukaryotes because it enables bacteria to act as multicellular organisms. This review focuses on the architectures of bacterial chemical communication networks; how chemical information is integrated, processed, and transduced to control gene expression; how intra- and interspecies cell-cell communication is accomplished; and the intriguing possibility of prokaryote-eukaryote cross-communication.

Quorum sensing is the regulation of gene expression in response to fluctuations in cell-population density. Quorum sensing bacteria produce and release chemical signal molecules called autoinducers that increase in concentration as a function of cell density. The detection of a minimal threshold stimulatory concentration of an autoinducer leads to an alteration in gene expression. Gram-positive and Gram-negative bacteria use quorum sensing communication circuits to regulate a diverse array of physiological activities. These processes include symbiosis, virulence, competence, conjugation, antibiotic production, motility, sporulation, and biofilm formation. In general, Gram-negative bacteria use acylated homoserine lactones as autoinducers, and Gram-positive bacteria use processed oligo-peptides to communicate. Recent advances in the field indicate that cell-cell communication via autoinducers occurs both within and between bacterial species. Furthermore, there is mounting data suggesting that bacterial autoinducers elicit specific responses from host organisms. Although the nature of the chemical signals, the signal relay mechanisms, and the target genes controlled by bacterial quorum sensing systems differ, in every case the ability to communicate with one another allows bacteria to coordinate the gene expression, and therefore the behavior, of the entire community. Presumably, this process bestows upon bacteria some of the qualities of higher organisms. The evolution of quorum sensing systems in bacteria could, therefore, have been one of the early steps in the development of multicellularity.

See: https://pubmed.ncbi.nlm.nih.gov/11544353/

Bacteria communicate with one another, not with words, but with chemicals called autoinducers. When autoinducer levels start to increase, the bacteria know that there are many other cells around and, as a group, they start to exhibit new behaviors that are only effective when many cells act together.
  • Quorum sensing is involved in biofilm formation (communities of bacteria adhered to surfaces), pathogenesis, symbiosis, and many other processes.

Part I: Journey to Discovery - Quorum Sensing and the Molecules Involved


  • Quorum sensing was originally discovered in bacteria that could take up DNA from the environment and in marine bacteria that emit bioluminescence (make light). We will describe the two experiments that showed for the first time that bacteria communicated with chemical molecules and that they could act in groups.
  • The original findings were considered irrelevant for many years because the results were thought to be isolated to a few obscure bacteria. Later, quorum sensing was shown to be widespread in the bacterial world and crucial for bacteria that cause disease.
  • We will describe how strategies using gene transfer (gain-of-function) and genetic mutations (loss-of-function) contributed to the identification of the proteins involved in quorum sensing and to the understanding of the signal transduction mechanism.

Part II: Knowledge Overview - How bacteria talk to one another

  • Bacteria communicate using small molecules called autoinducers. The autoinducers are released by the bacteria into the environment. The higher the bacterial population density, the higher the concentration of the autoinducer in the environment.
  • Once an autoinducer has reached a threshold concentration, it can bind in sufficient amounts to a partner receptor protein that is made by the bacterium. Autoinducer binding changes the receptor’s activity (either its phosphorylation state or its ability to bind DNA). This change elicits an alteration in the expression of genes encoding proteins that enable group behaviors. The sequence of events from receptor binding to the change in gene expression is called a signaling cascade or signal transduction.
  • One gene that is typically activated by quorum sensing is the gene encoding the enzyme that produces the autoinducer itself. Thus, when the bacteria detect the buildup of autoinducer, they make even more autoinducer. This arrangement is called a “positive feedback loop,” and, in this case, feedback ensures that all nearby cells switch on their quorum-sensing-controlled genes. (A minimal simulation of this threshold-and-feedback behaviour is sketched after this list.)
  • Quorum-sensing-controlled behaviors are typically ones that require many bacteria acting in concert to make the behavior effective: bioluminescence, virulence factor production, biofilm formation, symbiosis, and release of exo-products (so-called public goods that are shared by all members of the community).
  • Bacteria often live in communities consisting of multiple bacterial species. In such environments, individual bacteria need to be able to detect how many of their own species are present versus how many other species are present.
  • To distinguish self from non-self, bacteria release multiple quorum-sensing autoinducers, some of which only one particular species can sense and some of which can be detected by a few or many bacterial species. The blend of autoinducers perceived by a bacterium corresponds to how many and what bacterial species are nearby. Decoding blends of autoinducers allows bacteria to carry out appropriate quorum-sensing-controlled behaviors both in mono- and multi-species communities.
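
Here is the minimal simulation referenced in the feedback-loop bullet above: cells secrete autoinducer at a basal rate, and once the concentration crosses a threshold the quorum-sensing genes switch on and production jumps. Every number (rates, threshold, growth factor) is an arbitrary illustrative choice, not a measured value.

```python
# Minimal simulation of the threshold + positive-feedback behaviour described
# above. All rates, the threshold and the growth factor are arbitrary
# illustrative values, not measurements from any real species.
steps, cells, autoinducer = 60, 10.0, 0.0
threshold = 50.0
basal_rate, induced_rate = 0.05, 0.5   # autoinducer made per cell per step
decay = 0.02                            # fraction of autoinducer lost per step

switched_on_at = None
for t in range(steps):
    cells *= 1.1                                       # population grows
    quorum_on = autoinducer >= threshold               # group-behaviour switch
    rate = induced_rate if quorum_on else basal_rate   # positive feedback
    autoinducer = autoinducer * (1 - decay) + rate * cells
    if quorum_on and switched_on_at is None:
        switched_on_at = t
    if t % 10 == 0:
        print(f"t={t:2d}  cells={cells:8.0f}  autoinducer={autoinducer:9.1f}  "
              f"quorum={'ON' if quorum_on else 'off'}")

print(f"quorum-sensing genes first switched on at step {switched_on_at}")
```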

Part III: Frontiers - Quorum Sensing and Disease

  • Scientists are developing synthetic compounds that disrupt bacterial quorum sensing by blocking autoinducer production or detection.
  • Disabling quorum sensing makes bacteria unable to act as collectives, which decreases bacterial pathogenicity, biofilm formation, etc.
  • Quorum-sensing interference strategies are also found in nature. In mixed bacterial communities, chemical “warfare” is occurring, which allows particular bacteria to cheat, free ride, eavesdrop, and send misinformation!
  • These natural quorum-sensing interference strategies (called quorum quenching) are being used as inspiration to develop antimicrobial medicines, surface coatings that prevent bacterial biofilm formation, and other products for health, industry, agriculture, and the environment.
  • Another horizon for scientists is to discover how quorum sensing works in natural, rather than laboratory, contexts. Challenges include learning how quorum sensing operates in heterogeneous environments that fluctuate in time, space, and bacterial species composition as well as contain eukaryotic hosts and viruses.
No one could have imagined initially that the study of bioluminescence in bacteria would lead to new ideas for biotechnology. Curiosity-driven research often lays the groundwork for pragmatic applications.

See: https://explorebiology.org/learn-overview/cell-biology/quorum-sensing:-how-bacteria-communicate

Quorum sensing is a process of bacterial cell-to-cell chemical communication that relies on the production, detection and response to extracellular signalling molecules called autoinducers found in the bacterial film. Quorum sensing allows groups of bacteria to synchronously alter behaviour in response to changes in the population density and species composition of the vicinal community. Quorum-sensing-mediated communication is now understood to be the norm in the bacterial world. Elegant research has now defined quorum-sensing components and their interactions, for the most part, under ideal and highly controlled conditions.
Hartmann352