
NYC Gazette

Tuesday, November 5, 2024

Scientists reveal how brains differentiate between music and speech

Nouriel Roubini, Professor of Economics and International Business at New York University's Stern School of Business | New York University's Stern School of Business

Music and speech are among the most frequent types of sounds we hear. But how do we identify the differences between the two? An international team of researchers has mapped out this process through a series of experiments, yielding insights that could optimize therapeutic programs that use music to help individuals with aphasia regain the ability to speak. This language disorder affects more than 1 in 300 Americans each year, including Wendy Williams and Bruce Willis.

“Although music and speech are different in many ways, ranging from pitch to timbre to sound texture, our results show that the auditory system uses strikingly simple acoustic parameters to distinguish music and speech,” explains Andrew Chang, a postdoctoral fellow in New York University’s Department of Psychology and the lead author of the paper published in PLOS Biology. “Overall, slower and steady sound clips of mere noise sound more like music while the faster and irregular clips sound more like speech.”

Scientists measure signal rates in hertz (Hz), where a larger number indicates more occurrences per second. For example, people typically walk at a pace of 1.5 to 2 steps per second (1.5-2 Hz). The beat of Stevie Wonder’s 1972 hit “Superstition” is approximately 1.6 Hz, while Anna Karina’s 1967 song “Roller Girl” clocks in at 2 Hz. Speech, in contrast, is typically two to three times faster, at 4-5 Hz.
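For readers more used to musical tempo markings, these rates translate directly into events per minute. The short sketch below is an illustrative conversion only, not part of the study; the walking and speech figures are midpoints of the ranges quoted in this article.

```python
# Illustrative only: convert the rates quoted above from hertz
# (events per second) to events per minute, the unit musicians use for tempo.
def hz_to_per_minute(hz: float) -> float:
    return hz * 60.0  # 1 Hz = 60 events per minute

rates_hz = {
    "typical walking pace": 1.75,        # midpoint of the 1.5-2 Hz range
    "'Superstition' beat": 1.6,          # ~96 beats per minute
    "'Roller Girl' beat": 2.0,           # 120 beats per minute
    "typical speech (syllables)": 4.5,   # midpoint of the 4-5 Hz range
}
for label, hz in rates_hz.items():
    print(f"{label}: {hz} Hz = {hz_to_per_minute(hz):.0f} per minute")
```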

It has been well documented that a song’s volume over time—known as “amplitude modulation”—is relatively steady at 1-2 Hz. By contrast, the amplitude modulation of speech is typically 4-5 Hz, meaning its volume changes frequently.
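As a rough illustration of what an “amplitude modulation” rate means in practice, here is a minimal Python sketch (not the authors’ analysis code) that extracts a clip’s volume envelope and reports the dominant rate at which it fluctuates. The function name, the 20 Hz search range, and the synthetic demo clips are assumptions made for illustration.

```python
# Minimal sketch: estimate an amplitude-modulation rate by extracting the
# volume envelope (Hilbert transform) and finding its dominant frequency.
import numpy as np
from scipy.signal import hilbert

def dominant_am_rate(signal, sample_rate, max_rate_hz=20.0):
    """Return the dominant amplitude-modulation rate (Hz) of `signal`."""
    envelope = np.abs(hilbert(signal))      # slow "volume over time" curve
    envelope -= envelope.mean()             # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
    in_range = freqs <= max_rate_hz         # only consider slow AM rates
    return freqs[in_range][np.argmax(spectrum[in_range])]

# Demo: white noise whose volume pulses 2 times per second (music-like)
# versus 5 times per second (speech-like).
rng = np.random.default_rng(0)
sr, dur = 16_000, 4.0
t = np.arange(int(sr * dur)) / sr
for rate in (2.0, 5.0):
    clip = rng.standard_normal(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * rate * t))
    print(f"true AM rate {rate} Hz -> estimated {dominant_am_rate(clip, sr):.2f} Hz")
```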

Despite the ubiquity and familiarity of music and speech, scientists previously lacked a clear understanding of how we effortlessly identify a sound as music or speech.

To better understand this process, Chang and colleagues conducted four experiments involving more than 300 participants who listened to audio segments of synthesized music- and speech-like noise with various amplitude modulation speeds and regularity. Participants were asked to judge whether these ambiguous noise clips sounded like music or speech.
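To make the stimulus manipulation concrete, the sketch below (an assumption about how such clips could be built, not the study’s actual stimulus code) generates white noise whose volume rises and falls at a chosen rate, with a “regularity” parameter that jitters the length of each volume cycle.

```python
# Sketch of amplitude-modulated noise: white noise whose volume rises and
# falls at ~rate_hz, with a regularity knob that jitters each cycle's length.
import numpy as np

def am_noise(rate_hz, regularity, duration_s=4.0, sample_rate=16_000, seed=0):
    """regularity=1.0 -> perfectly periodic modulation (more music-like);
    regularity near 0 -> heavily jittered cycle lengths (more speech-like)."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * sample_rate)
    nominal = sample_rate / rate_hz          # samples per modulation cycle
    cycles = []
    while sum(len(c) for c in cycles) < n:
        period = nominal * (1 + (1 - regularity) * rng.uniform(-0.5, 0.5))
        cycles.append(np.sin(np.linspace(0, np.pi, int(period))))  # one rise-and-fall
    envelope = np.concatenate(cycles)[:n]
    return rng.standard_normal(n) * envelope

# Example stimuli along the two manipulated dimensions (rate and regularity).
music_like = am_noise(rate_hz=2.0, regularity=1.0)   # slow, regular
speech_like = am_noise(rate_hz=5.0, regularity=0.3)  # fast, irregular
```

In the study, slower and more regular clips of this kind tended to be heard as music, while faster and more irregular ones tended to be heard as speech.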

Observing how participants sorted hundreds of noise clips revealed how much the speed and regularity of each clip's amplitude modulation influenced whether they judged it to be music or speech. The scientists concluded that if a soundwave carries a feature matching listeners’ ideas of music or speech, even white noise can be perceived as such.

Understanding how the human brain differentiates between music and speech can potentially benefit individuals with auditory or language disorders such as aphasia. Melodic intonation therapy is one promising approach that trains people with aphasia to sing what they want to say by using their intact musical mechanisms to bypass damaged speech mechanisms.

The study's other authors include Xiangbin Teng of the Chinese University of Hong Kong, M. Florencia Assaneo of the National Autonomous University of Mexico (UNAM), and David Poeppel, a professor in NYU’s Department of Psychology and managing director of the Ernst Strüngmann Institute for Neuroscience in Frankfurt, Germany.

The research was supported by a grant from the National Institute on Deafness and Other Communication Disorders (F32DC018205) and Leon Levy Scholarships in Neuroscience.
