Scientists show how brain differentiates lyrics from music

ANI Science
Last Updated: Feb 28 2020 | 5:46 PM IST

The perception of speech and music, two of the most uniquely human uses of sound, is enabled by specialized neural systems in opposite brain hemispheres, each adapted to respond to specific features in the acoustic structure of sound, a new study has found.

Though it has been known for decades that the two hemispheres of the brain respond differently to speech and music, this study used a novel approach to reveal why this specialization exists, showing that it depends on the type of acoustic information in the stimulus.

Music and speech are often inextricably entwined, and the human ability to recognize and separate words from melodies in a single continuous sound wave represents a significant cognitive challenge.

Speech perception is thought to rely strongly on the ability to process rapid temporal modulations, while melody perception is thought to depend on the detailed spectral composition of sounds, such as fluctuations in frequency.

Previous studies have proposed a left- and right-hemisphere neural specialization for handling speech and music information, respectively.

However, whether this brain asymmetry stems from the different acoustic cues of speech and music or from domain-specific neural networks has remained unclear.

By combining ten original sentences with ten original melodies, Philippe Albouy and colleagues created a collection of 100 unique a cappella songs, each containing acoustic information in both the temporal (speech) and spectral (melodic) domains.

The nature of the recordings allowed the authors to manipulate the songs and selectively degrade each in either the temporal or spectral domain.
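The study degraded the songs by selectively filtering their temporal or spectral modulations. As a rough sketch of the general idea, and not the authors' actual processing pipeline, the Python snippet below blurs a song's spectrogram along a single axis: smoothing across time frames washes out the fast temporal modulations that speech relies on, while smoothing across frequency bins washes out the fine spectral detail that melody relies on. The function, window sizes, and stand-in signal are all illustrative assumptions.

    import numpy as np
    from scipy.signal import stft, istft
    from scipy.ndimage import uniform_filter1d

    def degrade(audio, fs, axis, width=15):
        # Spectrogram: rows are frequency bins, columns are time frames.
        f, t, Z = stft(audio, fs=fs, nperseg=1024)
        mag, phase = np.abs(Z), np.angle(Z)
        # Blur the magnitude along one axis only:
        #   axis=1 smooths across time frames, removing fast temporal
        #          modulations (the cue speech recognition relies on);
        #   axis=0 smooths across frequency bins, removing fine spectral
        #          detail (the cue melody recognition relies on).
        mag = uniform_filter1d(mag, size=width, axis=axis)
        # Resynthesize using the original phase.
        _, out = istft(mag * np.exp(1j * phase), fs=fs, nperseg=1024)
        return out

    fs = 22050                              # hypothetical sample rate
    song = np.random.randn(2 * fs)          # stand-in for an a cappella recording
    temporally_degraded = degrade(song, fs, axis=1)  # should impair speech cues
    spectrally_degraded = degrade(song, fs, axis=0)  # should impair melodic cues

Blurring only the spectrogram magnitude and reusing the original phase keeps the reconstruction simple; it is one of several ways such a degradation could plausibly be implemented.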

Albouy and his team found that degrading temporal information impaired speech recognition but not melody recognition; conversely, melody recognition declined only when the songs' spectral information was degraded.

Concurrent fMRI brain scanning revealed asymmetric neural activity: speech content was decoded primarily in the left auditory cortex, while melodic content was decoded primarily in the right.


First Published: Feb 28 2020 | 5:32 PM IST
