To better understand speech, focus on who's talking, study suggests


Washington | October 26, 2021 11:17:15 PM IST
In a recent study, researchers found that matching the locations of faces with the speech sounds they produce significantly improves listeners' ability to understand them, especially in noisy settings where other talkers are present.

In the Journal of the Acoustical Society of America, published by the Acoustical Society of America through AIP Publishing, researchers from Harvard University, University of Minnesota, University of Rochester, and Carnegie Mellon University outline a set of online experiments that mimicked aspects of distracting scenes to learn more about how we focus on one audio-visual talker and ignore others.

"If there's only one multisensory object in a scene, our group and others have shown that the brain is perfectly willing to combine sounds and visual signals that come from different locations in space," said author Justin Fleming. "It's when there's a multisensory competition that spatial cues take on more importance."

The researchers first asked participants to attend to one talker's speech and ignore another talker, with the corresponding faces and voices originating either from the same location or from different locations. Participants performed significantly better when the face matched where the voice was coming from.

Next, they found that task performance decreased when participants directed their gaze toward the location of a distracting voice.

Finally, the researchers showed that spatial alignment between faces and voices mattered more when the background noise was louder, suggesting the brain relies more heavily on audio-visual spatial cues in challenging sensory environments.
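
The paper reports the actual statistics; purely as an illustration of the kind of comparison described in the three findings above, the Python sketch below tabulates accuracy by spatial-alignment condition and noise level. The trial data, column names, and numbers are invented for this example, chosen so that the matched-versus-mismatched gap widens with noise, mirroring the reported pattern.

    import pandas as pd

    # Hypothetical trial-level results: each row is one trial, labelled with
    # whether the target face and voice were spatially matched or mismatched,
    # the background-noise level, and whether the response was correct.
    trials = pd.DataFrame({
        "alignment": ["matched", "matched", "mismatched", "mismatched"] * 3,
        "noise":     ["low"] * 4 + ["medium"] * 4 + ["high"] * 4,
        "correct":   [1, 1, 1, 1,   1, 1, 1, 0,   1, 1, 0, 0],
    })

    # Mean accuracy for every alignment x noise combination.
    accuracy = (
        trials.groupby(["noise", "alignment"])["correct"]
        .mean()
        .unstack("alignment")
    )

    # The matched-minus-mismatched difference per noise level; the study's
    # third finding corresponds to this gap growing as the noise gets louder.
    accuracy["spatial_benefit"] = accuracy["matched"] - accuracy["mismatched"]
    print(accuracy)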

The pandemic forced the group to get creative about conducting such research with participants over the internet.

"We had to learn about and, in some cases, create several tasks to make sure participants were seeing and hearing the stimuli properly, wearing headphones, and following instructions," Fleming said.

Fleming hopes the findings will lead to improved designs for hearing devices and better handling of sound in virtual and augmented reality. The team plans to expand on the work by bringing additional real-world elements into the fold.

"Historically, we have learned a great deal about our sensory systems from studies involving simple flashes and beeps," he said. "However, this and other studies are now showing that when we make our tasks more complicated in ways that better simulate the real world, new patterns of results start to emerge." (ANI)

 