Want to hone your public speaking skills? Google Glass may help!
Researchers have developed a new system that uses Google Glass to provide real-time feedback to the speaker on volume modulation and speaking rate.
Smart glasses running an intelligent user interface called Rhema can record a speech, transmit the audio to a server that automatically analyses volume and speaking rate, and then present the results to the speaker in real time.
This feedback allows a speaker to adjust the volume and speaking rate or continue as before.
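The loop described above — capture a window of audio, estimate volume and speaking rate, and surface a short prompt — can be sketched roughly as follows. The thresholds, window length, and function names here are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical thresholds -- the study's actual values are not given here.
QUIET_RMS = 0.05   # below this amplitude, prompt "louder"
FAST_WPM = 160     # above this pace, prompt "slower"

def rms_volume(samples):
    """Root-mean-square amplitude of one audio window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def feedback(samples, words_spoken, window_seconds):
    """Return a one-word prompt, or None if the speaker is doing fine."""
    wpm = words_spoken / window_seconds * 60
    if rms_volume(samples) < QUIET_RMS:
        return "louder"
    if wpm > FAST_WPM:
        return "slower"
    return None  # blank display: no correction needed

# Simulated 20-second window: quiet audio, moderate pace.
print(feedback([0.01, -0.02, 0.015], words_spoken=50, window_seconds=20))
```

In the real system the analysis runs on a server and only the prompt is sent back to the glasses; this sketch just shows the kind of decision involved.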
Researchers at the University of Rochester explained that providing feedback in real time during a speech presents some challenges.
"One challenge is to keep the speakers informed about their speaking performance without distracting them from their speech," researchers said.
"A significant enough distraction can introduce unnatural behaviours, such as stuttering or awkward pausing. Secondly, the head-mounted display is positioned near the eye, which might cause inadvertent attention shifts," they said.
Iftekhar Tanveer, the lead author of the research paper, explained that overcoming these challenges was their focus.
To do this, they tested the system with a group of 30 native English speakers using Google Glass, evaluating different options for delivering the feedback.
They experimented with using different colours (like a traffic light system), words and graphs, and no feedback at all (control).
They also tried a continuous, slowly changing display and a sparse feedback system, in which the speaker sees nothing on the glasses most of the time and then sees feedback for just a few seconds.
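The difference between the continuous and sparse modes comes down to display scheduling. A minimal sketch, assuming a 20-second cycle with the prompt visible for the first few seconds (the exact timings are assumptions, not from the article):

```python
def display_state(elapsed_seconds, period=20.0, visible_for=3.0):
    """Sparse mode: the display is blank most of the time, and the
    prompt appears only during a short window at the start of each
    period. period and visible_for are assumed values."""
    return "show" if (elapsed_seconds % period) < visible_for else "blank"

# Continuous mode, by contrast, would always return "show",
# with the displayed value slowly updating instead.
```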
After user testing, delivering feedback every 20 seconds in the form of words ("louder," "slower," or nothing if the speaker is doing well) was deemed the most successful by most of the test users.
The researchers also note that users who received the word feedback felt, overall, that it helped them improve their delivery more than those who received continuous feedback or no feedback at all.
"We wanted to check if the speaker looking at the feedback appearing on the glasses would be distracting to the audience," said Ehsan Hoque, assistant professor of computer science and senior author of the paper.
"We also wanted the audience to rate if the person appeared spontaneous, paused too much, used too many filler words and maintained good eye contact under the three conditions: word feedback, continuous feedback, and no feedback," said Hoque.
However, there was no statistically significant difference among the three groups in eye contact, use of filler words, distraction, or apparent stiffness.