New York Tech Journal
Tech news from the Big Apple

Express Yourself: Extracting #emotional analytics from #speech

Posted on March 7th, 2016

#HUICentral

03/07/2016 @WeWork, 69 Charlton St, NY


Yuval Mor & Bianca Meger @BeyondVerbal talked about the potential applications for their product. BeyondVerbal produces software, including their Moodies smartphone app, that assesses one’s emotional state through the intonation of one’s speech.

They take 13-second vocalizations (excluding pauses) and report the speaker’s emotional state as one of 432 combined #emotions: 12 basic emotions, which can appear in pairs (12 x 12), times 3 levels of energy (pushing out/neutral/pulling). They also monitor 3 indices: arousal, valence (positive/negative), and temperament (somber/self-controlled/confrontational).
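
To make the combinatorics concrete, here is a minimal Python sketch of how those 432 combined states arise. The emotion labels below are placeholders, since BeyondVerbal has not published its exact taxonomy.

    from itertools import product

    # Placeholder labels -- BeyondVerbal has not published its exact taxonomy.
    BASIC_EMOTIONS = ["emotion_%d" % i for i in range(1, 13)]  # 12 basic emotions
    ENERGY_LEVELS = ["pushing_out", "neutral", "pulling"]      # 3 energy levels

    # Every ordered pairing of two basic emotions crossed with an energy
    # level: 12 x 12 x 3 = 432 combined emotional states.
    combined_states = list(product(BASIC_EMOTIONS, BASIC_EMOTIONS, ENERGY_LEVELS))
    assert len(combined_states) == 432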

The software can be tricked by actors (and politicians) who are proficient at projecting the emotions of the characters they play. They do not perform speaker separation, but they are resilient to some types of background noise. Speech that has passed through voice compression can be difficult to analyze, since various frequencies are removed; however, they have improved their ability to analyze YouTube clips. They said there were differences in diagnostic ability between phonetic and tonal languages, but many characteristics appear to be cross-cultural.

They claim to measure 100 different acoustic features, but did not provide citations to academic research. Their validation appeared to be primarily internal, with a team of psychologists evaluating spoken words.
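
For readers curious what such acoustic features might look like in practice, here is a short, purely illustrative Python sketch using the open-source librosa library to extract a few common prosodic measures (pitch, energy, spectral shape). BeyondVerbal’s actual feature set is proprietary and undisclosed, so nothing here should be read as their method.

    import numpy as np
    import librosa

    def intonation_features(path):
        """Extract a few common prosodic features from an audio clip.
        Illustrative only: BeyondVerbal's actual ~100 features are
        proprietary, so this feature set is an assumption."""
        y, sr = librosa.load(path, sr=16000)

        # Fundamental frequency (pitch) contour via probabilistic YIN.
        f0, voiced, _ = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))
        f0 = f0[voiced]  # keep voiced frames only

        # Short-term energy and a coarse summary of spectral shape.
        rms = librosa.feature.rms(y=y)[0]
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

        return {
            "pitch_mean": float(np.mean(f0)),
            "pitch_range": float(np.max(f0) - np.min(f0)),
            "energy_mean": float(rms.mean()),
            "mfcc_means": mfcc.mean(axis=1).tolist(),
        }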

One potential application is predicting the onset of a heart attack based on changes in one’s voice relative to a prior baseline. They are currently conducting this research on 100 patients at the Mayo Clinic.
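
In principle, that baseline comparison could be as simple as scoring how far a recording’s voice features drift from a patient’s stored norm. The Python sketch below is entirely hypothetical; the methodology of the Mayo Clinic study has not been published.

    import numpy as np

    def deviation_from_baseline(current, baseline_mean, baseline_std):
        """Score how far today's voice features drift from a personal
        baseline. A hypothetical sketch, not the study's actual method."""
        z = (np.asarray(current) - baseline_mean) / (baseline_std + 1e-9)
        return float(np.sqrt(np.mean(z ** 2)))  # RMS of per-feature z-scores

    # Example: flag a recording that drifts well outside the baseline.
    baseline_mean = np.array([180.0, 45.0, 0.031])  # e.g. pitch mean/range, energy
    baseline_std = np.array([12.0, 8.0, 0.004])
    today = np.array([210.0, 70.0, 0.040])
    if deviation_from_baseline(today, baseline_mean, baseline_std) > 2.0:
        print("voice deviates markedly from baseline")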


posted in: HUI Central, Natural User Interface, NUI Central, UX