How do you measure emotions in the first place to compare the Outputs and come up with a number?
Our internal measures run between 65% and 100% accuracy, depending on the emotion and the subject. Excitement is the most solid: it is quite easy to measure objectively, and we estimate it is over 80% accurate for over 80% of people. Engagement/Boredom is also pretty good, and again it is not hard to measure objectively. Frustration is OK but gets confused with anger a little, because the two frequently occurred together in our training data. Meditation is really good for those in our testing group who could genuinely meditate. Those are the four basic emotions we measure: Excitement, Engagement/Boredom, Frustration and Meditation.
The output for each emotion is a floating-point number between zero and one. We self-scale the output based on each individual user's historical patterns, so the system takes a few hours to settle down for a given subject. Self-scaling provides a useful within-subject scale but makes it very difficult to compare across subjects.
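Emotiv has not published the exact algorithm, but the idea of self-scaling can be sketched as a running min/max normalizer over one user's history. Everything below (class name, the mid-scale default, using the full history rather than a sliding window) is illustrative, not the actual implementation:

```python
class SelfScaler:
    """Maps a raw detection score onto 0..1 relative to the range of
    scores seen so far for this one user. Illustrative sketch only --
    not Emotiv's actual self-scaling algorithm."""

    def __init__(self):
        self.lo = None  # lowest raw score seen for this user so far
        self.hi = None  # highest raw score seen for this user so far

    def update(self, raw):
        # Expand the observed range if needed, then rescale into [0, 1].
        self.lo = raw if self.lo is None else min(self.lo, raw)
        self.hi = raw if self.hi is None else max(self.hi, raw)
        if self.hi == self.lo:
            return 0.5  # no spread observed yet: return mid-scale
        return (raw - self.lo) / (self.hi - self.lo)


scaler = SelfScaler()
scaler.update(2.0)  # first sample, no spread yet -> 0.5
scaler.update(4.0)  # new maximum -> 1.0
scaler.update(3.0)  # midway between the extremes seen -> 0.5
```

This also shows why cross-subject comparison is hard: a 0.5 for one user and a 0.5 for another correspond to different raw signal levels, because each user's scale is anchored to their own history.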
The EPOC+ collects EEG biosignals from 14 sensors around the head, plus 2 reference channels in CMS/DRL configuration. They are all EEG sensors.
Performance Metrics detections are the most heavily filtered and the most likely to be shut down temporarily by excess noise. We resisted the temptation to feed facial expression information into the emotional detections. Lots of frowns would be associated with higher frustration levels, for example, and we could have made our life much easier by including frown rate as an input variable for our Frustration detection. We avoided this because not everyone reacts externally, some people cannot frown at all due to paralysis, and people could fool the detection by deliberately frowning a lot. So we rely exclusively on brain signals for Performance Metrics, and mostly on brain signals for Mental Commands (100% if you are an expert and want the best control); Facial Expressions, though, are all about the muscles!