This year, emotion recognition seemed to be everywhere at CES, especially in the car. A huge deal was made of Kia’s emotion recognition announcement, but many other companies were showing off similar capabilities. Qualcomm, Jungo, Valeo, Nuance, Hyundai Mobis, and Toyota Boshoku all had technology that recognizes facial expressions (sometimes augmented with IR heat maps and biometric data) to determine people’s emotional states. As a UX innovation leader, we’re always on the lookout for new technology, and we’ve been experimenting with and demoing emotion recognition ourselves. One thing we find ourselves asking about emotion recognition – as we do with all new technology – is what value it brings to the user.
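For a sense of what the facial-expression half of these systems is doing, here is a minimal sketch using the open-source `fer` Python package with OpenCV. It is purely illustrative: the file name is a placeholder, and none of the vendors above have disclosed their actual pipelines, which fuse far more signals than a single camera frame.

```python
# Minimal facial-expression classification on one cabin-camera frame.
# Assumes: pip install fer opencv-python tensorflow; "driver_frame.jpg" is a placeholder.
import cv2
from fer import FER

detector = FER(mtcnn=True)              # MTCNN face detector + pretrained expression model
frame = cv2.imread("driver_frame.jpg")  # hypothetical cabin-camera frame

for face in detector.detect_emotions(frame):
    # `emotions` maps angry/disgust/fear/happy/sad/surprise/neutral to confidence scores
    scores = face["emotions"]
    dominant = max(scores, key=scores.get)
    print(f"face at {face['box']}: {dominant} ({scores[dominant]:.2f})")
```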

We seem to be in the “inflated expectations” part of the hype cycle for emotion recognition, and CES did not present a truly useful use case. The demos we saw tended to focus on recognizing “bad” emotions like fear, anger, sadness, and boredom, and then “correcting” those states. Much of the incentive for increasing a driver’s happiness quotient comes from naturalistic studies that show a crash risk 10 times greater for agitated drivers than for their emotionally neutral counterparts. Boredom, while less firmly established as a risk factor, is similarly assumed to be associated with daydreaming or distraction, and hence with an increased crash risk.

Human emotion is complicated, and many deeply individual factors alter someone’s mood. Perhaps people aren’t looking for their cars to tell them to breathe deeply or to smooth over their relationship woes. Should we even assume that the car can bear the responsibility, or has the agency, to alter people’s moods in the first place? Maybe it’s okay to have “socially unacceptable” emotions like anger, irritation, or sadness within the private confines of the vehicle – especially if we have ADAS features that can ensure safe driving at all times.

Another thing to consider when evaluating the usefulness of emotion recognition and mitigation is whether trying to moderate people’s behavior could have unintended consequences. While we can’t foresee the full social impact, it’s easy to imagine worst-case scenarios. Cars that slow down and play soft music in an attempt to calm angry drivers may only make them more agitated, and the effect could escalate as more and more cars slow down, leaving a road full of collectively enraged occupants.

So instead of trying to alter emotional states, maybe the right strategy is to complement them. When we’re angry, friends often share our anger, helping us blow off steam. When we’re sad, good friends don’t invalidate our feelings or try to talk us out of being sad; they listen and commiserate. Perhaps we need an emotionally intelligent car that pays attention to our behavior and, instead of reading our emotional state in an attempt to “manipulate” us, learns what we like to do when we’re sad, angry, happy, or bored, and supports us.
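If one wanted to prototype that “complement, don’t correct” idea, the simplest version is a per-emotion preference model: count what the occupant actually chooses to do in each state, and offer the most frequent choice next time. The sketch below is a deliberately naive, hypothetical illustration; every name in it is ours, not any vendor’s.

```python
from collections import Counter, defaultdict
from typing import Optional

class EmotionPreferenceModel:
    """Learns, per emotional state, which in-car actions an occupant tends to choose."""

    def __init__(self) -> None:
        self._counts = defaultdict(Counter)  # emotion -> Counter of chosen actions

    def observe(self, emotion: str, chosen_action: str) -> None:
        """Record what the occupant actually did while in a given emotional state."""
        self._counts[emotion][chosen_action] += 1

    def suggest(self, emotion: str) -> Optional[str]:
        """Offer the action most often chosen in this state, if there is any history."""
        history = self._counts[emotion]
        return history.most_common(1)[0][0] if history else None

# The car doesn't try to "fix" anger; it offers what this particular driver usually wants.
model = EmotionPreferenceModel()
model.observe("angry", "play loud music")
model.observe("angry", "play loud music")
model.observe("sad", "call a friend")
print(model.suggest("angry"))  # -> play loud music
print(model.suggest("bored"))  # -> None (no history yet)
```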

The Lovot @ CES 2019

This isn’t to say that emotion recognition and mood alteration are always bad; there may be times when they are good. An autonomous car that picks up a child would be all the more valuable if it could calm an upset or frightened passenger.

This makes me think that the Lovot companion robot may actually have an auto use case after all. This cute, cuddly robot was another CES hit, one that learns its owner’s behavior and bonds with them over time. Perhaps the best we can do with emotion recognition is to develop ways to create stronger bonds with our cars.

Jacek Spiewla, Sr. Manager, User Experience

Jacek holds a Master’s in Human-Computer Interaction from the University of Michigan, and has a deep background in speech/audio processing technology, as well as voice user interface design. He is responsible for strategic planning activities and coordinating UX projects.