
Emotion AI: A possible path to thought policing



A recent VentureBeat article referenced Gartner analyst Whit Andrews saying that more and more companies are entering an era where artificial intelligence (AI) is an aspect of every new project. One such AI application uses facial recognition to analyze expressions based on a person’s faceprint to detect their internal emotions or feelings, motivations and attitudes.

Known as emotion AI or affective computing, this application is based on the theory of “basic emotions,” which holds that people everywhere communicate six basic internal emotional states — happiness, surprise, fear, disgust, anger and sadness — using the same facial movements, owing to our shared biological and evolutionary origins.

On the surface, this assumption seems reasonable, as facial expressions are an essential aspect of nonverbal communication.

A recent paper from tech industry analyst firm AIMultiple states that emotion AI is an emerging technology that “enables computers and systems to identify, process, and simulate human feelings and emotions.” It is an interdisciplinary field blending computer science, psychology and cognitive science, intended to help businesses make better decisions, often to improve reliability, consistency and efficiency.
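To make the classification idea concrete, the following is a minimal sketch, assuming a generic machine learning pipeline in Python with scikit-learn: numeric facial-expression features extracted from an image are mapped to one of the six basic-emotion labels. The feature values, training data and RandomForestClassifier here are illustrative placeholders, not the implementation of any actual emotion AI product.

```python
# Hypothetical sketch of the "basic emotions" classification idea: map numeric
# facial-expression features (e.g., landmark distances or action-unit intensities)
# to one of six emotion labels. All data below is synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BASIC_EMOTIONS = ["happiness", "surprise", "fear", "disgust", "anger", "sadness"]

rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(600, 10))     # placeholder per-face expression features
y_train = rng.integers(0, 6, size=600)   # placeholder emotion labels (0-5)

# Train a simple classifier on the feature vectors.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_face = rng.normal(size=(1, 10))      # features extracted from a new face image
label = BASIC_EMOTIONS[int(clf.predict(new_face)[0])]
print(f"Predicted basic emotion: {label}")
```

Commercial systems typically substitute deep neural networks trained on large labeled face datasets, but the critique discussed below applies regardless of the model: the label set itself presumes that facial configurations map reliably onto internal states.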


How emotion AI is being utilized

Among its current uses, emotion AI software is widely deployed to score video interviews with job candidates for characteristics such as “enthusiasm,” “willingness to learn,” “conscientiousness and responsibility” and “personal stability.” The software is also used by border guards to detect threats at border checkpoints, as an aid in detecting and diagnosing mood disorders in patients, to monitor classrooms for boredom or disruption, and to monitor human behavior during video calls.

The use of such technology is growing in popularity. In South Korea, for example, the use of emotion AI has become so common in job interviews that job coaches often make their clients practice going through AI interviews. Startup EmotionTrac markets software for lawyers to analyze expressions in real time and figure out which arguments will land with potential jurors. Researchers at Tel Aviv University developed a technique to detect lies through facial muscle analysis, claiming 73% accuracy. Apple has been granted a patent for “modifying operation of an intelligent agent in response to facial expressions and/or emotions.”

Emotion AI is based on pseudoscience

However, emotion AI is rife with ambiguity and controversy, not least because researchers have determined that facial expressions vary widely between contexts and cultures. There is considerable evidence that facial movements vary too widely to be consistent signals of emotional meaning, and some argue that the supposedly universal expressions upon which recognition systems are built simply represent cultural stereotypes. Moreover, a growing body of research holds that the science underlying emotion detection is wrong, finding insufficient evidence to support the thesis that facial configurations accurately, reliably and specifically reflect emotional states.

Quoting Sandra Wachter, futurist Tracey Follows tweeted that the technology has “at its best no proven basis in science and at its worst is absolute pseudoscience.”

AI ethics scholar Kate Crawford goes a step further, concluding there is no good evidence that facial expressions reveal a person’s feelings. Thus, decisions taken based on emotion AI are fraught with uncertainty.

This concern is causing at least some companies to pull back from developing or deploying emotion AI. Microsoft recently updated its Responsible AI Standard, the framework that guides how it builds AI systems to ensure more beneficial and equitable outcomes and to foster trustworthy AI. One outcome of its internal review of AI products and services under this framework is the “retiring” of capabilities within Azure Face “that infer emotional states and identity attributes.” According to the company, the decision was based on a lack of expert consensus on how to infer emotions from appearance, especially across demographics and use cases, as well as on privacy concerns. In short, the company is demonstrating responsible use of AI, or at least how to avoid potentially harmful impacts from the technology.

Even with these evident concerns, the market for emotion AI is surging, forecast to grow at a compound annual growth rate of 12% through 2028, and venture capital continues to flow into the field. For example, Uniphore, a company that currently offers software incorporating emotion AI, recently closed a $400 million Series E funding round at a valuation of $2.5 billion.

Pandora’s box

Businesses have been using similar emotion AI technology to improve productivity for several years. An Insider article reported that employers in China use “emotional surveillance technology” to modify workflows, including employee placement and breaks, to increase productivity and profits.

It is not only businesses that are interested in this technology. According to recently published reports, the Institute of Artificial Intelligence at Hefei Comprehensive National Science Center in China created an AI program that reads facial expressions and brain waves to “discern the level of acceptance for ideological and political education.” Test subjects were shown videos about the ruling party while the AI program collected and processed the data. It then returned a score indicating whether the subject needed more political education and assessed whether they were sufficiently loyal. According to a report in The Telegraph, the scoring included the subject’s “determination to be grateful to the party, listen to the party and follow the party.”

Every wave of innovation creates winners and losers and brings elements that can harm segments of the population. In the case of emotion AI, many of the uses combine intrusive surveillance with Taylorism (the practice of minutely measuring and managing workers to maximize efficiency), a questionable mixture. Moreover, the field is built upon a shaky and likely false scientific premise. Nevertheless, the application of emotion AI remains unfettered except by public opinion, since AI uses are largely unregulated around the world.

Neuroscience News asks the relevant question of whether we would want such intimate surveillance in our lives even if emotion AI could be engineered to read everyone’s feelings accurately. The question goes to the central issue of privacy. While there may be positive use cases for emotion AI, assuming it were based on valid science, the technology nevertheless presents a slippery slope that could lead toward an Orwellian Thought Police.

Gary Grossman is the senior VP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



Author: Gary Grossman, Edelman
Source: VentureBeat
