Sunday, February 16, 2020

Automated Law: Who Ought to Have the Right to Authoritatively Misread Emotion?

(Elizabeth and Jane Bennet | © Pride and Prejudice (1995)/BBC Productions)

I was intrigued by a recent story that appeared in several media outlets (in this case the online version of an enterprise that had once operated as a newspaper) and that was written for two purposes (Hannah Devlin, AI systems claiming to 'read' emotions pose discrimination risks, The Guardian (UK) 16 February 2020).  The first was to alert/alarm its readers to facial recognition technology's apparent ability to read emotion through machine learning (artificial intelligence) programs created for that purpose. The second was to suggest (again in terms that accord with the hierarchy of discursive taboos now current in this society) that such abilities are both overstated and likely to violate contemporary core taboos as they are currently understood.

The article is necessarily, and quite correctly (given the tenor of the times and the trajectories of contemporary social change), focused on the bias consequences of machine-programmed error.  Yet the baseline issue raised by the article, as representative of the quite influential school of critique of which it forms a part, is not so much about error rates in facial recognition of emotion as it is about the willingness of society to allocate to some an authority to make judgments about other humans, judgments understood as likely to be based on error.  The error-making machine merely shines a spotlight on the more fundamental issue of the human decision-maker who is permitted to act on error, and whose proclivity to error is merely ceded to a machine. The issue of who in our political society ought to be vested with the power to authoritatively base decisions on such error with impunity remains under-explored and unresolved. In the face of the coming of the machine, that question may now need to move to center stage.

This post includes the relevant part of that reporting along with brief reflections, with a very great nod to Jane Austen and her Pride and Prejudice (1813).


The object of the article is to critically consider the warning made by Lisa Feldman Barrett, professor of psychology at Northeastern University and "one of the world’s leading experts on the psychology of emotion," that the claims made by companies that "Artificial Intelligence (AI) systems . . . can “read” facial expressions is based on outdated science and risks being unreliable and discriminatory." (AI systems claiming to 'read' emotions pose discrimination risks).

Her argument is that
such technologies appear to disregard a growing body of evidence undermining the notion that the basic facial expressions are universal across cultures. As a result, such technologies – some of which are already being deployed in real-world settings – run the risk of being unreliable or discriminatory, she said. (Ibid.).

And, indeed, there is research to support the warning (here, here, here, and here). Of course, these findings in turn produce a greater difficulty, even where an algorithm can be programmed to distinguish among cultures.  To assign a face to a culture might require assuming an identity between physical type and culture.  And yet it is well known that this will produce error (e.g., among those raised in a culture but not exhibiting the "racial" or "ethnic" characteristics identified as the proxy for culture).
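A deliberately simplified sketch may make that proxy problem concrete. The mapping tables and model behavior below are entirely hypothetical (no actual system is being described); the point is only that each inferential step--from appearance to presumed culture to presumed emotion--imports its own source of error:

# Hypothetical sketch of the proxy assumption criticized above: the system
# infers a "physical type" from the face, maps it to a presumed culture, and
# then applies a culture-specific emotion model. Every arrow in that chain
# can be wrong, and the first one is wrong by construction for anyone whose
# upbringing does not match their appearance.

PRESUMED_CULTURE = {            # hypothetical lookup table
    "type_a": "culture_a",
    "type_b": "culture_b",
}

EMOTION_MODELS = {              # hypothetical per-culture classifiers
    "culture_a": lambda face: "anger" if face["brow_lowered"] else "calm",
    "culture_b": lambda face: "concentration" if face["brow_lowered"] else "calm",
}

def read_emotion(face: dict) -> str:
    physical_type = face["inferred_type"]          # error source 1: appearance inferred from pixels
    culture = PRESUMED_CULTURE[physical_type]      # error source 2: appearance is not upbringing
    return EMOTION_MODELS[culture](face)           # error source 3: expression is not feeling

# A person raised in culture_b but classified as type_a gets culture_a's
# reading: the same lowered brow becomes "anger" instead of "concentration".
face = {"inferred_type": "type_a", "brow_lowered": True}
print(read_emotion(face))   # -> "anger", regardless of what the person actually feels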

The article notes the human consequences where such propensity toward error is known.
“I don’t know how companies can continue to justify what they’re doing when it’s really clear what the evidence is,” she said. “There are some companies that just continue to claim things that can’t possibly be true.” Her warning comes as such systems are being rolled out for a growing number of applications. In October, Unilever claimed that it had saved 100,000 hours of human recruitment time last year by deploying such software to analyse video interviews. * * * Amazon claims its own facial recognition system, Rekognition, can detect seven basic emotions – happiness, sadness, anger, surprise, disgust, calmness and confusion. The EU is reported to be trialling software which purportedly can detect deception through an analysis of micro-expressions in an attempt to bolster border security.  (Ibid.).
In effect, the article suggests that there is money to be made in substituting machine for human facial and body language recognition systems, and in relying on algorithms to give the sum of those indicators meaning within the context of the space in which the facial and body language recognition data is harvested. More importantly, it argues that because the rate and character of false or erroneous analytics (note that the data harvested remains robust over both false and true analytic judgments) is known, and is known to be greater than zero, such data-driven analytics--especially where the judgment, like the analytics and data harvesting, is undertaken without human intervention (save for the coding and the construction of the analytics and algorithms)--ought to be abandoned until they can be made more accurate.
“Based on the published scientific evidence, our judgment is that [these technologies] shouldn’t be rolled out and used to make consequential decisions about people’s lives,” said Feldman Barrett. * * * “AI is largely being trained on the assumption that everyone expresses emotion in the same way,” she said. “There’s very powerful technology being used to answer very simplistic questions.” (Ibid).
It is hard to argue with the thrust of the reporting or the facts on which it is based.  Indeed, the judgment is plausible--that is, where an automated system appears to produce error grounded in false or incorrect assumptions about the meaning of data, it should not be used until known faults are corrected.
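To make the mechanics less abstract, the following is a minimal sketch of what harvesting such a judgment looks like in practice, using the boto3 client for Amazon Rekognition (the service named in the article). It assumes AWS credentials are already configured; the image filename and the final "accept the top score" rule are hypothetical assumptions about a downstream system, not anything the vendor prescribes. The point is to show where the data ends and the authoritative judgment begins:

import boto3

# Sketch of harvesting an "emotion" judgment from a face image with Amazon
# Rekognition. The data (the image and the per-emotion confidence scores) is
# returned regardless of whether the final label is right; the error enters
# when a downstream rule treats the top score as an authoritative reading of
# what the person actually feels.
client = boto3.client("rekognition")

with open("candidate_interview_frame.jpg", "rb") as f:   # hypothetical frame
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],          # includes the Emotions field
    )

for face in response["FaceDetails"]:
    emotions = sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True)
    top = emotions[0]
    # Assumed downstream rule: accept the top-scoring emotion as "the" emotion.
    # This is the step the critique objects to, not the harvesting itself.
    print(f"Labeled {top['Type']} at {top['Confidence']:.1f}% confidence")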

But that conclusion really only serves as the beginning of what should be a more profound analysis. That analysis must start with an equally certain conclusion--that human beings are profoundly incapable of reading body language or facial expression with any consistent and robust degree of accuracy. Moreover, the error rate increases across cultural differences, and likely across differences of social class, gender, and education. Even more likely is the effect of the socio-cultural bias of the reader when confronted with another individual who must be read.

And yet, the thrust of the article is that because of the known error rates of AI emotion reading it must be abandoned in favor of the conventional, but also unreliable, person-to-person facial and body language recognition "systems."

However, as one reads through the article it is possible to generate a very different set of insights, from which a very different set of conclusions might be drawn. Equally disquieting, perhaps, is that these insights ought to produce substantial discomfort for those intent on criticizing the failure rate of AI facial recognition systems in accurately determining emotion while feeling no deep disquiet over the very same propensity to error that humans (without the aid of machines) are prone to exhibit. In effect, then, the article does not so much suggest that AI facial recognition systems must be abandoned as it serves to advance the argument that human-to-human facial and body language recognition must continue to be treated as authoritative (or at least not be subject to the same warnings as those made for the error-producing machine-based systems) even where its sometimes substantial likelihood of error can be rationalized, catalogued, and likely measured with some degree of confidence.

That insight--that human abilities respecting the identification of emotions have long been recognized, and recognized as subject to error, but still treated as authoritative because the judgment is made by those with the power to judge--is deeply culturally embedded. Perhaps the best-known contemporary description of that insight (popularized by a number of movie and television versions of the original book) comes from the pages of Jane Austen's Pride and Prejudice.  The scene takes place immediately after Mr. Darcy's rejected proposal to Elizabeth Bennet, in which Austen reproduces the letter sent by Mr. Darcy to Elizabeth refuting or explaining the accusations she had made against him as she rejected his marriage proposal. Of particular note was the charge that he had no cause to break up the reciprocal romance between Mr. Bingley and her sister.  His response lays out substantially all of the arguments that have now found their way into the debate about facial recognition and emotion detection.
“I had not been long in Hertfordshire, before I saw, in common with others, that Bingley preferred your elder sister to any other young woman in the country. But it was not till the evening of the dance at Netherfield that I had any apprehension of his feeling a serious attachment. I had often seen him in love before. At that ball, while I had the honour of dancing with you, I was first made acquainted, by Sir William Lucas’s accidental information, that Bingley’s attentions to your sister had given rise to a general expectation of their marriage. He spoke of it as a certain event, of which the time alone could be undecided. From that moment I observed my friend’s behaviour attentively; and I could then perceive that his partiality for Miss Bennet was beyond what I had ever witnessed in him. Your sister I also watched. Her look and manners were open, cheerful, and engaging as ever, but without any symptom of peculiar regard, and I remained convinced from the evening’s scrutiny, that though she received his attentions with pleasure, she did not invite them by any participation of sentiment. If you have not been mistaken here, I must have been in error. Your superior knowledge of your sister must make the latter probable. If it be so, if I have been misled by such error to inflict pain on her, your resentment has not been unreasonable. But I shall not scruple to assert, that the serenity of your sister’s countenance and air was such as might have given the most acute observer a conviction that, however amiable her temper, her heart was not likely to be easily touched. That I was desirous of believing her indifferent is certain—but I will venture to say that my investigation and decisions are not usually influenced by my hopes or fears. I did not believe her to be indifferent because I wished it; I believed it on impartial conviction, as truly as I wished it in reason. (Pride and Prejudice, chapter 35 (emphasis added) [# 42671 ]).
The baseline issue raised by the article, as representative of the quite influential school of critique of which it forms a part, is not so much about error rates in facial recognition of emotion, but rather about who in our political society ought to be vested with the power to make such errors. What appears to be taken for granted is that such a power ought to exist, and that it ought to be vested in those who have the power to render judgment by reason of the social, economic, cultural, or other position that gives them authority over others.  It is for those over whom that authority is exercised that the consequences of judgment can have the most lasting and significant effect.  Yet it is the human system's substantial toleration for error--and its willingness to treat that error as authoritative--that ought to inform the ongoing discussion of the consequences of ceding that system to machines. That discussion ought not to focus primarily on the machine and its analytics, but rather on the failings of the system that the machine is merely meant to replicate. That latter discussion will prove far more difficult than the one about machines acting like humans.  The real conversation is the one about the consequences of humans acting like humans.

The problem becomes more difficult in the face of the reality that most cultures assume that facial expressions and body language will be read, and that power relationships will determine the consequences of that reading.  More importantly, power relationships will determine the extent to which such readings will be deemed authoritative, or consequential, even when they may be wrong.  The employment interview is a case in point--those with the authority to make employment decisions may base them (and indeed are sometimes trained to base them, in part at least) on reading facial expressions and body language.  Such determinations become authoritative, even when they are wrong.  And there are no mechanisms that would allow the prospective employee to challenge that determination or its use.  The ability to challenge such determinations would be viewed with horror (by employers) as inverting power orders--the way that employers view with horror the idea of 360-degree review.  Thus, what is protected is the right (1) to make factually erroneous determinations about emotions from "reading" faces, and (2) to treat those erroneous determinations as authoritative (that is, as subject neither to disclosure nor to challenge).
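It is worth pausing over what disclosure and challenge could even look like on the machine side. The sketch below is purely illustrative--the field names and values are hypothetical, not a description of any deployed system--but it shows that a machine judgment can at least leave an inspectable record that could be handed to the person judged; the interviewer's silent impression leaves nothing of the kind:

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative record of a single machine "emotion" judgment. Nothing like
# this exists for the interviewer's private sense that a candidate seemed
# "disengaged"; that reading is neither written down nor open to contest.
@dataclass
class EmotionJudgment:
    subject_id: str         # hypothetical identifier for the person judged
    label: str              # e.g. "confusion"
    confidence: float       # model score, 0-100
    model_version: str      # which model produced the reading
    timestamp: str          # when the reading was made

judgment = EmotionJudgment(
    subject_id="applicant-117",
    label="confusion",
    confidence=62.4,
    model_version="emotion-model-2020.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# A disclosed record like this can be audited, handed to the applicant, and
# contested; the human reading it replaces cannot.
print(json.dumps(asdict(judgment), indent=2))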

And that produces the ultimate irony--humans would prefer a system that protects their right to impose erroneous determinations even as the possibility of error is enough to caution against substituting machine judgment.  The irony becomes perverse where that proposition still holds even when the rates of error of machine judgments are smaller than those of human judgments (e.g., NEVER trust a person’s face: Scientists say it is ‘completely baloney’ that you can read people’s emotions from their expressions ("They found that attempts to detect or define emotions based on facial expressions were almost always wrong.")), and where machine judgments are disclosed and subject to challenge but human judgments are not.




__________
AI systems claiming to 'read' emotions pose discrimination risks:
Expert says technology deployed is based on outdated science and therefore is unreliable

Hannah Devlin Science correspondent
@hannahdev

Sun 16 Feb 2020 12.00 EST Last modified on Sun 16 Feb 2020 12.11 EST


Artificial Intelligence (AI) systems that companies claim can “read” facial expressions is based on outdated science and risks being unreliable and discriminatory, one of the world’s leading experts on the psychology of emotion has warned.

Lisa Feldman Barrett, professor of psychology at Northeastern University, said that such technologies appear to disregard a growing body of evidence undermining the notion that the basic facial expressions are universal across cultures. As a result, such technologies – some of which are already being deployed in real-world settings – run the risk of being unreliable or discriminatory, she said.

“I don’t know how companies can continue to justify what they’re doing when it’s really clear what the evidence is,” she said. “There are some companies that just continue to claim things that can’t possibly be true.”

Her warning comes as such systems are being rolled out for a growing number of applications. In October, Unilever claimed that it had saved 100,000 hours of human recruitment time last year by deploying such software to analyse video interviews.

The AI system, developed by the company HireVue, scans candidates’ facial expressions, body language and word choice and cross-references them with traits that are considered to be correlated with job success.

Amazon claims its own facial recognition system, Rekognition, can detect seven basic emotions – happiness, sadness, anger, surprise, disgust, calmness and confusion. The EU is reported to be trialling software which purportedly can detect deception through an analysis of micro-expressions in an attempt to bolster border security.

“Based on the published scientific evidence, our judgment is that [these technologies] shouldn’t be rolled out and used to make consequential decisions about people’s lives,” said Feldman Barrett.

Speaking ahead of a talk at the American Association for the Advancement of Science’s annual meeting in Seattle, Feldman Barrett said the idea of universal facial expressions for happiness, sadness, fear, anger, surprise and disgust had gained traction in the 1960s after an American psychologist, Paul Ekman, conducted research in Papua New Guinea showing that members of an isolated tribe gave similar answers to Americans when asked to match photographs of people displaying facial expressions with different scenarios, such as “Bobby’s dog has died”.

However, a growing body of evidence has shown that beyond these basic stereotypes there is a huge range in how people express emotion, both across and within cultures.

In western cultures, for instance, people have been found to scowl only about 30% of the time when they’re angry, she said, meaning they move their faces in other ways about 70% of the time.

“There is low reliability,” Feldman Barrett said. “And people often scowl when they’re not angry. That’s what we’d call low specificity. People scowl when they’re concentrating really hard, when you tell a bad joke, when they have gas.”

The expression that is supposed to be universal for fear is the supposed stereotype for a threat or anger face in Malaysia, she said. There are also wide variations within cultures in terms of how people express emotions, while context such as body language and who a person is talking to is critical.

“AI is largely being trained on the assumption that everyone expresses emotion in the same way,” she said. “There’s very powerful technology being used to answer very simplistic questions.”

