Suppose you are a job seeker who has a good idea of what employers want to hear. Like many companies today, your prospective employer gives you a personality test as part of the hiring process. You plan to give answers that show you are enthusiastic, hardworking and a real people person.
Then they put you on camera while you stumble over your words and frown slightly during one of your answers, and their facial-analysis program decides you are "difficult."
Sorry, next, please!
This is just one of many issues related to the growing use of artificial intelligence in hiring, according to the new documentary "Persona: The Dark Truth Behind Personality Tests," which premieres Thursday on HBO Max.
The film, directed by Tim Travers Hawkins, begins with the origins of the Myers-Briggs Type Indicator personality test. Created in the mid-20th century by a mother-daughter team, it sorts people according to four factors: introversion/extraversion, sensing/intuition, thinking/feeling, and judging/perceiving. The questionnaire, which has an astrology-like cult following for its 16 four-letter types, has become a recruitment tool used throughout corporate America, along with successors such as the "Big Five," which measures five major personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism.
"Persona" argues that the written tests contain certain baked-in biases; for example, they have the potential to discriminate against those unfamiliar with the kind of language or scenarios the tests use.
And, according to the film, incorporating artificial intelligence into the process makes things even more problematic.
The technology scans written applications for red-flag words and, when it comes to an on-camera interview, examines applicants for facial expressions that may contradict their answers.
"[It] operates with the 19th-century pseudoscientific reasoning that emotions and character can be standardized from facial expressions," Ifeoma Ajunwa, an associate professor of law at the University of North Carolina School of Law and director of its AI Decision-Making Research Program, told The Post by email.
Ajunwa, who appears in the film, says the potential for bias is huge. "Since automated systems are often trained on white male faces and voices, the facial expressions or vocal tones of women and racial minorities may be misjudged. In addition, there is the privacy concern arising from the collection of biometric data."
HireVue, a widely used recruitment-technology company, analyzed candidates' facial movements, word choice and voice before ranking them against other applicants with an automatically generated "employability" score. The company announced last month that it has stopped the practice.
While HireVue claimed that "visual analysis no longer added significant value to the assessments," the move followed an outcry over the technology's potentially harmful effects.
Cathy O'Neil is a data science consultant, author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy," and one of the experts interviewed in "Persona." Her company, O'Neil Risk Consulting & Algorithmic Auditing (ORCAA), audited HireVue's practices following its announcement.
"No technology is inherently harmful; it's just a tool," she told The Post by email. "But just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities. This is particularly true because people often assume that technology is objective and even perfect. If we have blind faith in something deeply complex and deeply opaque, that is always a mistake."
In recent years there have been a number of legislative efforts around the use of facial-recognition algorithms, but New York City is the first to introduce a bill that would specifically regulate their use in the hiring process. It would require companies to disclose to applicants that the technology is being used, and to conduct an annual bias audit.
But Ajunwa believes this does not go far enough. The bill, she said, is "a necessary first step in preserving the civil liberties of workers," but "what we need are federal regulations that adhere to federal anti-discrimination laws and that would apply to all states, not just New York City."
For those who knew Isabel Briggs Myers, seeing the test, now paired with artificial intelligence, used to mercilessly determine whether people are "hireable" seems a long way from her original intention, which was to help users find their true calling.
As one of Briggs Myers' granddaughters says in the film, "I think there are ways it is being used that she would want to correct."