Journalist Malcolm Gladwell devoted a recent episode of his podcast, “Revisionist History,” to his theory of “hiring nihilism.” According to Gladwell, people are so bad at predicting who will perform well in a given role – especially based on traditional selection criteria such as resumes and candidate interviews – that one might as well admit that all hiring is essentially arbitrary. When it was time to find a new assistant or hire an accountant, Gladwell explained, he did so in an explicitly arbitrary fashion: choosing the person an acquaintance recommended, or someone he met on the street, after only the most superficial face-to-face conversation. Why waste time on a process that ultimately produces no better result than throwing darts?
For decades, a segment of the tech industry has been built on accepting Gladwell’s premise – that humans are terrible at predicting job performance – while flatly rejecting his recourse to nihilism. Instead, these tech companies claim that with better screening tools (which, not coincidentally, these same companies sell), the problem can be fixed. Increasingly, artificial intelligence is part of what they sell.
Artificial intelligence offers the promise that there is a hidden constellation of data, too complex or too subtle for recruiters or hiring managers to discern, that can predict which candidate will excel in a given role. In theory, the technology offers companies the prospect of radically expanding the diversity of their candidate pools. In practice, however, critics warn, such software is likely to reinforce existing prejudices, making it harder to hire women, Black candidates, and others from non-traditional backgrounds. Worse yet, it may cloak a process that remains as fundamentally arbitrary and biased as Gladwell contends in a veneer of pseudoscience.
HireVue is one of the leading companies in the “hiretech” field – its software lets companies record videos of candidates answering a standard set of interview questions, then sort candidates based on those answers – and it has been a target of exactly this criticism. In 2019, the non-profit Electronic Privacy Information Center filed a complaint against the company with the Federal Trade Commission, alleging that HireVue’s use of AI to assess job applicants’ video interviews constituted “unfair and deceptive business practices.” The company says it has done nothing illegal. But, in part in response to the criticism, HireVue announced last year that it had stopped using a candidate’s facial expressions in video interviews as a factor in its algorithms.
Last week, the company also released the results of a third-party audit of its algorithms. The audit largely gave HireVue high marks for its efforts to eliminate potential bias in its AI systems. But it also flagged several areas where the company could do more. For example, it suggested the company study potential bias in how the system assesses candidates with different accents. The audit also found that minority applicants are more likely to give very short answers to questions – one-word responses or statements like “I don’t know” – which the system struggles to score, causing those candidates’ interviews to be disproportionately flagged for human reviewers.
Lindsey Zuloaga, the company’s chief data scientist, told me that the most important factor in predicting whether a candidate would succeed was the content of their answers to interview questions. Nonverbal data added little predictive power beyond the content of a candidate’s responses – in most cases, it contributed about 0.25% to a model’s predictive power, she says. Even when assessing candidates for roles with heavy customer interaction, nonverbal attributes contributed only 4% to the model’s predictive accuracy. “When you put that in the context of the concerns people had [about potential bias], it wasn’t worth the incremental value we could have derived from it,” said Kevin Parker, HireVue’s CEO.
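To make a figure like that concrete: one standard way to estimate how much a group of features adds to a model’s predictive power is an ablation – train the model with and without the feature group and compare a held-out metric such as ROC AUC. The sketch below illustrates that general technique on synthetic data; it is not HireVue’s actual method, models, or features.

```python
# Feature-group ablation sketch: compare held-out AUC with and without a
# "nonverbal" feature group. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
content = rng.normal(size=(n, 5))    # stand-in for answer-content features
nonverbal = rng.normal(size=(n, 3))  # stand-in for nonverbal features
# Outcome driven almost entirely by content, echoing the article's claim.
y = (content @ np.ones(5) + 0.05 * nonverbal[:, 0]
     + rng.normal(size=n) > 0).astype(int)

def heldout_auc(features, labels):
    """Train a simple classifier and return its AUC on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

full = heldout_auc(np.hstack([content, nonverbal]), y)
content_only = heldout_auc(content, y)
print(f"AUC with all features: {full:.3f}")
print(f"AUC with content only: {content_only:.3f}")
print(f"Incremental lift from nonverbal features: {full - content_only:.3f}")
```

On data like this, the incremental lift is tiny – the same shape of result Zuloaga describes.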
Parker says the company “always looks for bias in the data that goes into the model” and that it has a policy of rejecting datasets if using them produces disparities in outcomes between groups based on factors such as race, sex, or age. He also notes that only around 20% of HireVue customers currently choose to use the software’s predictive-analytics feature – the rest rely on humans to review candidate videos – though he says adoption is growing.
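For context, one common industry test for that kind of outcome disparity – a sketch of the general practice, not necessarily HireVue’s own check – is the EEOC “four-fifths rule”: a screening step shows adverse impact if any group’s selection rate falls below 80% of the highest group’s rate.

```python
# Minimal sketch of a disparate-impact check using the EEOC four-fifths rule.
# The data and group labels below are hypothetical illustrations.
from collections import defaultdict

def selection_rates(outcomes):
    """Map each group to the fraction of its candidates who passed screening."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, did_pass in outcomes:
        totals[group] += 1
        passed[group] += int(did_pass)
    return {g: passed[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Return (no_adverse_impact, per-group rates) under the four-fifths rule."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values()), rates

# Hypothetical screening outcomes: (group label, passed screening?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ok, rates = four_fifths_check(sample)
print(rates, "– passes four-fifths rule:", ok)
```

A dataset that fails a check like this would, under the policy Parker describes, be rejected before it ever trains a model.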
HireVue’s audit was carried out by O’Neil Risk Consulting and Algorithmic Auditing (ORCAA), a firm founded by Cathy O’Neil, the mathematician best known for her 2016 book on algorithmic decision-making, Weapons of Math Destruction. ORCAA is one of a handful of firms increasingly specializing in this kind of assessment.
Zuloaga says she was struck by how thoroughly ORCAA’s auditors sought input from the different kinds of people affected by HireVue’s algorithms – from job seekers themselves, to the clients using the software, to the data scientists who help build its predictive models. One takeaway from the audit, she says, is that some groups of candidates may be more comfortable than others with the very idea of being interviewed by software and having that interview assessed by a machine – a hidden selection bias that could therefore be baked into all of HireVue’s data today.
ORCAA recommended HireVue do more to tell candidates exactly what the interview process will involve and how their responses will be assessed. Zuloaga says HireVue is also learning that minority applicants may need more explicit encouragement from the software to continue through the interview process. She and Parker say the company is looking for ways to provide it.
HireVue is among the first companies to hire a third party to conduct an algorithmic bias audit. And while PR damage control may have been part of the motivation – “it’s a continuation of our focus on transparency,” Parker insists – it makes the company a trailblazer. As more companies adopt AI, such audits are likely to become more common. At the very least, the audit shows HireVue thinking seriously about the ethics of AI and its biases, and appearing sincere in seeking to address them. That is an example other companies should follow. It is also worth remembering that the alternative to using technologies like HireVue’s is not some utopia of rationality, empiricism, and fairness – it is Gladwell’s hiring nihilism.
And with that, here’s the rest of this week’s AI news.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
***
Society’s continued reckoning with systemic racism underscores the importance companies should place on responsible AI. Executives are grappling with thorny questions about accountability and bias, exploring best practices for their businesses, and learning how to set effective industry guidelines for the technology’s use. Join us for our second interactive conversation with the Fortune Brainstorm AI community, presented by Accenture, on Tuesday, January 26, 2021, from 1:00 p.m. to 2:00 p.m. ET.