Auditing bias in algorithmic talent assessments

Newer digital strategies and platforms, supported by AI, are transforming recruitment within banking and financial services. Explore our AI risk insights and what this might mean for your organisation.

Background

Despite the ongoing pandemic, the digital economy has been resilient, and newer jobs with sought-after skills, especially at junior to mid-levels, have seen a resurgence. Glassdoor’s 25 Best Jobs in the UK for 2020 report identifies specific jobs with thousands of open roles. At the same time, the candidate pool vying for these jobs is at a historical high, with hundreds of applications for each role, and even more for new graduates facing the toughest labour market in 75 years.

Employee diversity drives innovation in the workplace, yet even in the best of times, hiring is an intrinsically skewed process. Unconscious biases creep in; racism, ageism, and sexism play a big role in who gets hired.

Overview

To mitigate discrimination, most employers standardise the talent assessment and calibration process. To streamline and optimise talent selection at scale, many organisations have also deployed automated assessments and matching engines that use machine learning to surface the applicants most likely to succeed. Sophisticated algorithms examine historical data, assessing past shortlists and the attributes of successful applicants, including specific keywords in CVs, schools attended, universities graduated from, languages spoken, and even hobbies and interests. These solutions also recommend roles to job applicants on the basis of the same attributes. This is done at scale using NLP-based inference and recommendation engines that consume data sources including social media profiles, uploaded cover letters, academic transcripts, psychometric analysis and, increasingly, video assessments and coding tests.
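As an illustration, the core of such a screening engine can be surprisingly simple. The sketch below, which assumes scikit-learn and a hypothetical data file and column names ("cv_text", "shortlisted"), trains a classifier on historical shortlisting decisions and ranks new applicants by predicted success:

```python
# Minimal sketch of a CV-screening model trained on historical outcomes.
# The data file and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

history = pd.read_csv("historical_applications.csv")
X_train, X_test, y_train, y_test = train_test_split(
    history["cv_text"], history["shortlisted"], test_size=0.2, random_state=0
)

# TF-IDF turns free-text CVs into keyword features; the classifier then
# learns whatever separated past "successes" from past rejections --
# including schools, hobbies and phrasing that may track protected traits.
model = make_pipeline(
    TfidfVectorizer(min_df=5, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Rank incoming applicants by predicted probability of being shortlisted.
new_cvs = ["...applicant CV text...", "...applicant CV text..."]
ranking_scores = model.predict_proba(new_cvs)[:, 1]
```

Nothing in such a pipeline distinguishes legitimate signal from institutional bias: both are simply correlations with the historical labels it is trained on.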

But, in reality, such technologies only mirror institutional biases and behaviour, because they try to predict a successful hire on the basis of empirical evidence. In fact, as illustrated in the “happy path” below, bias creeps in at every stage of automated recruitment, just as it does in traditional hiring, the only difference being that feedback loops amplify its impact.

[Figure: the automated recruitment “happy path”, showing bias entering at each stage]
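To make the feedback-loop point concrete, consider a toy simulation (all numbers hypothetical): two equally qualified applicant groups, a screening model that ranks candidates by their group’s share of past hires, and each round’s selections fed back into the data.

```python
import random

random.seed(0)

# Two equally qualified groups; the historical data carries a mild 55/45
# skew towards group A (hypothetical numbers).
hires = {"A": 11, "B": 9}
APPLICANTS_PER_ROUND, HIRES_PER_ROUND = 100, 10

for round_no in range(1, 11):
    pool = [random.choice("AB") for _ in range(APPLICANTS_PER_ROUND)]
    total_hires = hires["A"] + hires["B"]
    # Rank candidates by their group's historical hire rate. The
    # deterministic ranking makes the effect stark, but any score
    # correlated with past outcomes compounds the same way.
    pool.sort(key=lambda group: hires[group] / total_hires, reverse=True)
    for group in pool[:HIRES_PER_ROUND]:
        hires[group] += 1  # this round's hires become next round's "evidence"
    share_a = hires["A"] / (hires["A"] + hires["B"])
    print(f"round {round_no:2d}: group A share of all hires = {share_a:.0%}")
```

From the very first round, every new hire goes to group A: the model’s own selections are recycled as evidence that group A candidates succeed, and a mild historical skew hardens into near-total exclusion.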

To make matters worse, these algorithms can and do go wrong. Amazon famously scrapped its recruitment tool for precisely this reason, after the model learned to penalise CVs that included the word “women’s”.

Why is this important?

For many individuals, a job offer is a life-changing decision, and it is now increasingly being made by algorithms. Policy regulators and the judiciary are therefore more likely to draw up laws regulating automated recruitment technologies across geographies. For example, the New York City Council introduced a bill (Int. 1894-2020) that would regulate the sale of “automated employment decision tools”. Once enshrined in law, the bill will require AI technology vendors to conduct annual bias audits before deploying such tools, as well as notify job-seekers of the methodology used to evaluate them. Similarly, Illinois’ Artificial Intelligence Video Interview Act imposes strict limitations on employers who use video interviews to screen job candidates.

Closer to home, the Black Lives Matter movement and its campaigns have brought greater awareness of societal injustices, and many UK-based banks have committed to increasing BAME (Black, Asian & Minority Ethnic) representation in their workforce. In the context of algorithmic recruitment, this means first addressing the systemic challenges of supporting a diverse applicant pool.

Recommendations

Here are some best practices to address the problem:

  • Create a digital recruitment task force that is diverse and reflects the people you intend to hire, so that the humans in the loop are aware of potential bias
  • Get buy-in from your third-party technology and sourcing vendors
  • Pay close attention to job specifications and the wording of requirements, so that the machine-learning model does not draw ambiguous inferences (see the first sketch below)
  • Source text corpora from sufficiently large and diverse datasets that reflect your hiring targets
  • Ensure that models do not unwittingly use proxy variables to drop candidates who might otherwise be a good fit (see the second sketch below)
  • Benchmark diversity hiring against industry standards and compare algorithmic hiring outcomes with those of traditional recruitment processes
  • Conduct bias risk assessments and model audits at regular intervals (see the third sketch below)
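On job-specification wording, a first-pass audit can be as simple as scanning each spec against coded-language word lists. The word lists below are illustrative only, not a validated lexicon; a real audit should use a published gendered-wording lexicon agreed with HR and legal teams.

```python
import re

# Illustrative (not validated) word lists.
MASCULINE_CODED = {"competitive", "dominant", "rockstar", "ninja", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_wording(job_spec: str) -> dict:
    """Return any coded terms found in a job specification."""
    words = set(re.findall(r"[a-z]+", job_spec.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

spec = "We need a competitive rockstar developer with aggressive delivery targets."
print(flag_wording(spec))
# {'masculine_coded': ['aggressive', 'competitive', 'rockstar'], 'feminine_coded': []}
```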
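On proxy variables, one common check is to test how well the model’s input features predict a protected attribute that has been deliberately withheld from the model. The sketch below assumes scikit-learn and a hypothetical applicant table with “gender” and “shortlisted” columns; cross-validated accuracy far above the base rate means the remaining features encode a proxy.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical applicant table: "gender" is withheld from the screening
# model, but postcode, hobbies or university may stand in for it.
df = pd.read_csv("applicants.csv")
protected = df["gender"]
features = pd.get_dummies(df.drop(columns=["gender", "shortlisted"]))

# Try to predict the protected attribute from the model's own inputs.
score = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    features, protected, cv=5,
).mean()
print(f"Protected attribute recoverable from features: {score:.0%} accuracy")
```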
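Finally, for the regular audits themselves, a standard screening metric is the disparate impact ratio between group selection rates. A minimal sketch, with hypothetical outcome data:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from one screening round."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(outcomes):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical results from one algorithmic screening round.
outcomes = (
    [("group_a", 1)] * 40 + [("group_a", 0)] * 60
    + [("group_b", 1)] * 24 + [("group_b", 0)] * 76
)
print(f"Disparate impact ratio: {disparate_impact(outcomes):.2f}")  # 0.60
```

The US EEOC’s “four-fifths rule” treats a ratio below 0.8 as prima facie evidence of adverse impact; even where it carries no legal weight, it is a useful screening threshold for flagging models that need deeper investigation.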

In summary, we think that, as risk professionals, it is our responsibility to build the necessary AI risk management capabilities, especially for emerging use cases involving human outcomes. For these reasons, it is worth taking the time to get it right.
