Unconscious biases, such as first impressions and gut feelings, influence hiring practices. For example, research shows that labour markets discriminate based on people’s names: stereotypically White-sounding names received 50 per cent more callbacks for interviews than stereotypically African-American-sounding names. The research concluded that labour market outcomes are far from equal: ‘Emily and Greg are more employable than Lakisha and Jamal’.
Evidently, cognitive biases are prevalent in the recruitment process. According to psychologists, the halo effect is the most common cognitive bias in hiring decisions: the recruiter fixates on one positive aspect of a candidate when forming an overall judgement. The halo effect can mean, for example, that superficially attractive people get jobs at others’ expense. Confirmation bias, another common cognitive bias, is the tendency to interpret information in a way that confirms one’s existing beliefs. So, if a recruiter forms an initial perception of a candidate, they will look for information that supports it. Unfortunately, if that initial perception is tainted by sexist, racist or homophobic thinking, the hiring manager may simply find ways to confirm their existing bias when recruiting.
Can AI end unconscious bias in the recruitment process?
For these reasons, many believe that artificial intelligence (AI) should be introduced in the recruitment process to reduce unconscious bias.
The use of AI in the recruitment process is becoming commonplace: in the UK, 58% of recruiters rely on AI when hiring. This is because AI has the potential to reduce human bias and uncover hidden talent overlooked by traditional recruitment practices. AI can be programmed to ignore demographic information about candidates, such as race, gender, sexuality and postcode. Instead, candidates can be judged on their skills and experience alone, whereas human managers struggle to separate the person from their resume. In this respect, AI may help an organization employ a more diverse workforce.
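As a minimal sketch of what this kind of ‘blind’ screening can look like, the Python snippet below strips demographic fields from a candidate record before a simple score is computed. The field names and the scoring rule are invented for illustration; they do not describe any particular recruitment product.

```python
# Hypothetical candidate record; the field names and scoring rule are
# illustrative assumptions, not any real recruitment system's schema.
PROTECTED_FIELDS = {"name", "race", "gender", "sexuality", "postcode"}

def blind(candidate: dict) -> dict:
    """Return a copy of the record with demographic fields stripped out."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

def score(candidate: dict) -> float:
    """Toy score based only on skills and experience."""
    skills_match = len(set(candidate.get("skills", [])) & {"python", "sql"})
    return skills_match + 0.5 * candidate.get("years_experience", 0)

applicant = {
    "name": "Lakisha",
    "gender": "female",
    "postcode": "BS1",
    "skills": ["python", "sql"],
    "years_experience": 4,
}

print(score(blind(applicant)))  # 4.0 -- the demographic fields never reach the scorer
```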
However, this overlooks a crucial fact: AI learns and produces its algorithms from information supplied by humans, and human information contains biases which can result in discriminatory algorithms. Algorithms built on historical data sets of successful applicants are a prime example. Historical data sets reflect decades of discriminatory hiring practices, so a ‘historical data set of successful applicants will essentially be a male-dominated data set’.
The implications of this were seen in 2014 at Amazon, when a group of engineers in Scotland created a program to hire top talent. However, because the algorithm was trained on previous successful applications, it favoured men over women. The algorithm learned to screen out female applicants, penalising terms such as ‘women’s chess club’ and downgrading candidates from all-women’s colleges.
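To make the mechanism concrete, here is a toy, invented example (not Amazon’s actual system) of how a model trained on a male-dominated history of ‘successful’ CVs ends up attaching negative weight to gendered terms. The CV snippets, labels and weighting scheme are all illustrative assumptions.

```python
from collections import defaultdict
import math

# Invented "historical" hiring decisions: 1 = hired, 0 = rejected.
history = [
    ("captain men's rugby club, python developer", 1),
    ("python developer, chess club", 1),
    ("java developer, hackathon winner", 1),
    ("captain women's chess club, python developer", 0),
    ("women's college graduate, java developer", 0),
]

hired_counts, rejected_counts = defaultdict(int), defaultdict(int)
for text, hired in history:
    for token in set(text.replace(",", "").split()):
        (hired_counts if hired else rejected_counts)[token] += 1

def weight(token: str) -> float:
    """Smoothed log-odds of being hired, given that the token appears on a CV."""
    return math.log((hired_counts[token] + 1) / (rejected_counts[token] + 1))

print(round(weight("python"), 2))   # positive: the token appears mostly on accepted CVs
print(round(weight("women's"), 2))  # negative: the biased history penalises the word itself
```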
In 2017 Amazon abandoned the hiring tool. The failed experiment left tech experts concerned that AI would not remove human biases but automate them. Caroline Criado Perez, author of Invisible Women: Exposing Data Bias in a World Designed for Men, argues that algorithms can actively amplify biases. Self-teaching algorithms can learn to confirm and exaggerate what they already know; in other words, if an algorithm is sexist it can teach itself to become more sexist. A University of Washington study showed how an algorithm learned to associate women with kitchen images: in its training data, women were 33% more likely than men to feature in photographs taken in kitchens, and the algorithm amplified this disparity to 68%, labelling men as women simply because they were standing next to dirty dishes. Evidently, machine learning can amplify gender biases, reinforcing socio-economic divides in the workplace.
How can we stop the algorithm from having favourites?
Technology is here to stay. Companies are keen to keep using algorithmic tools to cut the time and expense of the hiring process. So, how can we mitigate algorithmic biases?
One advantage of using algorithms over human recruiters is that algorithmic bias can, in principle, be identified and corrected. But correcting algorithmic bias means changing the data sets we feed them, or actively teaching them to remove biases. A company called Pymetrics aims to achieve this. Pymetrics uses a neuroscience-based game that measures traits such as risk aversion. The company’s top performers complete the game, and an algorithm is produced to reveal the key traits of its successful workers. Recruiters can then compare candidates to the company’s top-performing employees. As a result, graduates who were lucky enough to work as unpaid interns, study abroad or draw on parental connections gain no advantage over their classmates.
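A rough sketch of that kind of comparison is shown below, assuming each game produces a numeric trait profile per person. The trait names, numbers and similarity measure are illustrative assumptions, not a description of Pymetrics’ actual method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented trait profiles (e.g. risk aversion, attention, planning) for the
# company's current top performers, averaged into a single benchmark profile.
top_performers = [
    [0.3, 0.8, 0.7],
    [0.4, 0.7, 0.9],
    [0.2, 0.9, 0.8],
]
benchmark = [sum(col) / len(top_performers) for col in zip(*top_performers)]

# A new candidate is scored only on how their game-derived traits compare
# with the benchmark, not on their CV, background or demographics.
candidate = [0.35, 0.75, 0.80]
print(round(cosine(candidate, benchmark), 3))
```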
Unfortunately, algorithms like this are the exception, not the rule. To remove biases, operators need to actively address the factors that contribute to them. The technology companies that generate the algorithmic architectures of the contemporary workplace certainly have a long way to go. In 2016, ten Silicon Valley companies were found not to employ a single black woman, three had no black employees at all, and six had no women at executive level. This severe lack of diversity means their employees are less likely to understand certain biases, let alone alleviate them.
Algorithms are in part ‘our opinions embedded in code’. Until more is done to alleviate data prejudices, we should not hold out hope that AI can help redress bias in the labour market and the workplace.
Emily Skinner is a Research Assistant on the Bristol Model project at the University of Bristol.