Building AI tools that are less dependent on human behaviour and instead focused on objective and factual outcomes can minimise biases
Many companies are running high-visibility campaigns to recruit more women in line with their gender diversity policies. TCS, for instance, has recently announced the 'Rebegin' initiative to attract women with work experience to rejoin after a career break. Similarly, several tech companies such as Dell, Microsoft, Accenture and IBM have been wooing experienced women back to work. At the same time, there is a concern that when women apply for jobs, unconscious bias blocks their entry into the workforce. The traditional recruitment process, from job descriptions through screening, interviewing and selection, is often coloured by entrenched mindsets, especially against women.
Recognising the need to overcome gender bias in recruitment, firms such as Mozilla and the BBC have started using blind hiring, in which candidates need not specify their gender or name when applying. An industry survey has found that companies are increasingly using automated tools at almost every stage of the hiring lifecycle. Estimates indicate that almost 55% of HR leaders in the US use such tools. Recruitment has therefore diversified to include technology-based assessments that gauge candidates' capabilities through test outcomes. Lately, some firms have started using AI and algorithms to eliminate biases and bring objectivity into decision-making. AI-supported language tools are enabling managers to word job descriptions more carefully and weed out gender-coded words.
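To illustrate the idea, here is a minimal sketch of such a screening tool in Python. The word lists are short, hypothetical examples, not the vocabulary any particular product uses; commercial tools draw on far larger, research-backed lexicons.

```python
# A minimal sketch of a gender-coded-language screener for job descriptions.
# The word lists below are short, hypothetical examples only.
import re

MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "fearless"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "empathetic"}

def flag_gendered_words(job_description: str) -> dict:
    """Return the gender-coded words found in a job description."""
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

if __name__ == "__main__":
    ad = "We want an aggressive, fearless ninja to join our collaborative team."
    print(flag_gendered_words(ad))
    # {'masculine': ['aggressive', 'fearless', 'ninja'], 'feminine': ['collaborative']}
```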
Yet there are also indications that AI tools may end up reinforcing biases, leading to worse hiring outcomes for women. The classic example is Amazon, which had to scrap its automated recruitment programme when it was discovered to have an in-built bias against women. The tool was supposed to rate candidates on a scale of one to five stars, highlighting the top candidates for hiring. It rated and ranked candidates using an algorithm trained on the hiring patterns of the previous 10 years, during which mostly male candidates were hired. As a result, the programme failed to recognise women's talent, and female candidates were overlooked.
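The mechanism is easy to reproduce in miniature. The toy model below (not Amazon's actual system) is trained on hypothetical historical decisions that mostly favoured men; it duly learns to score a female candidate lower than an identically experienced male one.

```python
# A toy illustration of how a model trained on historically biased hiring
# decisions reproduces that bias. The data are entirely hypothetical.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, is_male]
X = [
    [2, 1], [4, 1], [6, 1], [8, 1],   # male applicants
    [2, 0], [4, 0], [6, 0], [8, 0],   # female applicants
]
# Historical outcome: men were usually hired, women usually were not.
y = [1, 1, 1, 1, 0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Two candidates identical except for gender.
male_candidate, female_candidate = [5, 1], [5, 0]
print(model.predict_proba([male_candidate])[0][1])    # high hiring score
print(model.predict_proba([female_candidate])[0][1])  # markedly lower score
```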
It is possible to address such issues and build AI tools that are less dependent on human behaviour and focused on objective and factual outcomes. In his HBR article, Tomas Chamorro-Premuzic suggests three ways to get the most out of AI tools that would minimise or eliminate gender discrimination. First, automating unstructured interviews and eliminating human ratings would reduce bias and encourage meritocracy. Next, he says, AI tools can be trained to ignore attributes such as gender and focus solely on specific competencies (sketched in code below).
Third, these tools could be trained to identify the actual drivers of performance and assess the human potential that is valuable to the business.
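A minimal sketch of the second suggestion, under hypothetical column names: drop gender, and obvious proxies for it such as first name, from the feature set before fitting, so the model scores candidates on competency measures alone.

```python
# A minimal sketch of training only on competency features. All data and
# column names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "first_name":   ["Asha", "Rahul", "Meera", "Vikram"],
    "gender":       ["F", "M", "F", "M"],
    "coding_score": [88, 75, 92, 70],
    "domain_score": [81, 78, 90, 65],
    "hired":        [1, 0, 1, 0],
})

# Drop the protected attribute and an obvious proxy before fitting.
PROTECTED_OR_PROXY = ["first_name", "gender"]
features = applicants.drop(columns=PROTECTED_OR_PROXY + ["hired"])
model = LogisticRegression().fit(features, applicants["hired"])

# Scoring a new applicant now depends only on competency measures.
new_applicant = pd.DataFrame({"coding_score": [85], "domain_score": [80]})
print(model.predict_proba(new_applicant)[0][1])
```

In practice, remaining features can still act as statistical proxies for gender, which is one reason the audits discussed next are still needed.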
Even with the best attempts to make algorithms bias-free and enable fairer recruitment processes, biases can still creep into the algorithms. Algorithmic audits are therefore being proposed to ensure that coding standards are followed and biases are minimised.
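One common audit check can be sketched in a few lines: compare selection rates across groups and apply the 'four-fifths rule' used in US employment practice, which flags potential adverse impact when one group's selection rate falls below 80% of the most-favoured group's. The figures below are hypothetical.

```python
# A minimal sketch of one algorithmic-audit check: comparing selection rates
# between groups under the four-fifths rule. Numbers are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

men_rate = selection_rate(selected=60, applicants=100)    # 0.60
women_rate = selection_rate(selected=30, applicants=100)  # 0.30

impact_ratio = women_rate / men_rate                      # 0.50
print(f"Disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Audit flag: selection rates suggest adverse impact against women.")
```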
Originally appeared in Financial Express