Bias in AI
What is bias and why is it a problem?
The process of recruitment strives to find the best candidate for an employment position. Ideally, the candidate fulfils all the criteria required to perform the job as well as possible.
In reality, however, human judgment tends to be less objective. In practice, factors that are not necessary for performing the job satisfactorily play a role in the selection process. For example, recruiters may take ethnicity, gender, or familiar educational institutions and companies into account when making such a decision, usually without being aware of it. This is called ‘unconscious bias’.
Numerous studies have confirmed that unconscious bias in HR contributes significantly to an unfair distribution of opportunities and reduced diversity in the labour market.
Bias in AI
Just like a recruiter, Artificial Intelligence (AI) recruitment software can have biases.
To understand how that happens, we need to understand at a very high level how AI systems work. Don't worry, we'll also explain how Textkernel prevents biases from appearing in its AI systems, and how our AI systems can increase fairness and diversity in your recruitment process.
AI systems are learning systems. They are 'taught' to make a prediction based on training data.
For example, an AI system can learn to predict which candidates might be suitable for a job, based on previous hiring decisions made by recruiters. The system may learn that if a candidate mentions experience with programming languages, that person may be suitable for a software engineer job. After all, in the past successful candidates for software engineer jobs will have had experience with programming languages.
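To make this concrete, here is a minimal, purely illustrative sketch (not Textkernel code; the feature names and data are invented) of a model trained on past hiring decisions. It learns whatever patterns those decisions contain, which is exactly why biased historical decisions can lead to a biased model.

```python
# Hypothetical sketch: a simple model learns from past hiring decisions.
# If those past decisions were biased, the model will learn that bias too.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Historical decisions: each row is a candidate profile plus the
# recruiter's past decision (1 = hired, 0 = rejected). Invented data.
past_candidates = [
    ({"knows_python": 1, "years_experience": 5}, 1),
    ({"knows_python": 1, "years_experience": 2}, 1),
    ({"knows_python": 0, "years_experience": 6}, 0),
    ({"knows_python": 0, "years_experience": 1}, 0),
]

features, labels = zip(*past_candidates)
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(features)

# The model learns which features correlated with past hiring decisions.
model = LogisticRegression().fit(X, labels)

# Predicted suitability for a new candidate, based purely on those patterns.
new_candidate = vectorizer.transform([{"knows_python": 1, "years_experience": 3}])
print(model.predict_proba(new_candidate)[0][1])
```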
Reducing human unconscious bias
Having fully controllable and transparent matching has another benefit: by matching on objective criteria, we can actually mitigate any unconscious bias a recruiter may have. This improves equal opportunities and diversity in your HR processes.
Current research suggests that if used carefully, AI can help avoid discrimination, and even raise the bar for human decision-making.
Of course, when searching with Textkernel Search!, the user is unable to search on any discriminatory attributes, such as gender or religion.
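As a purely hypothetical illustration of this idea (this is not Textkernel Search!'s actual API; the field names are invented), a search layer can rule out discriminatory queries by design, by accepting only an allow-list of job-relevant fields:

```python
# Hypothetical sketch: only job-relevant fields may be used as search criteria;
# protected attributes are rejected outright.
ALLOWED_SEARCH_FIELDS = {"skills", "job_title", "years_experience", "languages", "location"}
PROTECTED_FIELDS = {"gender", "religion", "ethnicity", "age"}

def build_query(criteria: dict) -> dict:
    """Keep only job-relevant criteria; refuse protected attributes."""
    for field in criteria:
        if field in PROTECTED_FIELDS:
            raise ValueError(f"Searching on '{field}' is not permitted")
        if field not in ALLOWED_SEARCH_FIELDS:
            raise ValueError(f"Unknown search field '{field}'")
    return dict(criteria)

# A legitimate query passes; a discriminatory one raises an error.
print(build_query({"skills": ["python"], "location": "Amsterdam"}))
# build_query({"gender": "male"})  # -> ValueError
```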
Textkernel’s AI Principles
Our approach to Responsible AI is informed by our core AI principles.
AI Driven By Humans
Humans remain in control of our solutions and understand how they work to achieve the desired results. Our AI will not make any decisions for you; it supports you by taking over time-consuming processes to increase your efficiency. Our products are designed so that users can always evaluate and override the suggestions provided by the technology and remain the final decision makers.
Transparency of results and white box AI approach
We strive to ensure that our AI and the way our solutions work are always explainable. For more complex tasks, such as matching candidates to jobs, we advocate strong explainability and transparency. Our solution can indicate exactly which criteria are used to construct the match, in a way the end user can interpret, understand, and influence.
Matching based on objective, measurable criteria reduces, and can even eliminate, bias in your recruitment process. Our solution disregards candidate properties that are irrelevant to successful performance of the job (e.g. gender, ethnicity, or age), promoting diversity and inclusion within your organization.
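The sketch below is a simplified, hypothetical illustration of the white-box idea (the criteria, weights, and field names are invented, not our actual matching logic): the overall score is assembled from explicit, job-relevant criteria, so the user can see exactly how each criterion contributed, and protected attributes are never part of the inputs.

```python
# Hypothetical sketch of white-box matching: an inspectable per-criterion
# breakdown instead of an opaque overall score. Weights are adjustable.
CRITERIA_WEIGHTS = {"required_skills": 0.5, "years_experience": 0.3, "languages": 0.2}

def explain_match(candidate: dict, vacancy: dict) -> dict:
    """Score a candidate against a vacancy and explain each criterion."""
    breakdown = {}
    skills_overlap = set(candidate["skills"]) & set(vacancy["required_skills"])
    breakdown["required_skills"] = len(skills_overlap) / max(len(vacancy["required_skills"]), 1)
    breakdown["years_experience"] = min(candidate["years_experience"] / vacancy["min_years"], 1.0)
    langs_overlap = set(candidate["languages"]) & set(vacancy["languages"])
    breakdown["languages"] = len(langs_overlap) / max(len(vacancy["languages"]), 1)
    total = sum(CRITERIA_WEIGHTS[c] * score for c, score in breakdown.items())
    # Attributes such as gender, ethnicity, or age are never part of the inputs.
    return {"total": round(total, 2), "per_criterion": breakdown}

candidate = {"skills": ["python", "sql"], "years_experience": 4, "languages": ["en", "nl"]}
vacancy = {"required_skills": ["python", "sql", "docker"], "min_years": 3, "languages": ["en"]}
print(explain_match(candidate, vacancy))
```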
Robust data protection and security
To ensure the trustworthiness of our AI, we have robust security measures and mechanisms in place to protect (personal) data against potential attacks throughout all phases of the AI lifecycle.