Responsible use of AI in the Textkernel solution


What is bias and why is it a problem?

Recruitment strives to find the perfect candidate for an employment position. Ideally, the candidate fulfills all the criteria required to perform the job well.

In reality, however, human judgment tends to be less objective. In practice, factors that are not necessary to perform the job satisfactorily play a role in the selection process: recruiters may take ethnicity, gender, or familiar educational institutions and companies into account when making a decision, often without realizing it. This is called ‘unconscious bias’.

Numerous studies have confirmed that unconscious bias in HR is a significant cause of unfair distribution of opportunities and reduced diversity in the labour market.

Bias in AI

  1. Just like a recruiter, Artificial Intelligence (AI) recruitment software can have biases.

    To understand how that happens, we need to understand at a very high level how AI systems work. Don't worry, we'll also explain how Textkernel prevents biases from appearing in its AI systems, and how our AI systems can increase fairness and diversity in your recruitment process.

  2. AI systems are learning systems. They are 'taught' to make a prediction based on training data.

    For example, an AI system can learn to predict which candidates might be suitable for a job, based on previous hiring decisions made by recruiters. The system may learn that if a candidate mentions experience with programming languages, that person may be suitable for a software engineer job. After all, past successful candidates for software engineer jobs will have had experience with programming languages. The sketch after this list illustrates the idea in miniature.
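To make this learning step concrete, here is a deliberately simplified Python sketch. All data and names are invented, and real systems use far richer models, but the principle is the same: patterns in past decisions become the basis for future predictions.

```python
# Minimal sketch of "learning from past hiring decisions".
# All data here is hypothetical and deliberately tiny.

from collections import Counter

# Past decisions: (keywords found in the CV, was the candidate hired?)
training_data = [
    ({"java", "python", "sql"}, True),
    ({"java", "scrum"}, True),
    ({"marketing", "sales"}, False),
    ({"python", "django"}, True),
    ({"accounting"}, False),
]

# "Training": count how often each keyword co-occurs with a hire.
hire_counts, total_counts = Counter(), Counter()
for keywords, hired in training_data:
    for kw in keywords:
        total_counts[kw] += 1
        if hired:
            hire_counts[kw] += 1

def suitability(cv_keywords):
    """Average hire rate of the keywords seen in training."""
    rates = [hire_counts[kw] / total_counts[kw]
             for kw in cv_keywords if kw in total_counts]
    return sum(rates) / len(rates) if rates else 0.0

print(suitability({"java", "python"}))  # high: programming languages led to hires
print(suitability({"sales"}))           # low: never led to a hire in this data
```

This is also exactly where bias can creep in: if the past hiring decisions were biased, the learned notion of suitability faithfully reproduces that bias.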

Responsible use of AI in real world applications

Now that we’ve looked at how AI can be harmful when used carelessly, it’s time to look at how to use AI in a safe and ethical manner. This is called Responsible AI. In fact, when used responsibly AI can help reduce bias, instead of amplifying it.

Designing systems around user and AI bias

  1. There are numerous examples of the successful use of AI algorithms in a wide variety of applications where bias is unlikely to occur.

    Take spam filtering, for example. This application is not very sensitive to introducing or perpetuating bias, because it operates in a problem space that doesn't rely on factors that may be related to sensitive data. Deciding whether an email is spam is completely independent of the ethnicity, gender and religion of the user.

  2. Unfortunately, in the last few years, there have also been numerous examples in the news of AI systems making biased decisions.

    For example, a bank deciding whether to grant someone credit, or a government deciding whether someone poses a risk of social welfare fraud. But we’ve also seen famous examples of AI-based candidate-job matching gone wrong. The common problem in all these cases? Letting AI mimic previous human decisions for problems that are very sensitive to bias and directly affect the lives of real people. The kinds of decisions in these examples require, on top of the “hard” data, common sense, intuition and empathy, none of which AI has in the current state of the technology.

Mitigating bias in AI

  1. Whenever AI is the right tool for the job, it is crucial to have the right checks and balances in place.

    This ensures that any bias arising from the AI solution is minimized. Recently, many tech giants (e.g. Google and IBM) have formalized processes to minimize the bias their machine learning algorithms produce. Following in the steps of these companies, Textkernel has formalized such processes in a Fairness Checklist. The checklist makes us aware of potential biases that may arise, so we can decide whether measures are needed to mitigate them and ensure that we develop safe and unbiased software.

  2. Let’s look at some examples of these measures in the context of profession normalization: the process of ‘normalizing’ a free-form job title to a standardized concept.

    For example, there are many ways to write “Java Developer”, ranging from “J2EE Full Stack Engineer” to “Java Ninja” and everything in between. This is in fact a lower-risk problem for AI, as the result of job title normalization is not influenced by a person’s ethnicity or religion but only by the free-form job title itself. A minimal sketch of such a normalization step follows after this list.
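As a toy illustration, the function below maps free-form titles to concepts by token overlap against a tiny hand-made taxonomy. Textkernel's production normalization models are far more sophisticated; all names here are hypothetical.

```python
# Toy profession normalization: map a free-form job title to a
# taxonomy concept. The taxonomy below is a hypothetical stand-in.

CONCEPTS = {
    "java developer": {"java", "developer", "j2ee", "engineer", "stack"},
    "waiter": {"waiter", "waitress", "server"},
}

def normalize_job_title(raw_title: str) -> str | None:
    """Return the concept whose vocabulary best overlaps the title's tokens."""
    tokens = set(raw_title.lower().replace("-", " ").split())
    best, best_score = None, 0
    for concept, vocabulary in CONCEPTS.items():
        score = len(tokens & vocabulary)
        if score > best_score:
            best, best_score = concept, score
    return best

print(normalize_job_title("J2EE Full Stack Engineer"))  # "java developer"
print(normalize_job_title("Waitress"))                  # "waiter"
```

Note that nothing about the person other than the job title string enters the computation, which is what makes this a lower-risk task.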

The Textkernel solution

Document understanding

The first step of any automated recruitment process is to understand the data. Our Extract! product is a perfect example of this. Understanding a document means being able to extract the relevant information from it and enrich that information with domain-specific knowledge. For example, when we parse a CV, the system reads what work experience the candidate has, but also which skills and degrees they possess, and so on (i.e. extraction). On top of that, it can standardize the job titles and skills to existing taxonomies (i.e. normalization), derive the work field the candidate is active in, or infer likely skills for that candidate, even though these are not explicitly mentioned in the document (i.e. enrichment).
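To visualize the difference between the three steps, here is a hypothetical sketch of what document understanding might produce. The field names are illustrative only and do not reflect Extract!'s real output schema.

```python
# Hypothetical structure for a parsed CV; field names are invented.

from dataclasses import dataclass, field

@dataclass
class ParsedCV:
    # Extraction: information read directly from the document
    job_titles: list[str] = field(default_factory=list)
    skills: list[str] = field(default_factory=list)
    degrees: list[str] = field(default_factory=list)
    # Normalization: the same information mapped to taxonomy concepts
    normalized_job_titles: list[str] = field(default_factory=list)
    normalized_skills: list[str] = field(default_factory=list)
    # Enrichment: information inferred, not stated in the document
    work_field: str | None = None
    inferred_skills: list[str] = field(default_factory=list)

cv = ParsedCV(
    job_titles=["J2EE Full Stack Engineer"],
    skills=["Java", "Spring"],
    normalized_job_titles=["java developer"],
    normalized_skills=["java", "spring framework"],
    work_field="software engineering",
    inferred_skills=["sql"],  # likely, given the profile
)
```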

We can apply the same process of extraction and enrichment to a job posting, to give us the structured information for the job. In the case of job postings, this entails things like the required experience level, skills, and degrees.

Searching and matching

This extracted and enriched knowledge is a very powerful tool for searching and matching. For example, understanding a document allows us to search only on professional skills instead of keyword matching on the entire document, or to search on normalized job titles so we can find a candidate no matter how they expressed their job title. This leads to a more accurate search. Another example is that we can search on inferred information (e.g. the experience level of a candidate, even though that experience level was not explicitly mentioned in their profile). Enrichment is useful not only for documents but also for search queries: for example, we can add synonyms or related terms to the query. The sketch below contrasts plain keyword matching with field-scoped search on extracted data.
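As a toy illustration of why structured search is more accurate, the following sketch contrasts naive keyword matching with field-scoped matching on extracted skills. All data and field names are hypothetical.

```python
# Keyword search over raw text vs. search over extracted fields.

candidates = [
    {"name": "A", "skills": ["java", "sql"],
     "full_text": "senior developer; wrote Java services"},
    {"name": "B", "skills": ["barista training"],
     "full_text": "passionate about Java coffee beans"},
]

def keyword_search(term):
    """Naive matching over the raw text: both candidates match 'java'."""
    return [c["name"] for c in candidates if term in c["full_text"].lower()]

def field_search(field_name, term):
    """Structured matching on an extracted field: only candidate A matches."""
    return [c["name"] for c in candidates if term in c[field_name]]

print(keyword_search("java"))          # ['A', 'B'], with B a false positive
print(field_search("skills", "java"))  # ['A'], the accurate result
```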

Knowing all the qualifications of candidates and all the requirements of job postings allows us to automate one more step: matching. To achieve this, we automatically generate a search query from an input document. Say we want to match all suitable candidates for a given job: the search query will contain all required and desired criteria for that vacancy, and each criterion will have its own appropriate weight to optimize the quality of the query's result set.
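To make this concrete, here is a toy sketch of generating a weighted query from a parsed vacancy and scoring a candidate against it. The weights and field names are invented for illustration and are not Textkernel's actual ranking model.

```python
# Turn a parsed vacancy into a weighted query, then score a candidate.

parsed_vacancy = {
    "normalized_title": "java developer",
    "required_skills": ["java", "sql"],
    "desired_skills": ["spring framework"],
    "experience_level": "senior",
}

def build_match_query(vacancy):
    """Each criterion gets a weight: required terms count more than desired."""
    query = [("normalized_titles", vacancy["normalized_title"], 3.0),
             ("experience_level", vacancy["experience_level"], 2.0)]
    query += [("skills", s, 2.0) for s in vacancy["required_skills"]]
    query += [("skills", s, 1.0) for s in vacancy["desired_skills"]]
    return query

def score(candidate, query):
    """A candidate's score is the sum of weights of the criteria they meet."""
    return sum(w for field_name, term, w in query
               if term in candidate.get(field_name, []))

candidate = {"normalized_titles": ["java developer"],
             "skills": ["java", "spring framework"],
             "experience_level": ["senior"]}
print(score(candidate, build_match_query(parsed_vacancy)))  # 3+2+2+1 = 8.0
```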

Responsible use of AI in the Textkernel solution

Why does all this matter? Well, most importantly: AI doesn’t do the matching for you. The matching is done in a term-based search engine. We employ powerful AI algorithms only for document understanding (to extract information and enrich documents and queries) but leave the matching to more transparent and controllable algorithms. This way we give the recruiter full control over the matching, while benefiting from our world-leading, AI-driven parsing capabilities.

However, even when employing transparent and controllable algorithms, bias may arise through properties of the language. For example, a simple term-based search on “waiter” will favor male candidates for that job, since the job title itself is gendered. Enrichment of search and match queries helps reduce this type of bias: when recruiting for a waiter job, the query is automatically enriched with the “waitress” job title to remove the gender bias inherent in the term. A similar bias reduction can be achieved by normalizing job titles (as discussed before) and skills and using them in queries: this ensures that no matter how a candidate expresses a skill or previous experience, the concept will still be matched. A sketch of this kind of query enrichment follows below.
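Here is a minimal sketch of that kind of enrichment, assuming a small hand-made table of gendered variants; a production system would draw on a much larger synonym resource.

```python
# Expand a gendered job title to all of its variants, so the wording
# of the query cannot skew results. The table is hypothetical.

GENDER_VARIANTS = {
    "waiter": {"waiter", "waitress"},
    "steward": {"steward", "stewardess"},
}

def degender(term: str) -> set[str]:
    """Return every gendered variant of a job title, or the term itself."""
    for variants in GENDER_VARIANTS.values():
        if term in variants:
            return variants
    return {term}

print(degender("waiter"))    # {'waiter', 'waitress'}: both genders matched
print(degender("waitress"))  # the same set, whichever variant the query used
```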

To control any bias that could potentially arise in the AI-powered document understanding steps of the process, we enforce our R&D Fairness Checklist.

Reducing human unconscious bias

Having fully controllable and transparent matching has another benefit: by matching on objective criteria, we may actually mitigate any unconscious bias that a recruiter may have. This will improve equal opportunities and diversity in your HR processes. 

Current research suggests that if used carefully, AI can help avoid discrimination, and even raise the bar for human decision-making.

And of course, when searching with Textkernel Search!, the user is unable to search on discriminatory attributes such as gender or religion in the first place.
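One way to picture this guarantee is an allowlist of searchable fields, as in the hypothetical sketch below; the field names are invented and do not reflect Search!'s actual schema.

```python
# Only non-sensitive fields can be targeted by a query; anything
# else is rejected outright. Field names are hypothetical.

SEARCHABLE_FIELDS = {"skills", "normalized_titles", "experience_level"}

def validate_query(query):
    """Reject any query clause that targets a non-allowlisted field."""
    for field_name, _term in query:
        if field_name not in SEARCHABLE_FIELDS:
            raise ValueError(f"Field '{field_name}' is not searchable")
    return query

validate_query([("skills", "java")])    # fine
# validate_query([("gender", "male")])  # would raise ValueError
```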

Textkernel’s AI Principles

Our approach to Responsible AI is informed by our core AI principles.

  1. AI Driven By Humans

    Humans are in control of our solutions and understand what the solutions are doing to achieve their results. Our AI will not make any decisions for you, but will support you by taking over time-consuming processes to increase your efficiency. Our products are designed so that our users can always evaluate and override suggestions provided by the technology and remain the final decision makers.

  2. Transparency of results and a white-box AI approach

    We strive to ensure that our AI and the way our solutions work are always explainable. For more complex tasks, like matching candidates to jobs, we advocate strong explainability and transparency. Our solution can indicate exactly which criteria are used to construct the match, in a way that the end user can interpret, understand, and influence.

  3. Diversity-Forward

    Matching based on objective, measurable criteria will reduce or even eliminate bias from your recruitment process. Our solution will disregard any candidate properties that are irrelevant to successful execution of the job (e.g. gender, ethnicity, age), promoting diversity and inclusion within your organization.

  4. Robust data protection and security

    To ensure the trustworthiness of our AI, we have in place robust security measures and mechanisms to protect (personal) data against potential attacks throughout all phases of the AI lifecycle.

On-demand webinar

How to use AI responsibly in recruitment

AI is a powerful tool for recruiters. However, just like recruiters and people in general, AI systems can have biases unless the right measures are put in place. In this webinar, you will learn:

– What is bias and why is it a problem
– How do biases emerge in AI systems
– How to apply AI in a responsible manner
– How Textkernel implements AI responsibly

Textkernel AI solution
See Textkernel in action

Schedule a Demo

We’d be pleased to share more details about our technology, how we work with our diverse customer base to deliver great talent acquisition and management solutions, and of course, pricing.