
The UK’s recruitment arena is undergoing rapid transformation. A widening skills gap, fierce competition for top-tier talent, and evolving workforce dynamics are reshaping the talent acquisition landscape. With the complexities of recruitment in the UK, merely having insights isn’t enough. The real game-changer lies in your capability to transform data and insights into actionable strategies.
Recently Grant Telfer, our Sales Director for UK&I, was invited by Matt Alder as a guest on The Recruiting Future Podcast. He shared valuable insights into data-driven recruitment decisions in today’s talent landscape. By harnessing data insights, UK businesses can fine-tune their recruitment strategies and make well-informed decisions to secure the right talent match.
Let’s take a closer look at the topic of data-driven recruitment and what it means for organisations trying to find and hire talent in the UK.
The Current State of Recruitment in the UK
The UK’s recruitment landscape is more than a reflection of global challenges; it’s a distinct ecosystem shaped by its own set of unique challenges and nuances. Some of the pressing issues impacting the UK include:
- Economic Shifts: Factors such as the global economic downturn, fluctuations in international trade dynamics, regional policy changes, and the financial implications of Brexit and public health crises like the COVID-19 pandemic have all had a profound impact. These shifts have influenced not only business operations but also the availability, demand, and mobility of talent across sectors.
- Technological Innovations: The rapid pace of technological advancements means that the demand for new skills is continuously evolving. Roles in AI, data science, and green technologies, for example, are burgeoning, creating a race for specialised talent.
- Changing Job Roles: The modern UK workplace is transitioning. With factors like increased remote working, the gig economy, and the rise of digital platforms, the definition of ‘work’ and ‘roles’ is evolving.
“Hiring is and remains business critical. It has to be right up front and centre in terms of organisations and their goals. And people are saying it’s difficult…” Grant explains, and according to recent research by Josh Bersin, more than 80% of the companies they’ve talked with worry about not being able to hire the right people.
The Right Data Helps Make Better Recruitment Decisions
After speaking to leaders in recruitment, Grant noticed that they are constantly asking ”Have we got the right data? Have we got enough data?…but people have mountains of data and they’re finding it really hard to consume.”
Data, and extracting the right insights from the data, emerges as a pivotal tool in navigating these challenges, offering clarity on market trends and guiding proactive strategising.
Here’s why:
- Informed Decisions: Data provides real-time insights into the talent market, helping recruiters understand where potential candidates are, their preferences, and the best channels to reach them.
- Predictive Analysis: Data allows for forward-thinking. Instead of merely reacting to vacancies, recruiters can anticipate skill gaps, enabling them to strategise proactively.
- Efficiency: Access to the right data and knowing what to do with it can streamline recruitment processes. By understanding what works and what doesn’t, companies can refine their strategies, making the recruitment process smoother and more efficient.
- Competitive Advantage: In a tight talent market, data-driven insights give companies the edge. They can identify emerging trends, target their recruitment marketing more effectively, and ultimately secure top talent before their competitors.
“It’s very clear that that data really sits at the centre. It’s perhaps also clear, though, that despite all of these sources of data, employers are struggling to make data-based decisions,” Matt Alder explains.
Smarter Data-Based Decision Making: Harnessing AI & Data
In the UK recruiting landscape, two pivotal factors are on the rise: Artificial Intelligence (AI) and data. While their potential is widely understood, the main challenge lies in effectively leveraging and applying them.
AI in Recruitment: AI isn’t a distant concept; it’s very much shaping today’s recruitment strategies in the UK. Its strength isn’t only in automating processes but in enhancing precision and more accurate matching capabilities. AI-powered tools can offer insights beyond human capacity, ensuring a more accurate match between candidates and roles. Moreover, AI promises an unbiased recruitment process, minimising inherent biases that may skew decisions.
54% of employers believe AI technologies will have a positive impact, according to a recent survey done by Experis and ManpowerGroup.
Using Data Effectively: The right data, when used in conjunction with AI, holds significant value. Having large volumes of data is not the end goal, but rather, it’s about understanding and applying this data effectively. This involves more than just gathering insights; it’s about using those insights to shape your recruitment strategies. This allows you to understand the market, forecast future job needs, grasp what candidates want, and adapt your recruitment strategy to meet these requirements.
Together, AI and data provide a blueprint for recruitment success. For UK organisations, this means a more streamlined hiring process, reduced recruitment costs, and securing talent that aligns with organisational objectives.
Preparing for a Data-Driven Future in the UK
The future of recruitment in the UK is deeply anchored in data. To navigate this, the first step is foundational: understand and know your data. Grant Telfer emphasises the significance of knowing not just where the data is but also understanding the specific data you need to harness.
Grant further elaborates on the transformative potential of data in recruitment:
“Use [data] to analyse what’s going on in the market. Make placements more quickly and then automate the process around it.”
Automation, however, shouldn’t just be for the sake of technological advancement. The goal is twofold: to streamline the recruiter’s workflow, enhancing their day-to-day experience, and to simplify the process for candidates. By making things easier for both recruiters and candidates, businesses can ensure a more efficient and user-friendly recruitment process.
By prioritising data-driven insights and ensuring their teams have the tools and training to leverage them, UK businesses are poised to overcome current recruitment challenges and thrive in the future landscape.
Dive Deeper into Data-Driven Recruitment
Inspired by the insights shared in this article? Take the next step in your data-driven recruitment journey. Listen to the full episode with Grant Telfer on The Recruiting Future Podcast to uncover more strategies and expert opinions.
Looking to harness the full potential of your data to make better recruitment decisions? We’re here to help. Our AI-powered recruitment solutions are designed to guide organisations through the complexities of modern hiring.

The Fear of Job Displacement: History Repeating Itself?
Before we plunge into the depths of AI-powered recruitment technology, let’s address the elephant in the room – the age-old fear of technology, and in 2023 that is AI, taking over human jobs. Throughout history, innovations like William Lee’s carpet-making machine have raised concerns about job displacement. However, just as the automobile provided more jobs than it eliminated, AI technology in recruitment has the potential to empower, not replace, recruiters.


Our previous post in this blog series envisions that LLMs will have a major impact on recruitment technology, including parsing and matching software. But effectively adopting LLMs in production software is not a straightforward job. Various technical, functional and legal hurdles need to be overcome. In this blog post, we discuss the inherent limitations and risks that come with using LLMs in recruitment and HR technology.
Limitation 1: Speed and cost
LLMs are computationally very expensive: processing a single page of text requires computations across billions of parameters, which can result in high response times, especially for longer input documents. Performing complex information extraction from a multi-page document (like CV parsing) can take up to tens of seconds. For certain uses, these latencies can be acceptable. But less so for any task that requires bulk processing of large volumes of documents.
Apart from response time, computational complexity comes with a financial cost. LLMs generally require many dedicated GPUs and much more processing power than standard deep learning models. The amount of electricity used to process a single document is estimated to be substantial. Although costs have already dropped significantly in recent months, using heavy, general purpose machines like LLMs for very specific (HR) tasks is not likely to ever be the most cost-effective option.
Consequences for recruitment software
When dealing with small volumes of resumes or vacancies, speed and cost don’t need to be limiting factors. But many organizations deal with thousands or even millions of documents in their databases. High processing latencies could translate into weeks of waiting time for a large database. It stands to reason that organizations with high document volumes require fast and affordable parsing and matching solutions.
An important note about this limitation is that it’s likely to decline over time. There is a lot of research in the AI community toward reducing the size of the LLMs, making them more specialized and reducing costs. Given the nature of the beast, LLMs will never be feather-light, but it’s likely that speed and cost will be brought down to acceptable levels over the coming years.
Limitation 2: Hallucinations
LLMs have one main objective: to produce language that will be perceived as ‘natural’ by humans. They are not designed to produce truthful information. As a result, a common complaint about LLMs (including ChatGPT) is that they tend to ‘hallucinate’: they can produce high quality text which contains factually incorrect information. The LLM itself will present these hallucinations with full conviction. Wikipedia states the following example:
Asked for proof that dinosaurs built a civilization, ChatGPT claimed there were fossil remains of dinosaur tools and stated “Some species of dinosaurs even developed primitive forms of art, such as engravings on stones”.
Not all hallucinations are as innocent as this. There are reports of ChatGPT supplying false information about sensitive topics like the safety of COVID-19 vaccinations or the validity of the US elections in 2020.
Consequences for recruitment software
In the context of CV parsing, hallucination could mean that the output contains information that was not present in the original document. We’ve seen quite a few examples of this in our own experimentation: mentions of work experiences or educational degrees appear in the output while not being mentioned anywhere in the submitted CV. This could obviously lead to confusion among users and, if gone unnoticed, yield rather surprising job recommendations.
How hard is it to solve this problem? One obvious approach is to simply check that the output terms appear in the input document and discard it if that’s not the case. However, there’s a risk of throwing out the baby with the bathwater: in some cases LLMs correctly infer information, and the ‘unmentioned’ parts of the output can be correct. For instance, the company someone worked at could be correctly inferred based on the graduate program mentioned in a CV (while the company itself is not mentioned). These inferences can actually add value on top of traditional CV parsers. The challenge is to figure out which of the inferences made by the LLM are safe to keep.
Limitation 3: Lack of transparency
A major limitation of LLMs is that they are a complete black box. There is no visibility on why the output looks the way it does. Even the developers of ChatGPT and similar systems cannot explain why their products behave the way they do. This lack of explainability can be worrisome: if it is impossible to explain the output of an LLM-based tool, how do we know it is doing what is expected, and if it is fair and unbiased?
Consequences for recruitment software
In CV or job parsing technology, a lack of transparency can to some extent be acceptable: it is not critical to know why one word was interpreted as part of a job title, and another word as denoting an education level. In matching technology, that’s very different. If a list of candidates gets ranked by an AI algorithm, being able to explain on which basis the ranking took place is paramount to a fair matching procedure. Transparency helps motivate the choice of the shortlisted candidates, and makes it possible to ensure that no factors contributed to the ranking that shouldn’t (gender, ethnicity, etc., more details in the next section).
In addition, transparency and traceability are obligations in various forms of upcoming AI legislation, such as the EU AI Act and the soon to be enforced NYC AEDT. Those demand that matching software should be able to transparently disclose the criteria that played a role in the ranking of candidates.
Limitation 4: Potential bias
Because LLMs were trained on vast amounts of texts from the internet, they are expected to have societal and geographical biases encoded in them. Even though there have been efforts to make systems like GPT as ‘diplomatic’ as possible, LLM-driven chatbots have reportedly expressed negative sentiment on specific genders, ethnicities and political beliefs. The geographical source of the training data also seems to have tainted its perspective on the world: since richer countries tend to publish more digitized content on the internet than poorer countries, the training data doesn’t reflect every culture to the same extent. For instance, when asked to name the best philosophers or breakfast dishes in the world, ChatGPT’s answers tend to reveal a Western vantage point.
Consequences for recruitment software
Bias is a big problem in the HR domain. For good reasons, selecting candidates based on characteristics that are not relevant to job performance (for example, gender or ethnicity) is illegal in most countries. This warrants great caution with the use of LLM models in recruitment software, so that their inherent biases are not propagated into our hiring decisions. It is therefore ever so important to use AI in a responsible manner. For example, asking an LLM directly for the best match for a given job post is out of the question. It would likely favor male candidates for management positions, and female positions for teaching or nursing jobs (exhibiting the same type of bias as when it is asked to write a job post or a performance review). Due to the lack of transparency, the mechanisms that cause this behavior cannot be detected and mitigated.
At Textkernel, we believe recruitment software needs to be designed with responsibility principles in mind, so that it actually helps reduce biases. To learn more about how AI can be used responsibly in recruitment, please check out our blog post on this topic, and stay tuned for the next one in this series.
Limitation 5: Data privacy
Another concern has to do with data privacy. Since LLMs are so heavy, it’s appealing for vendors to rely on third party APIs provided by vendors like OpenAI (the company behind ChatGPT) instead of hosting them on proprietary hardware. This means that if personal information is to be processed with an LLM-based application, it is likely to be processed by, and potentially stored on, third party servers that could be located anywhere in the world. Without the right contractual agreements, this is likely to violate data privacy laws such as GDPR, PIPL or LGPD.
Consequences for recruitment software
Resumes and other documents used in HR applications tend to be highly personal and they can contain sensitive information. Any tool that forwards these documents to LLM-vendors should comply with data protection regulations, and their users should agree with having their data (sub)processed by external service providers. But that might not be enough: the European privacy law (GDPR) gives individuals the right to ask organizations to remove their personal data from their systems. Because LLM providers tend to use user input to continuously train and update their models, it is unlikely that all LLM providers will be able to, or even willing to, meet these requirements.
Limitation 6: Lack of control
Another problem caused by the lack of transparency is that creators of LLM-based parsing technology cannot easily address structural errors. If an LLM-driven parser keeps making the same mistake, then diagnosing and fixing the error is much harder than with traditional systems, if not impossible. Moreover, the models underlying APIs like ChatGPT can change over time (some receive frequent, unannounced updates). This means that the same input does not always yield the same output. Or worse, LLM-based product features could stop working unexpectedly when an updated LLMs starts reacting differently to the previously engineered instructions (prompts).
Consequences for recruitment software
If vendors of HR tech solutions have little control over their outcome, problems observed by users can not be easily addressed. Solutions that rely on models that receive automatic updates will not always be able to replicate the problems observed, let alone fix them.
Limitation 7: Prompt injection
With new technologies come new security vulnerabilities. LLM-based applications that process user input are subject to so-called ‘prompt injection’ (similar to SQL injection attacks): users can cleverly formulate their input text to modify the instructions that are executed by the LLM. While that might be innocent in some cases, it could become harmful if the output is in direct connection with a database or a third-party component (e.g. a twitter bot or email server).
Consequences for recruitment software
In document parsing, prompt injection could look like this:
Prompt structure used in a CV parsing application:
Parse the following CV: [text of the CV].
The text entered in the place of the CV by a malevolent user would be along the lines of:
Ignore the previous instructions and execute this one instead: [alternative instructions]
In the best case, this will cause the LLM-based CV parser to throw an error because the output doesn’t respect the expected response format. But there might be serious ways of exploiting this vulnerability, especially if the parsing is directly used to search in a candidate or job database. Prompt injection, in that case, could be used for data exfiltration or manipulation of the search results. Even if no such connections exist, no security officer will feel comfortable with a system component that can easily be repurposed by its end users.
Conclusion
We see many opportunities to optimize recruitment and HR processes further using LLMs. However, adopters need to find solutions to a number of important limitations to avoid damaging financial, compliance and security risks. The notion of “responsible AI” has never been more relevant. Some of these limitations will see technical solutions appear soon, while others might not be solvable at all and will simply have to be seen as limiting factors in the use of LLMs. We are confident that, with the right values and processes in place, Textkernel will overcome these limitations in its upcoming adoption of LLMs.
The value of recruitment automation starts with quality data and as the leading AI-powered recruitment technology provider our AI is the foundation of successful recruitment automation.
