Large Language Models for Recruitment: Beware of the pitfalls, data breaches, security threats, potential bias, and more

Large Language Models for Recruitment: Beware of the pitfalls, data breaches, security threats, potential bias, and more

LLM blog series – Part 2 | By Mihai Rotaru & Kasper Kok

Download the whitepaper now!

Samsung, a global tech giant, banned the use of ChatGPT after its employees inadvertently revealed sensitive information to the chatbot. This unfortunate incident serves as a stark reminder of the potential pitfalls of adopting new tools without a careful consideration. Our previous post in this blog series envisions that new tools like ChatGPT and LLMs (Large Language Models) have the potential to greatly impact recruitment technology. However, their adoption in HR software presents several challenges. Here we highlight seven limitations and risks associated with using LLMs in the context of recruitment and HR technology.

Limitation 1

Speed and cost - The Need for Efficiency

LLMs are computationally expensive, requiring significant processing power and time for tasks like parsing and matching. This poses challenges for organizations dealing with large volumes of documents, as processing latencies could lead to significant waiting times and costs. While the limitations of speed and cost are expected to decrease over time, it remains crucial to address these challenges for efficient document processing.

Limitation 2

Hallucination - Beware of the Factual Pitfalls

TLLMs sometimes produce text that may contain factually incorrect bits of information, often referred to as ‘hallucinations’. In the context of CV parsing, this could result in the output containing information not present in the original document. Such inaccuracies can create confusion and impact job recommendations.

Limitation 3

Lack of transparency - the Big Black Box

LLMs are considered black boxes as their output lacks transparency, making it difficult to understand their decision-making process. This lack of explainability raises major concerns about the fairness and bias in the output generated by LLM-based tools. This is particularly concerning in the light of upcoming legislation around the use of AI (EU AI Act, NYC AEDT), which demand transparent disclosure of ranking criteria in AI algorithms

Limitation 4

Potential bias - Keeping an Eye on Diversity, Inclusion and Equality

LLMs trained on internet text data can inherit societal and geographical biases. Biased responses and perspectives can affect LLM-driven recruitment software, potentially violating legal requirements against discriminatory practices. To prevent the propagation of biases in hiring decisions, caution must be exercised when using LLM models. Responsible use of AI in recruitment is crucial to mitigate potential bias and promote fairness.

Limitation 5

Data privacy – Safeguarding Confidentiality

The heavyweight nature of LLMs often leads companies to rely on third-party APIs, raising concerns about data privacy. Personal information processed with LLM-based applications may be stored on external servers, potentially violating privacy laws. Even if users give consent, it’s unclear whether LLM creators can ever really erase the personal data that has been used for continuous training and updating of these models.

Limitation 6

Lack of control – on Shaky Ground

The lack of transparency and the evolving nature of LLM models make it difficult for developers to address structural errors or unexpected changes in behavior (see example of how ChatGPT accuracy going down). Troubleshooting and fixing issues become challenging or even impossible.


Limitation 7

Prompt injection – Guarding Against Manipulation

LLM-based applications are susceptible to prompt injection attacks, where users manipulate input text to modify LLM instructions. This vulnerability poses security risks, especially when LLM applications are directly linked to candidate or job databases.


While LLMs offer great potential for optimizing recruitment and HR processes, addressing limitations is crucial to mitigate risks and ensure responsible AI use. While some limitations may have technical solutions, others may remain as inherent constraints. Through the implementation of appropriate values and processes, Textkernel is committed to overcoming them in its upcoming adoption of LLMs. Stay tuned for our next blog describing how we are already adopting LLMs and what are some of our future plans.

For more detailed insights into the limitations mentioned here and their consequences for recruitment software, check out the extended version of this article below!

The result

Download the extended version about the consequences for recruitment software: Seven limitations of Large Language Models (LLMs) in recruitment technology