Speed and cost - The Need for Efficiency
LLMs are computationally expensive, requiring significant processing power and time for tasks like parsing and matching. This poses challenges for organizations dealing with large volumes of documents, as processing latencies could lead to significant waiting times and costs. While the limitations of speed and cost are expected to decrease over time, it remains crucial to address these challenges for efficient document processing.
Hallucination - Beware of the Factual Pitfalls
TLLMs sometimes produce text that may contain factually incorrect bits of information, often referred to as ‘hallucinations’. In the context of CV parsing, this could result in the output containing information not present in the original document. Such inaccuracies can create confusion and impact job recommendations.
Lack of transparency - the Big Black Box
LLMs are considered black boxes as their output lacks transparency, making it difficult to understand their decision-making process. This lack of explainability raises major concerns about the fairness and bias in the output generated by LLM-based tools. This is particularly concerning in the light of upcoming legislation around the use of AI (EU AI Act, NYC AEDT), which demand transparent disclosure of ranking criteria in AI algorithms
Potential bias - Keeping an Eye on Diversity, Inclusion and Equality
LLMs trained on internet text data can inherit societal and geographical biases. Biased responses and perspectives can affect LLM-driven recruitment software, potentially violating legal requirements against discriminatory practices. To prevent the propagation of biases in hiring decisions, caution must be exercised when using LLM models. Responsible use of AI in recruitment is crucial to mitigate potential bias and promote fairness.
Data privacy – Safeguarding Confidentiality
The heavyweight nature of LLMs often leads companies to rely on third-party APIs, raising concerns about data privacy. Personal information processed with LLM-based applications may be stored on external servers, potentially violating privacy laws. Even if users give consent, it’s unclear whether LLM creators can ever really erase the personal data that has been used for continuous training and updating of these models.
Lack of control – on Shaky Ground
The lack of transparency and the evolving nature of LLM models make it difficult for developers to address structural errors or unexpected changes in behavior (see example of how ChatGPT accuracy going down). Troubleshooting and fixing issues become challenging or even impossible.
Prompt injection – Guarding Against Manipulation
LLM-based applications are susceptible to prompt injection attacks, where users manipulate input text to modify LLM instructions. This vulnerability poses security risks, especially when LLM applications are directly linked to candidate or job databases.