Recruiting and retaining talented employees is a constant priority for employers, but talent has quickly become one of US employers’ scarcest and most valuable commodities. Companies are finding it harder to protect their existing talent pool, let alone fill roles that remain open. These trends are broadly putting business performance at risk.

As we approach 2023, labor shortages and resignations continue to impact the US economy and job market. According to the US Bureau of Labor Statistics (BLS), 10.7 million positions remained open in June 2022 while roughly 4.2 million employees resigned; job vacancies remain high as talent demand continues to outweigh supply (US Job Openings from Bloomberg, August 2022).


How are US companies coping with labor shortages?

US companies have had to change course as labor shortages persist. Many companies have shifted some focus from recruiting new employees to retaining their existing talent as they find themselves competing for talent on all fronts.

As a result, demand for talent recruiters themselves is high, among other roles. There are currently more than 40,000 unique job openings for recruiters in the US, and demand for recruiters made up 0.4% of the 10.7 million US jobs available in June.

The competitive nature of the labor market has also increased recruiters’ workloads, resulting in a more widespread desire to resign. A recent report shows that 77% of high-ranking recruiters are open to changing jobs (Recruiters Are Burned Out article from Bloomberg, July 2022).


Rising demand for compensation and benefits specialists

Companies now recognize that in the tight labor market, a more specialized approach is necessary. They are aligning their management strategies with labor trends to better retain employees as a result. This includes adjusting internal compensation, benefits, and mobility tactics to retain employees who might otherwise seek out better jobs.

In fact, demand for compensation and benefits specialists rose to 23,500 jobs in Q2 2022, an increase of 32% compared to Q2 2021, according to Textkernel’s Jobfeed tool (Figure 1). Demand for these roles has also outpaced demand for other jobs since Q3 2021.

A closer look at the demand for employees in these categories shows that the most in-demand jobs in Q2 2022 were Benefits Manager and Benefits Specialist roles (Figure 2). This reinforces the idea that companies have prioritized benefits as a way to attract and retain talent.

Profession                               2022-Q2
Benefits Manager                           8,032
Benefits Specialist                        4,884
Compensation Manager                       3,739
Benefits Analyst                           2,352
Benefits Consultant                        1,626
Benefits Coordinator                       1,215
Director of Benefits and Compensation        919
Compensation Consultant                      820
Comp & Ben jobs (total)                   23,587

The highest demand for these benefits roles appears to correlate with the level of competition in certain industries. The Professional, Scientific, and Technical Services field has the highest demand for compensation and benefits jobs (Figure 3), followed by the Finance and Insurance industry, which likewise has a high number of vacancies for new compensation and benefits staff.

These industries in particular have a highly skilled labor base that’s difficult to acquire under normal circumstances. This gives skilled workers an even stronger upper hand in demanding better compensation and benefits.  Additional reports show that US companies will raise employee pay by an average of 4.1% over the next year (CFODive, August 2022).

US companies are also taking steps to stay competitive by making work more flexible for employees. While 86% of US companies are hiring employees at the higher end of salary ranges to attract talent, 84% are also increasing work location flexibility to help retain workers (WTW, August 2022).


Offering more generous benefits

Leading companies are demonstrating their resolve in these areas. PwC, one of the world’s largest accounting firms, is offering more generous benefits in response to these challenges. The firm recently announced it will invest $2.4 billion to retain staff and compete amid a shortage of accountants and ongoing turnover. Personnel at PwC can now take 12 weeks of paid parental leave and choose from a more streamlined menu of benefits, among other options (NA Employers Rethinking Work and Reward Programs from Bloomberg Tax, May 2022).

To implement these new compensation and benefits policies, PwC has posted a number of job openings online, especially for Compensation and Benefits specialist positions. In fact, this is part of a broader trend: because the accounting industry has been particularly hard hit by labor shortages, all of the Big Four accounting firms (PwC, Deloitte, KPMG, and EY) have many openings for Compensation and Benefits specialist positions.


Making data-driven decisions

Organizations challenged with recruiting and retaining talent may benefit from taking action to address the labor trends discussed above. One effective way to accomplish this is to use data to better understand the labor market. This approach enables a firm to stay competitive, differentiate itself in the hiring market, retain employees, and attract new candidates.

Summary

Column CVs are visually appealing and are becoming widely used by candidates. We estimate that currently at least 15% of CV documents use a column layout. However, properly dealing with this layout is a surprisingly difficult computer vision problem. Since third party tools do not work well on CVs or are very slow, Textkernel already had a system in place to deal with column layout documents. We have greatly improved this system by applying various AI techniques. As a result, our handling of column CVs in PDF format has improved significantly, resulting in better extraction quality regardless of the document language.

Intro

The first step in an information extraction pipeline is to convert documents into raw text from which information can be extracted.

The system’s ability to perform well in this first step is crucial: any mistake will impact the performance of subsequent steps. Generating a well-rendered text representation for many different types of documents is a difficult problem to solve.

A simple method that renders the text in top-down, left-to-right order is usually sufficient for documents with a standard layout.

Standard layout CV
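The naive rendering order can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical representation where each text block is an (x, y, text) tuple with y growing downwards; it is not Textkernel's actual data model.

```python
# A minimal sketch of naive top-down, left-to-right rendering.
# Each text block is a hypothetical (x, y, text) tuple, y growing downwards.

def naive_render(blocks):
    """Render text blocks by sorting on vertical, then horizontal position."""
    ordered = sorted(blocks, key=lambda b: (b[1], b[0]))
    return " ".join(text for _, _, text in ordered)

blocks = [
    (0, 0, "John Doe"),
    (0, 20, "Experience"),
    (200, 20, "Contact"),           # sidebar content on the same visual line
    (0, 40, "Software Engineer"),
    (200, 40, "john@example.com"),
]
print(naive_render(blocks))
# The sidebar text gets interleaved with the main column.
```

On a standard layout this ordering is exactly the reading order; the interleaving problem only appears once columns come into play, as the next section shows.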

However, CVs come in various layouts, which are easy for humans to understand but can be challenging for a machine.

A common layout in CV documents is the use of columns. Column CVs are visually appealing and widely used by candidates applying for a job. Candidates want to neatly organize the information in their CV and provide visual structure, for example with a sidebar that contains their contact information.

If a system were to use the basic left-to-right, top-down rendering for this type of document, it would produce text in which the information from different sections of the CV is mixed together (see image aside).

Instead of reading the columns one after the other, the system would mix bits and pieces of each column together.

Column layout CV

An imperfect text rendering can still be useful for certain tasks: searching for keywords is still possible, and humans can still easily read the document. 

But when automated systems try to extract structured information from an imperfect rendering, problems compound very quickly: finding the correct information becomes incredibly challenging.

At Textkernel, we strive to offer the best parsing quality on the market, which means that the widespread use of column based layouts demands our full attention. Keep reading to follow us on our journey to create a system that can understand creative document layouts and see how we were able to leverage machine learning to bring our Extract! product to the next level.


Our Previous Approach

Our system was already able to handle several types of document layouts, identifying sections of a document that should be rendered independently.

The approach has three steps. In the first step, the text content of the PDF is scanned and the visual gaps between pieces of text are identified (see the examples below). In the second step, a rule-based system decides whether each visual gap is a column separator. As the examples below show, not all visual gaps are column separators, and the left-to-right reading should not be interrupted for those gaps. In the third step, the text is rendered based on these predictions, with all identified columns separated.


Visual gaps (in red) identified in standard layout CV
Visual gaps (in red) identified in column layout CV
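The decision in step two can be sketched as a small rule function. The features and thresholds below are hypothetical illustrations of what such rules might look like, not Textkernel's actual rule base.

```python
# Illustrative sketch of step two: a rule-based decision on whether a
# vertical visual gap is a column separator. Thresholds are made up.

def is_column_separator(gap_width, gap_height, page_height):
    """Classify a visual gap with simple hand-written rules."""
    if gap_width < 15:                      # narrow gaps are normal word spacing
        return False
    if gap_height / page_height < 0.5:      # short gaps rarely span a column
        return False
    return True

print(is_column_separator(gap_width=40, gap_height=700, page_height=800))   # True
print(is_column_separator(gap_width=40, gap_height=100, page_height=800))   # False
```

Even this toy version shows why the approach is hard to maintain: every new layout case means another threshold or branch, and the rules interact.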

A naive approach that always renders the big visual gaps separately would have issues with several types of layouts. For example, in a key-value structured layout it would break the key away from the value in the text representation, leading to incorrect extraction of fields.

Key-Value structured layout

Visual gaps (in red) in Key-Value structured layout

Our system achieved good rendering in many cases but still failed to predict certain column separators. By design, the system was very precise when predicting that a visual gap is a column separator (i.e. precision of the positive class is very high), the rationale being that predicting a column separator where there is none (a false positive) is very costly: the rendered text will be wrong, and parsing quality suffers as a result. To achieve this high precision, its coverage was more limited (i.e. precision of the positive class was favored over recall). The system is also very fast (tens of milliseconds), making it quite an efficient solution.

Improving such a system requires a model-centric approach: we have to focus our efforts on changing the code. For example, increasing the coverage of supported cases is very difficult. When we encounter a new case, we need to implement a new rule for it, make sure it is compatible with the rest of the rule base, and choose how the rules should be applied and combined. Complexity grows quickly as we add more rules.

Ideally we would like our solution to be data-centric, so we can improve its performance by collecting examples of how the system should behave, focusing our attention on curating and improving the example data. We would also like a solution that preserves our processing speed.


The first improvement trial

We analyzed several third party solutions that might help us improve our system, without going through all the difficulties of managing a rule-based system. 

Most of these systems apply computer vision methods to extract text from an image representation of the document. These require computationally expensive algorithms and are therefore quite slow (on the order of seconds), as well as difficult to manage for on-premise installations. We were also surprised to see that their performance was not much better than our existing rule-based approach. Therefore, we abandoned the third-party track.

Since we are focusing on improving our column handling, we don’t need to identify all the gaps in the text: only the larger vertical visual gaps should correspond to columns. With these simplifying assumptions, we came up with a new method to detect the largest vertical visual gaps from a histogram of the whitespace in the image representation of the document, as can be seen in the images below.

Whitespace histogram of standard layout CV
Whitespace histogram of column layout CV
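The whitespace histogram itself is cheap to compute. Here is a minimal sketch on a toy binarized page array (1 = ink, 0 = background); the real pipeline would of course operate on rendered document images, and the array here is only a stand-in.

```python
import numpy as np

# Toy binarized page: two columns of "ink" separated by blank space.
page = np.zeros((100, 60), dtype=np.uint8)
page[:, 5:25] = 1        # left column text
page[:, 35:55] = 1       # right column text

# Count blank pixels per x position; a deep, wide valley that spans the
# full page height hints at a vertical column gap.
whitespace_per_column = (page == 0).sum(axis=0)
gap_columns = np.where(whitespace_per_column == page.shape[0])[0]
print(gap_columns)       # x positions that are entirely blank
```

A column layout shows up as a tall plateau in the middle of this profile, while a standard layout has no full-height valley between text regions, which is the visual difference the classifier exploits.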

Looking at this representation, we can see a clear difference between the two types of layouts in terms of whitespace distribution, and we used it to train a neural network model to classify column layouts versus regular layouts.

Note that this method does not meet all our requirements: we still don’t have the coordinates needed to separate the column content. In addition, we noticed that processing speed would become an issue if we continued down this track.

Given the effort still needed to get this method to a usable state, we took a step back and returned to the drawing board.


Our New Approach

We already stated that in our ideal scenario we would be able to improve our system by feeding it good quality data. How can we move from our model centric approach into a data centric approach?

At the core of our solution there is a single type of decision: deciding whether a visual gap separates related or unrelated content (i.e. whether it is a column separator). This is a binary classification problem, for which we can train a machine learning model to replicate the decision.

By making use of our rule-based system we can generate our training data, converting our rules into features and using the system’s output decisions as the labels we want our new model to learn. This lets us focus on improving the collection and curation of training data and easily retrain the model every time we want to improve it, instead of adding more rules to our code base.
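The conversion of a rule decision into a training example can be sketched as follows. The feature names and the toy rule system are hypothetical illustrations of the idea, not Textkernel's actual features.

```python
# Sketch: each visual gap becomes a feature vector, and the rule-based
# system's own decision becomes the (pseudo) label. Names are made up.

def gap_to_example(gap, rule_system):
    features = {
        "gap_width": gap["width"],
        "gap_height_ratio": gap["height"] / gap["page_height"],
        "x_position": gap["x"] / gap["page_width"],
    }
    label = rule_system(gap)   # pseudo-label taken from the existing rules
    return features, label

# A stand-in rule system for demonstration purposes.
rule_system = lambda g: g["width"] > 20 and g["height"] / g["page_height"] > 0.5
gap = {"width": 40, "height": 700, "page_height": 800, "x": 300, "page_width": 600}
features, label = gap_to_example(gap, rule_system)
print(features, label)
```

Running every gap in the document collection through such a function yields the pseudo-labeled dataset described in the next sections.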

We have a new approach and we need to validate it. For that we follow the model development pipeline:

Model Development Pipeline

Data Selection

We start by selecting the data for training our machine learning model. Unlike a rule-based system, which needs a few hundred examples to develop and test the rules, we will need several thousand examples to train our model.

We started with problematic documents that our customers kindly shared with us in their feedback. However, this set was quite small (about 200 documents). How can we find thousands more column CVs when they only account for about 10-15% of documents? Luckily, from our initial attempt we have a neural network based column classifier. Although not sufficient to replace our old rule-based system, it is a great method for mining documents with a column layout. Even if this classifier is not 100% accurate, it is still much better than randomly selecting documents (which would yield column CVs only 10-15% of the time). In addition, we also collected a random sample of documents to make sure our method works well across all layouts (i.e. to ensure we do not break the rendering of layouts that already work correctly).


Generation of the Dataset

To generate our dataset, we process our document sets through our existing rendering pipeline. For each visual gap, the target label is initially set to the decision made by our rule-based system. We bootstrapped the features from the variables and rules computed in this decision. In addition, we added several new features that better quantify some of the properties of column layouts.


Manual Annotation

In the previous step we generated a pseudo-labeled dataset: the labels originate from our existing system and are not verified by a human. To ensure that our machine learning model does not simply learn to reproduce the mistakes of the rule-based system, we also manually annotated a small sample of column CVs. Since this is a time-consuming task, having potential column CVs already identified by our neural network based column classifier helped speed up the annotation process.


Model Training

We can now train a machine learning model to mimic our rule-based system’s decisions. We started our experiments with the decision tree algorithm. Decision trees are simple to apply to our dataset and very effective, offering good classification performance while remaining very fast to run, a key characteristic we wanted in our approach.

However, decision trees have several problems: they are prone to overfitting and suffer from bias and variance errors, which results in unreliable predictions on new data. This can be improved by combining several decision tree models into an ensemble, which yields better predictions on previously unseen data.

There are several ways to achieve this. One popular method is bagging, where several models are trained in parallel on subsets of the data; an example is the random forest. Another ensemble method is boosting, where models are trained sequentially, each model correcting the mistakes of the previous one; an example is the gradient boosting algorithm.

After testing a few options, we settled on a gradient boosting method.
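Training such a model is straightforward with standard tooling. The sketch below uses scikit-learn on synthetic data as an illustration; the article does not name the library actually used, and the toy labels merely imitate a simple "column separator" rule.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the pseudo-labeled gap dataset:
# three features per gap (e.g. width, height ratio, x position).
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = (X[:, 0] > 0.5) & (X[:, 1] > 0.5)      # toy "column separator" rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

Because inference is just a sequence of shallow tree lookups, a model like this preserves the millisecond-level speed requirement that ruled out the computer vision alternatives.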


Efficient Label Correction

Our new model was mostly trained to reproduce the decisions of our rule-based system, because most of its training data comes from pseudo-labeled examples. The limited human annotations also make it difficult to do error analysis and identify the cases where the new model misbehaves.

Even so, the small sample of manually annotated column CV documents can already shift the decisions in informative ways. The discrepancies between the predictions of the new method and the rule-based system can therefore be analyzed and corrected manually. We call this approach delta annotation: an effective process of labeling only the data that will push the model into performing better.
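At its core, delta annotation is just a filter over the two systems' predictions. The sketch below illustrates the idea with stand-in predictors; the real pipeline would compare the gradient boosting model against the rule-based system on real visual gaps.

```python
# Sketch of delta annotation: only examples where the old system and the
# new model disagree are sent to human annotators.

def select_for_annotation(examples, old_system, new_model):
    """Return only the examples on which the two predictors disagree."""
    return [ex for ex in examples if old_system(ex) != new_model(ex)]

# Stand-in predictors over toy examples.
examples = [1, 2, 3, 4, 5]
old_system = lambda x: x % 2 == 0
new_model = lambda x: x > 2
print(select_for_annotation(examples, old_system, new_model))
# → [2, 3, 5]
```

Annotator effort is spent exactly where a label can change the model's behavior, which is why two iterations were enough to saturate the differences.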

At Textkernel we are always looking for ways to deliver the best quality parsing. Having quality data is essential for what we do, so of course, we already have implemented great solutions for this using tools such as Prodigy to facilitate rapid iteration over our data.

Annotation of visual gaps in CV using Prodigy

With this partially corrected dataset, we can retrain our model, and we can keep iterating and improving our dataset by doing delta annotation between the latest model and the older ones. In our case, two iterations were enough to saturate the differences and reach good performance at the visual gap level.

This enables us to follow a data-centric approach: we can focus on systematically improving our data in order to improve the performance of our model.


Evaluation

We have a new approach that is more flexible than before, but we still face a big challenge. How can we be sure that better decisions at the visual gap level translate into an overall improvement in rendering at the document level (recall that a document can have multiple visual gaps)? Even more importantly, does this translate into extraction quality improvements? If we want to be confident in our solution, we need to evaluate our system at multiple levels.

First, we did a model evaluation to see whether we make better decisions at the visual gap level. For this, we can simply use our blind test set and compare the performance of our new model with the old one. On more than 600 visual gaps, our new model makes the right decision in 91% of cases, as opposed to only 82% for our old rule-based system. However, not all visual gaps are equally important, and some matter more than others: in our case, the visual gaps corresponding to columns are the most important to get right. For this important subset, we see a performance increase from 60% to 82%. In other words, we have more than halved the errors we used to make!

Second, we looked at whether the improvement in visual gap classification translates into better rendering (recall that a document might contain multiple visual gaps). In other words, are we doing a better job of not mixing sections in column CVs? Since multiple renderings can be correct, it is hard to annotate a single “correct” rendering (which would have allowed us to compute rendering performance automatically). Therefore, we performed a subjective evaluation of the rendering. Using our trusty Prodigy tool, we displayed the renderings of the new and the old system side by side to our annotators (without them knowing which side was which). The annotators evaluated whether the text is now better separated, worse, or roughly the same as before. The results on a set of about 700 CVs are really good: well-rendered CVs increased from 62% to 90%.

Finally, we looked at whether better rendering translates into better parsing. We knew that in column CVs where the old system was failing, our parser would sometimes extract less information, in particular contact information such as name, phone numbers and address. Thus, the least labor-intensive check is simply whether the fill rates increase. On more than 12,000 random CVs, we see that the contact information fill rates increase by 4% to 10% absolute. But more does not necessarily mean better! So we also invested in evaluating more than 1,000 differences between our parser using the old system and our parser using the new system. The results in the figure below show the percentage of errors our new system has fixed. This is our final confirmation that we now have a better parser in our hands. Great job, team!

Error reduction in contact information

To summarize our improvements:


Conclusions

Our extraction quality on column CVs is now better than ever. By leveraging machine learning to replace our rule-based system we can now correctly parse an even wider range of CV layouts.

Our main takeaways from this project are:

Don’t miss out on the great candidates that make use of these layouts!



About The Author

Ricardo Quintas has been working at Textkernel for 4 years as Machine Learning Tech Lead.

Ricardo Quintas


Every year for the past 9 years, Textkernel has dedicated a week to innovation, turning the company into an incubator for internal mini startups. This year, Innovation Week was bigger and better: 10 innovative projects were pitched and approved, with almost 100 team members participating in a week full of ideation, cooperation and great entrepreneurial spirit. Great minds connecting!

The concept of Innovation Week is simple: anyone across the company can pitch an idea during a pitch session. Company members then have several days to vote for their preferred ideas and to indicate their availability to participate. Finally, the team captains select the members of their team and set out to create a working proof of concept to present against all the other ideas.

It’s about diversity and disruption

“Innovation Week is about disruption,” says Mihai Rotaru, Head of R&D and co-organizer of the event. “It’s about looking at customer problems from a different point of view, it’s about exploring new technologies, it’s about going out of your comfort zone.” And he adds, “The mix of colleagues from different departments is the core of success in Innovation Week.” This is how teams bring together all the necessary skills and enough variety in perspectives for innovations.

Facilitating a bottom-up culture

Innovation Week has been held every year since 2013 (with a pandemic break in 2020) and it is firmly embedded in Textkernel’s corporate culture. “Innovation Week is important because it brings people together and it allows innovation bottom-up,” says Textkernel CEO Gerard Mulder. And that is felt throughout the entire company. “Knowing that the company cares about our ideas truly has a great impact,” says Hope Natell, Sales Manager North America.

Innovation Week brings people from various backgrounds and different departments together in order to work on a product prototype.

The grand finale 

It is at the end of the week that all ideas are shared with the company, in the form of a product pitch in front of all colleagues. This year, the jury and the Textkernel employees chose Umut Can Ozyar’s project “Journey” as the winner.

Umut describes his team’s winning product as an “AI assistant for more efficiency and a personalized candidate engagement throughout the recruitment process”. His team’s goal was to improve a recruiter’s life with the help of AI-based technology. “Our product can suggest targeted emails, messages, and interview questions unique for each job and candidate, accentuating their best qualities,” explains Umut. And who knows, maybe it will make its way onto the Textkernel roadmap soon.

Alicia Krebs, Doris Hoogeveen, Alexander Antipin, Rasheed Musa, Umut Can Özyar, Sebastiaan Pasterkamp, Panos Alexopoulos & Michael Burggraf won Innovation Week 2022 with their outstanding project “Journey”

After the presentations, Textkernel colleagues from offices around the world – the Netherlands, France, Germany, the United Kingdom and the United States – celebrated the closing of the event. After all, it is not only about the projects but also about a lot of fun and team spirit within the company. Or, as Textkernel CEO Gerard Mulder puts it: “It’s the best event of the year.”

Jobfeed, Textkernel’s labor market intelligence tool, is renowned for its accurate and up-to-date market insights, which can help recruiters make critical talent decisions. It’s not all about optimizing the hiring process, however. Jobfeed data has also been leveraged in the datasets of several research and policymaker institutions in the Netherlands and across the EU. 

At Textkernel, we pride ourselves on developing products that deliver and maintain exceptional accuracy levels. We recognize that Jobfeed data has a critical role to play in helping businesses, educational institutes, non-profits, and government organizations make important decisions. 

That’s why we constantly monitor Jobfeed data. In recent months, we noticed a disparity in our reports in comparison to independent economic indicators. When we started to look into why the numbers were off, we discovered a few interesting job posting trends, which we’ll discuss below. 


A steeper uptrend than expected

Over the past few years, Jobfeed has reported a significant increase in the total number of job openings. That’s not surprising, as labor shortages have made it difficult for recruiters to fill many positions. 

However, the uptrend reported by Jobfeed seemed too steep when compared to independent economic indicators.  While investigating this difference, we realized that the tight labor market had led to a few changes in job posting behavior that were impacting the Jobfeed data, including: 

Both of these developments have led to an increase in the number of job posts identified by Jobfeed. The rise of new aggregators in particular has contributed to this trend, since the many changes aggregators make to job posts when copying them can make them difficult to deduplicate. 

Currently, the smart algorithm we use to deduplicate job posts is identifying on average 5 posts for each unique job in the Netherlands. After examining the new market conditions that arose after Covid-19, however, we realized that the algorithm was sometimes failing to take into account these new tactics. 
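Deduplication of aggregator copies is essentially a near-duplicate detection problem. The sketch below illustrates the general idea with a simple normalized-text fingerprint; Jobfeed's actual algorithm is smarter than this (which is precisely why it averages 5 posts per unique job), and the example posts are invented.

```python
import hashlib
import re

# Illustrative near-duplicate grouping by normalized-text fingerprint.
# Real deduplication must be far more robust to aggregator rewrites.

def fingerprint(posting: str) -> str:
    """Lowercase, strip punctuation, and hash the remaining words."""
    text = re.sub(r"\W+", " ", posting.lower()).strip()
    return hashlib.sha1(text.encode()).hexdigest()

posts = [
    "Senior Accountant - Amsterdam",
    "SENIOR ACCOUNTANT, Amsterdam!",   # aggregator copy with cosmetic edits
    "Junior Developer - Utrecht",
]
unique = {fingerprint(p) for p in posts}
print(len(unique))   # → 2: the two accountant posts collapse into one
```

Aggregators that reword titles, truncate descriptions, or change locations defeat such exact fingerprints, which is why those posts slip through as seemingly unique jobs.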


Textkernel’s approach

In order for recruiters to implement the best hiring practices, and for research and policymaker institutions to accurately understand employers’ future demand for skills, it’s critical that Jobfeed data reflects the actual job market as closely as possible. That’s why we’re working to take these market changes into account and are in the process of normalizing past labor market data by removing these duplicate job postings. 

Of course, this needs to be done carefully to avoid deduplicating too many posts. To make sure we achieve this, we’ll undertake the following actions:

  1. Remove all jobs from specific aggregators that don’t contribute to the completeness and quality of the data in Jobfeed. 
  2. Identify and remove low-quality content from job boards and other websites, focusing on shortened and/or unreadable text. 
  3. Improve handling of single jobs that are posted for multiple locations on job boards. 

The first action will account for most of the corrections we plan to make. We’ll update all data from 2019 onward, and in order to provide immediate improvements, we’ll also remove specific aggregators starting the week of August 15th. The estimated reduction in the number of unique jobs ranges from 8% in 2019 to 23% in 2022.

While we’re currently investigating the best approach for the remaining two actions, we expect their impact on the normalization of the data to be smaller, since aggregators appear to be causing the bulk of the issues. 

Once the improvements are made, they’ll automatically become visible in the Jobfeed Portal and API, giving you immediate access to the most up-to-date data. If you’re using a data feed and would like to receive a refresh of the data, please contact your account manager for help.

Please feel free to reach out to marketing@textkernel.nl if you have any questions. Accuracy is the cornerstone of our service, and we remain committed to delivering the best solutions and improving our data every way we can.

Textkernel has made a strategic combination with best-of-breed staffing app and portal specialist Akyla.

Akyla marks the second step in Textkernel’s international buy-and-build strategy since its management teamed up with strategic software investor Main Capital Partners (“Main”) in October 2020. Last year, Textkernel successfully acquired U.S.-based competitor Sovren to solidify the group’s position as a global market leader in AI-driven parsing and search-and-match technology.

Like Textkernel, Akyla is considered a true best-of-breed solutions provider in the HR software market. Akyla is a provider of flexible mid-office working platform solutions that enable automated recruitment, selection and efficient management of flex workers. The company offers two innovative solutions (e-UUR and Xplican) that assist customers with administrative processes involved in the management of flex workers, including onboarding, hourly registration, time interpretation, digital signing and vendor management.

The organizations foresee opportunities for a strong and unique combined product proposition that will competitively position the combination in the market. Notably, by gathering richer and more actionable data, the combination will improve the effectiveness of the search & match algorithms of Textkernel and empower staffing organizations to more effectively match candidates and jobs at the right time automatically. Candidates will enjoy a more tailored and suitable offering of potential jobs, which should lead to higher redeployment and placement rates for staffing agencies and a higher degree of job satisfaction and employee productivity for flex workers, while lowering the sourcing costs and efforts of staffing agencies.

Together, Akyla and Textkernel serve a combined customer base of more than 2,500 organizations, including staffing organizations, payrollers, corporates, job boards, HR solutions providers and other participants in the broader HR market.

Martin Schievink, CEO of Akyla, is excited to join forces with the internationally experienced Textkernel team and looking forward to the cooperation: “Textkernel is an excellent strategic match for Akyla. We share similar cultures and ambitions to help staffing organizations around the globe with our propositions.”

Gerard Mulder, CEO of Textkernel, foresees a fruitful strategic partnership with strong potential to offer a value-added proposition together with Akyla to staffing organizations and software partners across international markets.

“While exploring the opportunity for cooperation, the response to our ideas from customers and partners was nothing but extremely positive. That feedback, combined with our very similar cultures and go-to-market strategies and Akyla’s wish to become more internationally active, strengthened our belief that joining forces will accelerate the growth of both companies significantly.”

Gerard Mulder, CEO of Textkernel

Main Capital has long been in contact with the leadership of Akyla and envisions a productive strategic combination that could bring sustainable competitive advantage, according to Pieter van Bodegraven, Partner at Main and Chair of the Supervisory Board of Textkernel: “We strongly believe in putting together driven and passionate entrepreneurs to accelerate innovation for the benefit of their clients. Over the past 20 years, this has been a key value creation driver in the successful organic and buy-and-build growth strategies we have executed together with our business partners. With Akyla and Textkernel, we combine two organizations that are both renowned for their skills and expertise within their respective adjacent domains of the HR ecosystem.”

For more information and an FAQ about the acquisition, visit our website.  

About us

Textkernel

Textkernel is an international leader in AI-driven solutions for parsing, data enrichment and matching people and jobs. Textkernel enables thousands of recruitment & staffing agencies, employers, job boards, HR software vendors and outplacement & redeployment agencies worldwide to work smarter and more effectively by creating efficiencies in the HR and recruitment process. Textkernel is headquartered in Amsterdam, with satellite offices in Frankfurt, Paris and the United States. Including Akyla, the group employs ca. 175 people.

Akyla

Akyla is a provider of flexible mid-office working platform solutions that enable automated recruitment, selection and efficient management of flex workers. The company offers innovative solutions that assist customers with all administrative processes involved in the management of flex workers. Headed by its co-founders, Akyla’s ca. 30 employees serve a loyal customer base of more than 200 staffing agencies, payrollers and HR services providers across the Benelux and Nordics regions.

Main Capital Partners

Main Capital Partners is a leading software investor in the Benelux, DACH and the Nordics. Main has almost 20 years of experience in strengthening software companies and works closely together with the management teams of its portfolio companies as a strategic partner, in order to realise sustainable growth and build excellent software groups. Main counts over 45 employees and has offices in The Hague, Stockholm, and Düsseldorf. As of October 2021, Main had over 2.2 billion euros under management and had invested in more than 120 software companies. These companies create jobs for approximately 4,000 employees.

We have exciting news to share!

Textkernel acquires Akyla! We have compiled a list of questions that we anticipate our customers, partners and other stakeholders in Akyla may have.

What is the news?

Textkernel is proud to announce that it has acquired Akyla, a Dutch software company that develops and licenses time-management, onboarding and compliance solutions for the staffing market.

Who is Akyla?

Akyla B.V. is a Dutch software company founded in 1999 by Martin Schievink and Bart van Borssum Waalkes, located in Enschede, the Netherlands.

Why is Textkernel acquiring Akyla?

In today’s tight labour market, one of the biggest challenges facing staffing agencies is attracting and retaining talent. As a company, our focus has primarily been on helping customers attract and recruit talent better.

The acquisition of Akyla allows us to extend our staffing solutions to deliver more value to our customers beyond the recruitment stage. As a combined business, Akyla and Textkernel will be able to offer solutions that better remove friction in the recruitment, onboarding, time management and retention of employees on assignment, and more effectively help in redeploying talent.

By bringing Akyla into the Textkernel business, we are able to help Akyla expand internationally more rapidly and offer its technology in multiple languages to our customers. In return, access to wider labour data insights will support our combined future product development roadmap.

The synergies between the companies were obvious: we share a focus on accuracy, reliability and innovation. In addition, we share mutual relationships with other software vendors, and our ways of working and how we make our products available to the market are very similar.

In what regions does Akyla operate?

Akyla has customers in the Netherlands, Belgium and the Nordics.

What’s the added business value of combining Akyla and Textkernel?

The combined business allows us to achieve our growth potential faster. Together we are able to combine our expertise to better serve our customers’ growing needs, and we have increased access to wider labour data and insights.

How does the acquisition benefit Textkernel customers?

Staffing agencies are looking for automated and integrated solutions to help them succeed in the tight labour market. Customers that have both Textkernel and Akyla solutions will enjoy a truly integrated solution that combines the best of both worlds, with deeper insights into the labour market and a combined product offering.

Customers using only Textkernel will have access to an expanded product solution from Textkernel, which combines our capabilities of turning data into knowledge with Akyla’s vast data access. We expect that our customers will enjoy better matching results that are tailored to their processes, and because we have access to data that lives outside their front-office application, we can ensure more relevant matches at the right time.

Additionally, we will be able to help them automate their redeployment process by pulling candidate data from back-office data sources, ensuring higher quality service to their end-customers and employees, quicker and better placements, higher redeployment rates and lower sourcing costs.

How does this acquisition benefit Akyla customers?

Staffing agencies are looking for automated and integrated solutions to help them succeed in the tight labour market. Customers that have both Textkernel and Akyla solutions will enjoy a truly integrated solution that combines the best of both worlds, with deeper insights into the labour market and a combined product offering.

Customers using only Akyla can look forward to a wider range of solutions from Akyla, starting with Textkernel solutions that seamlessly integrate with Akyla and then extending to our full best-in-class range of solutions.

The acquisition also allows us to better support international customers, or customers with multinational ambitions, while ensuring product enhancements and a combined product roadmap.  

Additionally, with the combined solution, we will be able to help them automate their redeployment process by pulling candidate data from back-office data sources, ensuring higher quality service to their end-customers and employees, quicker and better placements, higher redeployment rates and lower sourcing costs.

Are there any planned organisational changes?

No! Akyla will work as an operating unit within the Textkernel Group for the immediate future. Akyla customers will continue to be serviced by existing Akyla contacts and Textkernel will be adding highly qualified people to make sure both Akyla and Textkernel customers can expect the best possible service going forward. 

Will the Akyla management team remain with Textkernel?

Yes! All Akyla employees, including those holding managerial positions and the founders, will remain with the company. 

What about the security of your solutions?

Textkernel has reviewed Akyla’s security policies and has concluded that its security is at a good level. Textkernel will support Akyla in becoming ISO 27001 certified and in further improving its IT security standards.

How can I buy Akyla or Textkernel solutions?

Please reach out to your account manager who will be happy to assist you with purchasing Akyla or Textkernel Solutions.

Should you have any other questions, do not hesitate to reach out to your account manager, marketing, or support through the usual channels.