Deep Learning is the next big thing in language technology and is considered to be the future of Artificial Intelligence. Deep Learning is a set of algorithms used on large amounts of data to automatically learn how to represent information, such as learning the meaning of new words or recognising shapes in images.
Deep Learning and Natural Language Processing
While Deep Learning has been extremely successful in the field of image recognition, applications in natural language processing are still in their infancy. Textkernel, a specialist in semantic recruitment technology, is one of the first companies in its field to integrate Deep Learning into language technology. “Deep Learning gives us the power to quickly develop accurate models without a lot of supervised training”, says Jakub Zavrel, CEO and co-founder of Textkernel. “While our current resume parsing models already obtain very high accuracy rates on average, Deep Learning can make recognition even more robust and give better results on difficult cases.”
Google and Facebook
Companies like Google and Facebook have been making enormous progress with Deep Learning technology. Google’s systems (“Google Brain”) learned to automatically recognise a cat from millions of unlabelled images, without having been trained to do so. Facebook’s deep-learning researchers recently demonstrated face-processing software that comes close to matching human performance.
In the case of natural language technology, applying Deep Learning looks promising too. In the first stage of its research, Textkernel was able to significantly improve the accuracy of its English and French CV parsing models. Those improvements have just been implemented in its latest CV parsing release. “Deep Learning has allowed us to break free from the limitations of using human-annotated data in our machine learning pipeline”, says Mihai Rotaru, Head of R&D at Textkernel. “New knowledge is inferred from large amounts of data, and we have already seen this increase the robustness and coverage of our resume parsing models.”
Big data and computing power
While the future of Deep Learning technology looks bright, this has not always been the case. “Twenty years ago, we were already experimenting with this technology, but other techniques were giving better results”, explains Remko Bonnema, Technical Director and co-founder of Textkernel. “With today’s access to big data and the tremendous increase in processing power, research in Deep Learning technology has really taken off. The possibilities are endless.”
Textkernel specialises in multilingual semantic recruitment technology and provides recruiting tools to accelerate the process of matching demand and supply in the job market: multilingual CV parsing (available in 15 languages), job parsing, and semantic search, sourcing and matching software. The company was founded in 2001 as a private commercial R&D spin-off of research in natural language processing and machine learning at the universities of Tilburg, Antwerp and Amsterdam. With thousands of customers worldwide, Textkernel now operates internationally as one of the market leaders in its segment.