
Jan 2025 – Navigating the job market with AI: tools, benefits and challenges
AI is revolutionising the recruitment lifecycle, enhancing how organisations attract, assess, and select candidates. We have reviewed how it’s being used, the future direction of AI-driven employment decisions and the resulting challenges for employers.
How AI is being used in the recruitment lifecycle
- Advertising: Algorithms can be used from the very start of the recruitment process to target job advertisements. LinkedIn, for example, uses an algorithm to match applicants against the skills and other requirements of a role, based on information available on their profile. AI tools have also been developed to help combat gender-biased language in job advertisements.
- Screening: Algorithms can then be used for initial screening of CVs in order to shortlist candidates. By way of example, Textkernel uses algorithmic reasoning to interpret information from CVs and job descriptions to filter, rank and match candidates based on the similarity of their previous experience to the requirements in the job description (a simplified sketch of this kind of similarity-based matching appears after this list).
- Assessment: At the next stage, AI can be used to score assessments completed by candidates to measure job-relevant traits, skills, and competencies. This can be through chatbot interviews, video interviews and even gaming scenarios.
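Vendors do not publish the detail of how their screening tools work, but the underlying idea of ranking CVs by their similarity to a job description can be illustrated with a short, simplified sketch. The Python example below assumes a plain TF-IDF text representation and the scikit-learn library; the job description and CV snippets are invented for illustration and this is not any vendor's actual method.

```python
# Illustrative only: a toy similarity-based CV screen, not any vendor's actual method.
# Assumes scikit-learn is installed; the job description and CVs are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Payroll administrator with experience of UK payroll, pensions and HMRC reporting."

cvs = {
    "Candidate A": "Five years running monthly UK payroll, pension auto-enrolment and HMRC RTI submissions.",
    "Candidate B": "Retail supervisor experienced in rotas, stock control and customer service.",
    "Candidate C": "Finance assistant covering payroll processing, expenses and statutory reporting.",
}

# Represent the job description and each CV as TF-IDF vectors, then rank the CVs
# by their cosine similarity to the job description.
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform([job_description] + list(cvs.values()))
scores = cosine_similarity(matrix[0], matrix[1:])[0]

for (name, _), score in sorted(zip(cvs.items(), scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

Real screening tools layer far more on top of this (CV parsing, structured skills taxonomies, machine-learned ranking models), but the basic idea of scoring and ranking candidates against the requirements of the role is the same.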
In terms of who is using this technology, research indicates that AI is currently being used mainly by very large organisations. The platforms are often (although not always: Bryq, for example, is based in Greece) built by US technology companies, and US-owned corporations are leading the way in their adoption. Their use is on the rise. In April 2024, SHRM, the US human resources body, surveyed its members and 1 in 4 reported using AI to support HR-related activities, including hiring. A 2024 study by Harvard Business Review Analytic Services similarly found that 52% of respondents had some degree of automation in their recruitment process.
Beyond recruitment
Although, to date, AI has been used most in recruitment, this is beginning to change. Algorithms are being used for internal mobility to identify employees best placed for opportunities which arise. An example is Fuel50, one of the few non-US companies in this market – it is based in Auckland, New Zealand. Fuel50 creates a map of skills across the organisation and automatically identifies and matches individuals to open roles based on their skills, rather than their current role.
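Fuel50 does not publish its matching logic, but the general idea of skills-based internal matching can be sketched with a toy example: represent each employee and each open role as a set of skills, and rank employees by how much of the role's skill requirement they cover. The names and skills below are invented and the scoring rule is a deliberately simple assumption.

```python
# Illustrative only: toy skills-to-role matching, not Fuel50's actual algorithm.
# Employees and the open role are represented as skill sets; all data is invented.
employees = {
    "Asha": {"python", "sql", "stakeholder management"},
    "Ben": {"copywriting", "seo", "analytics"},
    "Chloe": {"sql", "analytics", "dashboarding"},
}

open_role = {"name": "Data analyst", "skills": {"sql", "analytics", "dashboarding", "python"}}

def coverage(candidate_skills: set, required: set) -> float:
    """Fraction of the role's required skills that the candidate already has."""
    return len(candidate_skills & required) / len(required)

# Rank employees by skills coverage of the open role, irrespective of their current job title.
ranked = sorted(employees.items(),
                key=lambda item: coverage(item[1], open_role["skills"]),
                reverse=True)

for name, skills in ranked:
    print(f"{name}: {coverage(skills, open_role['skills']):.0%} of required skills")
```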
Algorithms are also used to allocate work, most notably in the platform economy by companies such as Uber. They can be used in setting pay and assessing performance, including for promotions, redundancy selection or bonuses. Beqom’s talent intelligence is an example of machine learning being used in setting pay, with algorithms used to optimise budgets, objectives and pay equity. Zavvy AI is an example of AI being used to provide data-driven performance management.
Extending the use of AI from the hiring process to career and compensation decisions is likely to result in an increase in legal claims. There will be increased pressure to move towards AI-specific rules which balance the undoubted potential benefits of AI in supporting employment decisions with adequate protection for the subjects of those decisions. This is already happening in the US: New York City’s Local Law 144 requires independent bias audits of automated employment decision tools used for hiring or promotion.
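The audits required by the New York City law centre on comparing selection rates between demographic groups. The sketch below shows a simplified version of that arithmetic using invented numbers; real audits involve far more (defined demographic and intersectional categories, historical data and an independent auditor), so this is an illustration of the calculation rather than a statement of what the law requires in full.

```python
# Illustrative only: a simplified impact-ratio calculation of the kind used in
# bias audits of automated hiring tools. The candidate numbers are invented.
applicants = {          # group: (number screened in by the tool, number assessed)
    "Group A": (48, 120),
    "Group B": (30, 100),
    "Group C": (12, 60),
}

selection_rates = {group: selected / assessed
                   for group, (selected, assessed) in applicants.items()}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    # A ratio well below 1.0 (commonly flagged below 0.8, the "four-fifths rule")
    # suggests the tool may be disadvantaging that group and warrants scrutiny.
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f}")
```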

Benefits and Challenges of AI
Benefits
The potential benefits of AI-driven workplace decisions are obvious. There is some evidence that AI not only results in faster and more efficient decisions, saving employers considerable sums, but that the decisions themselves can be better and less discriminatory. Indeed, human biases are very difficult to overcome, even with targeted training, whereas bias in algorithms can be audited and mitigated.
Moreover, unlike human decisions, AI profiling can be “locally explainable”: the factors and weighting applied in scoring candidates are potentially available, identifying the true basis on which decisions have been made and where unfair discrimination might have crept into the output.
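For a simple linear scoring model, that kind of local explanation can be read off directly: each factor's weight multiplied by the candidate's value for that factor gives its contribution to the overall score. The sketch below uses invented factors, weights and candidate data; real tools rely on more complex models and explanation methods (such as SHAP values), but the principle of surfacing per-factor contributions is the same.

```python
# Illustrative only: a local explanation for one candidate under a toy linear
# scoring model. Factors, weights and candidate values are all invented.
weights = {
    "years_relevant_experience": 0.5,
    "skills_match": 2.0,
    "assessment_score": 1.5,
    "employment_gap_months": -0.2,   # a factor like this is where unfair bias can creep in
}

candidate = {
    "years_relevant_experience": 4,
    "skills_match": 0.8,
    "assessment_score": 0.7,
    "employment_gap_months": 6,
}

# Each factor's contribution is its weight times the candidate's value;
# the overall score is simply the sum of those contributions.
contributions = {factor: weights[factor] * candidate[factor] for factor in weights}
score = sum(contributions.values())

print(f"Overall score: {score:.2f}")
for factor, contribution in sorted(contributions.items(), key=lambda x: -abs(x[1])):
    print(f"  {factor}: {contribution:+.2f}")
```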
Challenges
Harnessing these potential benefits does mean recognising and addressing the challenges.
Use of AI by candidates: It’s widely recognised that applicants are using generative AI tools to help them pull together a CV, write a covering letter, answer written questions and even generate headshots (Headshotpro is an example of the latter). The risk is that this results in generic applications that do not stand out, and that employers fail to get a true picture of a candidate who may be abusing the system. There is also a risk that over-sceptical employers incorrectly accuse applicants of using AI in their applications and screen them out, potentially causing employers to miss out on top talent. That said, non-text-based assessment methods, such as game-based assessments or work sample tests, are much more resistant to faking.
AI safety: Key among the concerns are the risk of bias and discrimination in decisions and the risk of AI usage breaching data privacy laws. Despite the range of jurisdictions in which regulation is being developed, common mechanisms to address these risks have emerged: mandatory auditing and monitoring processes, the ability to challenge decisions, and the requirement to undertake impact assessments. The risk of legal claims will grow as AI use extends further beyond recruitment. AI safety in recruitment has also been the focus of government guidance.
Trust: Attracting and retaining the best people requires the maintenance of trust. Employers using AI to drive employment decisions, whether in recruitment or, particularly, in decisions about compensation and careers, will need to persuade their employees of the advantages, and this requires transparency and consultation. AI laws are increasingly imposing transparency and notification requirements to inform candidates and employees that automated tools are being used to assess and monitor them. Some vendors, such as HireVue and Beamery, have gone a step further and published AI explainability statements detailing how their tools work and the technologies they rely on.
Looking to the future
The integration of AI into the recruitment lifecycle and broader employment decisions presents both significant opportunities and challenges. As the technology continues to evolve and its use in employment decisions expands, the legal landscape is likely to adapt accordingly. The potential for future regulation under the new Labour government is high, particularly given the increasing pressure to ensure that AI-driven decisions are fair, transparent, and accountable. This regulatory evolution will be crucial in balancing the benefits of AI with the protection of individual rights, ensuring that the technology is used responsibly and ethically in the workplace.
If you have any specific questions you would like advice on or if you would like information about what is discussed in this article, then please contact: Abi.Frederick@lewissilkin.com of Lewis Silkin LLP.