May 2023 – AI in the workplace: mind the regulatory gap?
The world of work looks set to be revolutionised by AI technology, with recent research suggesting that up to 300 million full-time jobs globally could become automated. Yet strong voices of caution are sounding about the pace of change. Are legislatures stepping up to fill the regulatory gap? And what are the considerations for employers looking to step in and codify employees’ use of new technology themselves?
The end of March saw an unfortunate clash of approaches on the question of regulation: on the same day that the UK government published its pro-innovation (read, regulation-lite) White Paper on AI, the Future of Life Institute released an open letter calling for the development of the most powerful AI systems to be paused to allow for the dramatic acceleration of robust AI governance systems.
The UK government’s White Paper is unlikely to satisfy this letter’s request. The approach proposed is to empower existing regulators through the application of a set of overarching principles. No new legislation or statutory duty of enforcement is proposed as yet. This sits in stark contrast with the EU, which proposes the introduction of more stringent regulation.
AI, and specifically generative AI, has shot to the forefront of public consciousness since the launch of ChatGPT last November. Generative AI is now freely available to use and could have many beneficial applications in the workplace. However, in the absence of clear rules from legislatures, employers would be wise not to leave the day-to-day use of this technology in their workplace to chance. We consider below the key issues and what could usefully be addressed in an AI policy.
Management by algorithm – the TUC’s concerns
Calls for stricter oversight of such developing technologies in the UK workplace have also recently been sounded by the TUC. The TUC argues that AI-powered technologies are now making “high risk, life changing” decisions about workers’ lives – such as decisions relating to performance management and termination. It cautions that, left unchecked, the technology could lead to greater discrimination at work.
The TUC is calling for a right to explanation, so that workers can understand how technology is being used to make decisions about them, and for the introduction of a statutory duty for employers to consult before new AI is introduced.
Legal guardrails: existing and potential
A focus of one TUC report was an analysis of the legal implications of AI systems in the post-pandemic workplace, bearing in mind that the use of AI and ADM (automated decision-making) to recruit, monitor, manage, reward and discipline staff has proliferated. The report identified the extent to which existing laws already regulate the use of such technology, together with what the TUC felt were significant deficiencies that need to be filled.
For example, the common law duty of trust and confidence arguably requires employers to be able to explain their decisions and for those decisions to be rational and made in good faith. In terms of statutory rights, protection against unfair dismissal, data protection rights and the prohibition of discrimination under the Equality Act (amongst other things) are all relevant to how this technology is used at work. However, the 2021 report went on to identify 15 “gaps” that arise if AI systems in the workplace are to be regulated by existing laws alone, and made a number of specific recommendations for legislative change in order to plug these perceived shortcomings. For example, it proposed the introduction of a requirement that employers provide information on any high-risk use of AI and ADM in section 1 employment particulars. But, as we go on to consider, the approach taken by the government in the White Paper means that any such “plugs” are likely to be far from watertight.
The AI White Paper
The government describes the approach it is taking to the regulation of AI in the White Paper published last month as a “proportionate and pro-innovation regulatory framework”.
In summary, this approach is openly light touch and cautions against a rush to legislation that might place undue burdens on businesses. What is proposed instead is a principles-based strategy, identifying five principles to “guide and inform” the responsible development and use of AI. These are:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Existing regulators are expected to implement these principles through existing laws and regulations, taking an approach suited to their specific sector.
In the absence of clear new rules, employers may be left uncertain as to how the principles-based approach will affect their proposed use of AI at work.
EU Regulation
The EU proposes a tougher line on the regulation of AI, with the Artificial Intelligence Act, currently under discussion in the European Parliament, described as the world’s most restrictive regime for the development of AI. It would take a risk-based approach to regulation, with non-compliance subject to potentially significant fines.
In summary, the AI Act proposes a categorisation system which determines the level of risk different AI systems could pose to fundamental rights and health and safety. The restrictions imposed on the technology depend on which of the four risk tiers – unacceptable, high, limited and minimal – the technology falls into.
The list of high-risk uses includes some recruitment and employment use cases – for example, CV-scanning tools or AI-driven performance management tools. These use cases would be subject to a range of more detailed compliance requirements, including the need for:
- a comprehensive risk management system
- relevant, representative and accurate data to train and validate the system
- transparency
- human oversight
Of course, the UK is no longer directly bound by new EU legislation such as this, but UK businesses will not be beyond its reach.
For now, there is clear anxiety over the current level of regulation. Until the AI Act takes effect, perhaps we will see other countries following Italy’s lead and temporarily blocking ChatGPT over privacy concerns.
Time for a ChatGPT policy?
Employers should ensure that they themselves understand how AI is being used in their organisation.
Focussing on generative AI technology, the fact that it is now readily accessible for individual use means that its use in the workplace could easily go unnoticed. Workplace policies regulating the use of technology such as mobile phones, social media or third-party systems are commonplace; extending these to cover when and how programs such as ChatGPT should be used at work makes sense.
In that case, what are the key risks to address in a generative AI policy that defines acceptable use?
- What it’s used for: What is and isn’t acceptable will depend on the nature of the work and workplace, but clear guidelines would be beneficial. Controlling the use of generative AI is not just a consideration for existing employees but for job applicants too. Recognising this risk, Monzo has taken the “pre-emptive and precautionary” measure of warning candidates that applications using external support – including ChatGPT – will be disqualified.
- Deskilling: Even if generative AI can undertake a task previously done by a person, is this desirable from a skills perspective, or is there a risk that staff become deskilled?
- Confidentiality: As generative AI systems are constantly learning, there are risks in inputting confidential information into an open system, including data protection risks. In part, this is because inputted data could be stored in the system’s memory and then potentially be accessible to third parties or drawn on by the AI model at a later time. This could be a particular risk if ChatGPT were used for HR matters, for example.
- Copyright infringement: Employers should consider the risk that the system might use material that is protected by copyright, which could affect how any AI-generated output can be used. The ownership of content created by AI is a complex issue that we looked at here.
- Accuracy: Whilst the technology is astonishing, a human filter is still essential, particularly in an employment context, where the human impact of a decision can be high and where more intangible human and contextual factors (such as ethics and empathy) are often so important. A policy can address checking output for accuracy, bias and suitability for the specific context. It’s also important to remember that generative AI is designed to produce the most plausible output, which is not necessarily the same as the most truthful or accurate one.
Whilst the positives of this technology can usefully be embraced, a tailored policy will ensure that this happens on the employer’s terms. At a time when the regulation of the technology more generally has been described as “little more than waving a small red flag at an accelerating train”, this could be critical.
If you have any specific questions you would like advice on, then please contact: Abi.Frederick@lewissilkin.com or koichiro.nakada@lewissilkin.com of Lewis Silkin LLP.