{"id":1967,"date":"2023-05-25T10:22:36","date_gmt":"2023-05-25T09:22:36","guid":{"rendered":"https:\/\/www.centrepeople.com\/jp\/article\/?p=1967"},"modified":"2025-07-03T23:25:20","modified_gmt":"2025-07-03T22:25:20","slug":"legal-article-202305","status":"publish","type":"post","link":"https:\/\/www.centrepeople.com\/jp\/article\/legal-article-202305\/","title":{"rendered":"May 2023 &#8211; AI in the workplace: mind the regulatory gap?"},"content":{"rendered":"\n<p><strong>The world of work looks set to be revolutionised by AI technology, with recent research suggesting that up to 300 million full time jobs globally could become automated. Yet strong voices of caution are sounding about the pace of change. Are legislatures stepping up to fill the regulatory gap? And what are the considerations for employers looking to step in and codify employees\u2019 use of new technology themselves?<\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p>The end of March saw an unfortunate clash of approaches on the question of regulation: on the same day as the&nbsp;UK government published its pro-innovation&nbsp;(read, regulation-lite) White Paper on AI, the&nbsp;Future of Life Institute published an open letter&nbsp;calling for the development of the most powerful AI systems to be paused to allow for the dramatic acceleration of robust AI governance systems.<\/p>\n\n\n\n<p>The UK government\u2019s White Paper is unlikely to satisfy this letter\u2019s request. The approach proposed is to empower existing regulators through the application of a set of overarching principles. No new legislation or statutory duty of enforcement is proposed as yet. This sits in stark contrast with the EU, which proposes the introduction of more stringent regulation.<\/p>\n\n\n\n<p>AI, and specifically generative AI has shot into the forefront of public consciousness since the launch of ChatGPT last November. 
Generative AI is now freely available and could have many beneficial uses in the workplace. However, in the absence of clear rules from legislatures, employers would be wise not to leave the day-to-day use of this technology in their workplace to chance. We consider below the key issues and what could usefully be addressed in an AI policy.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3><strong>Management by algorithm \u2013 the TUC\u2019s concerns<\/strong><\/h3>\n\n\n\n<p>Calls for stricter oversight of such developing technologies in the UK workplace&nbsp;have also recently been sounded by the TUC. The&nbsp;TUC argues&nbsp;that AI-powered technologies are now making \u201chigh risk, life changing\u201d decisions about workers\u2019 lives \u2013 such as decisions relating to performance management and termination. It cautions that, left unchecked, the technology could lead to greater discrimination at work.<\/p>\n\n\n\n<p>The TUC is calling for a right of explanation to ensure that workers can understand how technology is being used to make decisions about them, and the introduction of a statutory duty for employers to consult before new AI is introduced.<\/p>\n\n\n\n<p><strong><span style=\"text-decoration: underline;\">Legal guardrails: existing and potential<\/span><\/strong><\/p>\n\n\n\n<p>A focus of one&nbsp;TUC report&nbsp;was an analysis of the legal implications of AI systems in the post-pandemic workplace, bearing in mind that the use of AI and ADM (automated decision-making) to recruit, monitor, manage, reward and discipline staff had proliferated. The report identified the extent to which existing laws already regulate such use and what the TUC felt were significant deficiencies that need to be addressed.<\/p>\n\n\n\n<p>For example, the common law duty of trust and confidence arguably requires employers to be able to explain their decisions and for those decisions to be rational and made in good faith. 
In terms of statutory rights, protection against unfair dismissal, data protection rights and the prohibition of discrimination under the Equality Act (amongst other things) all have relevance to how this technology is used at work. However, the 2021 report&nbsp;went on to identify 15 \u201cgaps\u201d that would remain if AI systems in the workplace were to be regulated by existing laws alone&nbsp;and made a number of specific recommendations for legislative changes in order to plug these perceived shortcomings. For example, it proposed the introduction of a requirement that employers provide information on any high-risk use of AI and ADM in section 1 employment particulars. But, as we will go on to consider, the approach taken by the government in the White Paper means that any such \u201cplugs\u201d are likely to be far from watertight.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3><strong>The AI White Paper<\/strong><\/h3>\n\n\n\n<p>The government describes the approach it is taking to the regulation of AI in the&nbsp;White Paper published last month&nbsp;as a &#8220;proportionate and pro-innovation regulatory framework&#8221;.<\/p>\n\n\n\n<p>In summary, this approach is openly light touch and cautions against a rush to legislation, which might place undue burdens on businesses. What is being proposed is instead a principles-based strategy, identifying five principles to \u201cguide and inform\u201d the responsible development and use of AI. 
These are:<\/p>\n\n\n\n<ul><li>Safety, security, and robustness<\/li><li>Appropriate transparency and explainability<\/li><li>Fairness<\/li><li>Accountability and governance<\/li><li>Contestability and redress<\/li><\/ul>\n\n\n\n<p>Existing regulators are expected to implement these values through existing laws and regulations, taking an approach that is suitable for their specific sector.<\/p>\n\n\n\n<p>With no clear new rules, employers could be left uncertain as to how the principles-based approach will impact on their proposed use of AI at work.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3><strong>EU Regulation<\/strong><\/h3>\n\n\n\n<p>Proposed regulation of AI at EU level takes a tougher line, with the Artificial Intelligence Act, currently under discussion in the European Parliament,&nbsp;described as the world\u2019s most restrictive regime on the development of AI. This would take a risk-based approach to regulation, with non-compliance subject to potentially significant fines.<\/p>\n\n\n\n<p>In summary, the AI Act proposes a categorisation system which determines the level of risk different AI systems could pose to fundamental rights and health and safety. Restrictions imposed on the technology depend on which of the four risk tiers \u2013 unacceptable, high, limited and minimal \u2013 the technology is placed in.<\/p>\n\n\n\n<p>The list of high-risk uses includes some recruitment and employment use cases. Examples of this technology include CV scanning tools and AI-driven performance management tools. These use cases would then be subject to a range of more detailed compliance requirements. 
These include the need for:<\/p>\n\n\n\n<ul><li>a comprehensive risk management system<\/li><li>relevant, representative and accurate data to train and validate the system<\/li><li>transparency<\/li><li>human oversight<\/li><\/ul>\n\n\n\n<p>Of course, the UK is no longer directly bound by new EU regulation such as this, but UK businesses will not be beyond its reach.<\/p>\n\n\n\n<p>For now, there is clear anxiety over the current level of regulation. Until the AI Act takes effect, perhaps we will see other countries following Italy\u2019s position and temporarily&nbsp;blocking ChatGPT due to privacy concerns.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h3><strong>Time for a ChatGPT policy?<\/strong><\/h3>\n\n\n\n<p>Employers should ensure that they themselves understand how AI is being used in their organisation.<\/p>\n\n\n\n<p>Focussing on generative AI technology, the fact that this is now readily accessible for individual use means that its use in the workplace could easily go unnoticed. Workplace policies regulating the use of technology such as mobile phones, social media or third-party systems are commonplace; extending these to cover when and how programmes such as ChatGPT should be used at work makes sense.<\/p>\n\n\n\n<p>So what are the key risks to address in a generative AI policy that defines acceptable use?<\/p>\n\n\n\n<ul><li><strong>What it\u2019s used for<\/strong>: What is and isn&#8217;t acceptable will depend on the nature of the work and workplace, but clear guidelines would be beneficial. Controlling the use of generative AI is not just a consideration for existing employees but for job applicants too. 
Recognising this risk,&nbsp;Monzo has taken the \u201cpre-emptive and precautionary\u201d measure&nbsp;of warning candidates that applications using external support \u2013 including ChatGPT \u2013 will be disqualified.<\/li><\/ul>\n\n\n\n<ul><li><strong>Deskilling<\/strong>: Even if generative AI can undertake a task previously done by a person, is this desirable from a skills perspective, or is there a risk of staff deskilling?<\/li><\/ul>\n\n\n\n<ul><li><strong>Confidentiality<\/strong>: As generative AI systems are constantly learning, there could be risks in inputting confidential information into an open system, including data protection risks. In part, this risk arises because the inputted data could be stored in the system\u2019s memory and then be potentially accessible to third parties or be accessed by the AI model itself at a later time. This could be a particular risk if ChatGPT were being used for HR matters, for example.<\/li><\/ul>\n\n\n\n<ul><li><strong>Copyright<\/strong>&nbsp;<strong>infringement<\/strong>: Employers should consider the risk that the system might use material that is protected by copyright, which could affect how any AI-generated output can be used. The ownership of content created by AI is a complex issue that&nbsp;<a href=\"https:\/\/technology.lewissilkin.com\/post\/102i6w2\/ai-101-who-owns-the-output-of-generative-ai\" target=\"_blank\" rel=\"noreferrer noopener\">we looked at here<\/a>.<\/li><\/ul>\n\n\n\n<ul><li><strong>Accuracy<\/strong>: Whilst the technology is astonishing, a human filter is still essential. A policy can address checking output for accuracy, bias and suitability for the specific context. This is likely to be particularly important in the employment context, where the human impact of a decision can be high and where those more intangible human and contextual factors (such as ethics and empathy) are often so important. 
It\u2019s also important to remember that generative AI is designed to produce the most plausible output, which is not necessarily the most truthful or accurate.<\/li><\/ul>\n\n\n\n<p>Whilst the positives of this technology can be usefully embraced, a tailored policy will ensure this happens on the employer\u2019s terms. At a time when the regulation of the technology more generally has been described as \u201clittle more than waving a small red flag at an accelerating train\u201d, this could be critical.<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p>If you have any specific questions you would like advice on, then please contact: <a href=\"mailto:Abi.Frederick@lewissilkin.com\">Abi.Frederick@lewissilkin.com<\/a> or <a href=\"mailto:koichiro.nakada@lewissilkin.com\">koichiro.nakada@lewissilkin.com<\/a> of Lewis Silkin LLP.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The world of work looks set to be revolu&hellip;<\/p>\n","protected":false},"author":2,"featured_media":4219,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[14],"tags":[59,1112,1678,1687,85,1159,1679,1688,112,1672,1680,1690,115,1673,1681,1691,146,1674,1682,147,1675,1683,358,1676,1684,57,545,1677,1685],"jetpack_featured_media_url":"https:\/\/www.centrepeople.com\/jp\/article\/wp-content\/uploads\/2025\/01\/202501-legal.jpg","_links":{"self":[{"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/posts\/1967"}],"collection":[{"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/comments?post=1967"}],"version-history":[{"count":11,"href":"https:\/\/w
ww.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/posts\/1967\/revisions"}],"predecessor-version":[{"id":4245,"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/posts\/1967\/revisions\/4245"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/media\/4219"}],"wp:attachment":[{"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/media?parent=1967"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/categories?post=1967"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.centrepeople.com\/jp\/article\/wp-json\/wp\/v2\/tags?post=1967"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}