
Though AI is not technically new, recent technological breakthroughs (in particular, in generative AI) have the potential to cause massive disruptions in the way we work.
Proponents argue that, if deployed correctly and responsibly, AI can reduce inaccuracies, increase efficiency and automate basic decision-making, freeing up time to consider complex strategy and governance issues. Others are more sceptical, pointing out that certain industries, such as the creative and software industries, are likely to see a disproportionate impact on the workforce, with reports already suggesting mass redundancies in these industries as workers are replaced by AI tools.
This article considers some of the key employment and data law issues an employer should consider when using AI (especially generative AI) in the workplace. Note that there are also intellectual property and copyright concerns when using AI in the workplace, though these fall outside the scope of this article.
What are AI and generative AI, and should you allow them in your workplace?
There is no “one size fits all” approach to addressing AI opportunities (or issues) within the workplace. AI should be used carefully, with consideration given to the specific needs and limitations of your business. However, studies show that many employees, with or without their employer’s knowledge, are already using AI software in their day-to-day work. So, we recommend implementing an AI policy (especially one covering generative AI) sooner rather than later.
Familiarity with AI will differ significantly across an employer’s workforce, and many workplaces will benefit greatly from a policy that clearly sets out the terms commonly used to describe different AI software and technology.
For example, many people associate AI with generative AI: software that can create original content, such as text or media, in response to a user’s prompt (like ChatGPT).
However, AI more generally (as IBM explains) describes technology which enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy. This could include, for example, using algorithms to sort data, recognise patterns and make decisions. Think Netflix recommending you a new series to binge watch or a spellcheck programme telling you when you’ve used a semi-colon incorrectly.
Generative AI applications are growing rapidly in number and are increasingly embedded in existing services and applications (for example, Gemini, Google’s AI assistant, and Microsoft Copilot).
Some large, well-resourced employers may use an AI system under licence from a third-party developer and embed it across their work IT systems. These systems are typically tailored to the business and ensure data input remains internal, rather than being uploaded to the internet or shared externally.
For smaller, less-resourced employers, a policy may set out a list of permitted applications (or, conversely, prohibited applications) that an employee can use. Additionally, employers may wish to specify that generative AI can only be used for certain purposes, such as idea generation, and give guidelines on when it should not be used, such as drafting confidential reports.
It is also important to understand the limitations of generative AI. While impressive, generative AI is far from perfect; false, biased or otherwise inappropriate results are still commonplace. Any policy should remind employees to fact-check and sense-check results before relying on them, and to ensure that generated content is appropriate and fit for purpose.
Use of AI in the employment lifecycle
One of the fastest deployments of AI in the workplace is the automation of HR processes, for example using CV screening tools to filter job applicants, to select employees for redundancy or to allocate work among a pool of workers or employees. Other uses include surveillance and managing poor performers.
However, doing so comes with risk. For example, where AI software has been trained to identify suitable candidates based on an existing workforce which lacks diversity, the AI is likely to perpetuate and deepen those biases by selecting candidates who reflect that workforce. Some job applicants already express concern that they are not being put forward for interviews because of their age; the difficulty will be in proving that this is the case.
Similarly, AI software that makes promotion or pay-related decisions may rely on criteria that cut across protected characteristics (such as attendance rates, which may disproportionately impact women).
Both instances can give rise to discrimination claims. As such, employers should be attuned to the risks before deploying AI in HR functions, and safeguards should be put in place. Likewise, staff using generative AI to create offensive or harmful content may expose the business to harassment-type claims or cause fractures in the workplace.
Lastly, AI surveillance (whether monitoring remote workers or those in the workplace) may interfere with an employee’s right to privacy under Article 8 of the European Convention on Human Rights.
Data Protection
A major concern for many employers is the potential conflict between using AI in the workplace and meeting their obligations under the various UK data laws (the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018).
Where employees input customer data and other materials into an AI system, there are likely to be legal concerns around confidentiality, as well as implications for data protection and data privacy obligations.
As a reminder, employers require a lawful basis for processing personal data (Article 6, UK GDPR), and it must be processed lawfully, fairly and in a transparent manner in relation to the data subject (Article 5(1)(a), UK GDPR). Employers also have an obligation to ensure data is accurate and limited to what is necessary (Article 5(1)(c) and (d), UK GDPR).
Such obligations may appear incongruous with the use of personal data within an AI system. Accordingly, employers should be explicit about the purpose and usage of personal data, and it is likely that employees will require training before being given access to certain AI software. Necessary preparatory steps, such as anonymising or cleaning personal data (eg removing errors), should be baked into policies or protocols.
Where an employer licenses AI software (particularly generative AI), they should be satisfied there are adequate data protections and (if necessary) protocols around international transfers of data.
Further, where employers adopt new AI software, this may trigger a requirement to carry out a data protection impact assessment, which is required where the processing of personal data is “likely to result in a high risk to the rights and freedoms of natural persons” (Article 35(1), (3) and (4), UK GDPR).
Lastly, individuals are protected under the UK GDPR in respect of decisions based solely on automated processing (including profiling) where the decision produces legal effects or similarly significantly affects them (which ICO guidance suggests includes employment opportunities). A relaxation of the existing rules (which provide for a complete prohibition) is expected to come into force soon, with the government announcing that it aims to publish commencement regulations in December 2025. The new rules will allow solely automated decision-making where safeguards are in place to:
- Provide an individual with information about the decision taken in relation to them.
- Enable an individual to make representations and to contest the decision.
- Enable an individual to obtain human intervention about the decision.
Concluding thoughts
Employees’ increasing use of AI technologies raises numerous legal and ethical issues for employers. However, AI can bring significant value to a business’s processes and results. Employers would be wise not to put their heads in the sand regarding its use in the workplace.
This is our first article in our AI in the workplace series, so keep your eyes peeled for our next entry.
If you have any questions about the use of AI in the workplace, or how we can assist you in developing an AI policy, reach out to the Employment Law Team or your usual Brecher contact.
This update is for general purpose and guidance only and does not constitute legal advice. Specific legal advice should be taken before acting on any of the topics covered. No part of this update may be used, reproduced, stored or transmitted in any form, or by any means without the prior permission of Brecher LLP.