Artificial intelligence is dominating boardroom agendas in every sector. Rarely has a transformation been so regularly referred to as a revolution.

The truth is that AI has been with us for years in the shape of machine learning. Automated processes surround us, from calculating insurance premiums to deciding whether we qualify for a mortgage. The key difference now is the generation of content that looks as though a human created it.

The opportunity presented by AI for business is only limited by your imagination. But, of course, all new technology brings with it legal risk.

Using data protection experience to approach AI risk

In February 2024, Addleshaw Goddard hosted almost 100 data protection experts from across the country, where attendees universally acknowledged that AI was the biggest challenge on their desks, with many already engaged on AI projects in their businesses.

It feels like the next GDPR for data protection lawyers and getting up to speed on the regulatory risk of AI is critical.

It’s no surprise that data protection advisors are in the frame to support their businesses or clients, given their experience in managing risk and the tools the privacy discipline has developed to drive cultural change.

Our gathering recognised that data protection advisors have a big part to play in the coming years in deploying effective AI governance across their businesses, especially where AI involves the use of personal data. Key risk areas we are already seeing include employment screening of CVs, workplace monitoring techniques such as scanning emails for risk flags, and auto-generated customer service communications.

Given the storm is already here, and with legislation in the UK deferred by the current government (while the EU has already finalised a complex AI regime), businesses need to get to grips with the governance of AI now.

However, a lot of law already regulates AI. Data protection laws in the UK and Europe have long regulated automated decision-making that uses personal data and has significant effects on people. The framework in data protection law, which data advisors are well versed in, offers a foundation for AI governance that can be built upon.

Top 5 topics to talk to your board about

Key areas to consider include:

1. Setting up an AI governance or working group to brainstorm and identify AI opportunities and risks for the business.

2. Embedding AI concepts into existing project sign-off processes, including privacy impact assessment processes where personal data is in play. This includes considering specific risk areas that would need sign-off, such as dealing with unconscious bias, understanding intellectual property issues, and how your company data is used.

3. Ensuring security protocols are updated to deal with AI-specific security risks, which will be very different from those of normal IT projects such as software outsourcing. Identifying how the data sets used to run AI are protected will be key.

4. Making sure your business's standard contracts are reviewed and managed to deal with AI risk where you engage a supplier to develop AI solutions. This will include restrictions on suppliers' use of your own data, commitments on development, ongoing support (such as help in managing issues that arise with data subjects), and audit rights.

5. Factoring in legal risks around the use of personal data, including identifying a lawful basis to build your AI tool where it uses personal data and managing the rights of data subjects. Data subjects have the right to understand how AI works and to have a human intervene in decision-making.

Getting ahead of the challenge

If your business is not talking about AI already, your competitors will be. Getting ahead of this challenge will be business critical, and building on your existing data protection governance is an easy win.