EU AI Act: What does it mean for UK business post-Brexit?
Lawmakers in the European Parliament have stamped their approval on the landmark EU Artificial Intelligence (AI) Act today, as the bloc races ahead of the UK in regulating the rapidly developing technology.
The EU AI Act, which takes a risk-based approach to controlling AI systems, gained an overwhelming majority, with 523 votes in favour, 46 against and 49 abstentions.
Following a second vote, seen as a formality, the Act is expected to come into force in late May.
Companies are expected to comply within a year; otherwise, they could face fines of up to €35m (£30m) or seven per cent of global turnover, whichever is higher, for any breaches.
Here is everything you need to know:
What does the EU AI Act entail?
The EU AI Act sets out to regulate various AI applications based on risk levels. Bans on certain use cases, such as “emotion recognition” systems in workplaces, carry the largest fines.
Guillaume Couneson, TMT partner at Linklaters, explained: “The EU legislators have proposed a risk-based framework for AI regulation, where each level of AI risk corresponds to distinct legal requirements with varying grace periods prior to enforcement.
“AI systems classified as ‘high-risk’ will get a 24-month window, whereas certain embedded ‘high-risk’ AI systems, for example those used as a safety component of a product, will get up to a three-year timeline for adherence.”
Some experts are concerned about the way the rules are written, warning they are not clear enough, especially in how they relate to other digital regulations.
When does it come into force?
While translations are still being finalised across the legislation’s 24 language versions, it is expected that the Act will be officially published in late May or June, with enforcement commencing later this year and continuing over the next few years.
The AI Act's compliance deadlines are staggered: prohibited uses by the end of this year, general-purpose AI provisions by summer 2025, and high-risk AI rules by summer 2026.
“Companies need to start preparing as soon as possible to ensure they do not fall foul of the new rules,” said Marcus Evans, partner and European head of data privacy at Norton Rose Fulbright.
How will it affect UK plc?
The EU AI Act is expected to have a significant impact on UK businesses: like their counterparts in the US and Asia, UK firms wishing to do business in the bloc will need to comply.
It will bite any UK business selling AI systems on the European market or putting them into service in the bloc.
It is “crucial” for businesses to establish and maintain robust AI governance programmes to ensure compliance, according to Evans.
Enza Iannopollo, principal analyst at research firm Forrester, said: “Over time, at least some of the work UK firms undertake to be compliant with the EU AI Act will become part of their overall AI governance strategy, regardless of UK specific requirements – or lack thereof.”
Why does the UK not have an AI Act?
As the UK government looks to remain as innovation-friendly as possible, it is taking a ‘light touch’ approach to AI regulation.
In its recent AI white paper, the government said it would rely on existing regulators rather than create a central authority dedicated to AI. Ministers have argued this is a more agile approach to the issue.
Some £10m will go towards upskilling regulators to deal with the potential risks that AI poses. Regulators have until the end of April to publish their current plans in response to AI risks and opportunities.
“The landscape post-UK general election remains uncertain however, with Labour mulling both closer alignment to the EU on regulatory issues as well as stronger corporate accountability in the wake of the Horizon Post Office Scandal,” said John Buyers, head of AI at Osborne Clarke.
Critics have argued that the government’s response lacks urgency and fails to provide concrete regulatory frameworks.
However, Alois Reitbauer, chief technology strategist at software company Dynatrace, has argued that the UK actually stands ahead of the EU.
He said: “There is a danger of the EU falling behind the rest of the world if it only considers AI as a negative force to be contained. It needs to balance new regulatory controls with investments that encourage research into positive use cases for AI that can help solve some of the world’s most pressing challenges.”
Reitbauer added that the UK’s current approach makes it an attractive destination for firms looking to establish a base to invest in AI-based research in Europe.
According to the government, the UK is the world’s third largest AI economy, worth $21bn (£16.4bn), behind only the US and China, and estimated to grow to over $1tn (£780bn) by 2035.