
Didn’t we leave the EU? The EU AI Act

Author:

Tim Russell

Modern Workspace

•  Sept 30, 2024

The internet transcends geographical borders. Although the UK has exited the EU, digital boundaries and global information access blur the lines when we try to define electronic communications by physical country borders.

The EU AI Act was adopted by the European Parliament on March 13th, 2024, following its initial proposal by the European Commission in April 2021. With the law now in effect, it's crucial to understand its implications for you and your company, especially considering the UK's departure from the EU.

This article will outline specific scenarios and offer my insights on how to approach the EU AI Act. While my perspective is no substitute for legal advice, I hope it provides a clearer understanding of its potential impact. 

Why does the EU AI Act apply to the UK?

The key question that arises from the EU AI Act is whether it applies to the UK, which left the EU in January 2020 and completed the transition period at the end of 2020. The answer is not straightforward, as it depends on several factors, such as the nature of the AI system, the location of the provider and user, and the type of data involved.

To understand why, we will first break down the Act to understand its scope of applicability. 

  • Coverage: Providers and Users 
  • Scope: Development, Deployment and Use of AI systems in the European Union 
  • System Risk Classifications: Unacceptable, High, Limited and Minimal 

Taking these specifics together with the Act's broader intent, the EU AI Act will apply to the UK in the following scenarios:

  • The UK-based provider or user of an AI system offers or uses it in the EU, either directly or through intermediaries; 
  • The UK-based provider or user of an AI system affects natural or legal persons in the EU, such as by processing their personal data or influencing their behaviour; 
  • The UK-based provider or user of an AI system falls under the extraterritorial scope of the EU law. 

There are essentially three triggers: the data being handled concerns EU citizens, the system is accessible to individuals in the EU, or your organisation is subject to EU law.
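As a rough mental model (emphatically not legal logic), the three triggers can be sketched as a simple applicability check. The field and function names below are illustrative inventions of mine, not terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative fields only; these are not legal terms from the Act."""
    handles_eu_personal_data: bool  # processes data concerning EU citizens
    offered_or_used_in_eu: bool     # accessible to or used by people in the EU
    subject_to_eu_law: bool         # organisation otherwise falls under EU law

def eu_ai_act_likely_applies(profile: AISystemProfile) -> bool:
    """True if any of the three triggers discussed above is met.
    A rough mental model, not a substitute for legal advice."""
    return (profile.handles_eu_personal_data
            or profile.offered_or_used_in_eu
            or profile.subject_to_eu_law)

# A UK-only internal tool that never touches EU data or users:
internal_tool = AISystemProfile(False, False, False)
print(eu_ai_act_likely_applies(internal_tool))  # False
```

Note that the triggers are joined with "or": meeting any one of them is enough to bring you into scope.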

In these cases, you will have to comply with the EU AI Act and its accompanying measures, such as certification, registration, conformity assessment, governance and supervision. The UK will also be subject to the enforcement and penalties of the EU AI Act, which can range from fines to bans, depending on the severity of the infringement and the risk level of the AI system. 

However, there are also exemptions and exceptions that may limit the applicability of the EU AI Act to companies in the UK. For example, the Act does not cover AI systems used exclusively for military or national security purposes, or AI systems developed or used by public authorities for safeguarding public security. If you operate or develop in any of these areas, a possible exemption may apply. Additionally, the Act allows deviations for certain AI systems that are subject to international agreements or obligations, such as those related to aviation, maritime or nuclear safety. Furthermore, the Act provides for the possibility of mutual recognition and adequacy decisions for third countries that have equivalent or compatible legal frameworks for AI regulation. This last provision is the most interesting in the Act but, as with all legal frameworks, is open to interpretation.

Therefore, the EU AI Act does not apply to the UK in a uniform or absolute manner, but rather in a nuanced and contextual way, depending on the specific circumstances of each case. Companies and individuals in the UK will have to balance the benefits and costs of complying with the EU AI Act, as well as the potential opportunities and challenges of the UK developing its own AI regulatory regime. The EU and the UK will also have to cooperate and coordinate on AI matters, as they share common interests and values, as well as inter-dependent markets and societies.

The UK AI Conference 

The UK AI conference, which took place in London in June 2024, was a major event that brought together experts, policymakers, industry leaders and civil society representatives to discuss the opportunities and challenges of AI in various sectors and domains. The conference also aimed to showcase the UK's achievements and ambitions in AI innovation and regulation, as well as to foster dialogue and cooperation with other countries and regions, especially the EU. I wrote a separate article covering my perspective of this at the time.

One of the main topics of the conference was how the UK's approach to AI regulation aligns with the EU AI Act, which, at the time, had not yet entered into force. The UK government stated that it supports the overall objectives and principles of the EU AI Act, such as ensuring trust, safety, accountability and human oversight of AI systems, as well as protecting the fundamental rights and values of the users and affected parties. The UK government also acknowledged that the EU AI Act has implications for the UK's market access and trade relations with the EU, as well as for the UK's global competitiveness and influence in the AI field.

However, the UK government also expressed some reservations and criticisms about the EU AI Act, such as its complexity, rigidity, scope and enforcement. The UK government argued that the EU AI Act imposes excessive and disproportionate burdens and costs on the AI providers and users, especially the small and medium-sized enterprises (SMEs). This may stifle innovation and hamper the adoption and diffusion of AI technologies. The UK government also claimed that the EU AI Act creates legal uncertainty and fragmentation, as it does not fully harmonise the existing national and sectoral laws and regulations that affect AI systems, nor does it clarify the relationship and compatibility between the EU AI Act and other EU legislation, such as the General Data Protection Regulation (GDPR) or the Digital Services Act (DSA).  

Interestingly, the UK government questioned the effectiveness and legitimacy of the EU AI Act's extraterritorial application and enforcement, as it may create conflicts of jurisdiction and sovereignty with other countries, as well as raise issues of reciprocity and equivalence. 

How the UK and EU differ on AI 

As you can probably see, all is not smooth between the UK and the EU when it comes to the AI Act, and this application of law creates a level of ambiguity that leaves everyone open to their own interpretation.

Not willing to just disagree, the UK government proposed an alternative approach to AI regulation, which it described as more agile, flexible, proportionate and risk-based. The UK government emphasised that its approach is based on the following elements:

  • A clear and consistent definition of AI that covers only the technologies that pose significant risks to the public interest or individual rights. 
  • A streamlined and dynamic categorisation of AI systems that reflects their actual and potential impact and harm rather than their intended use or purpose. 
  • A balanced and tailored set of requirements and obligations for each category of AI systems that considers the specific context and characteristics of the AI system, the provider and the user, and the existing legal and regulatory frameworks that apply to them. 
  • A voluntary and incentive-based certification and registration scheme for AI systems that encourages best practices and standards rather than mandatory and rigid conformity assessments and audits. 
  • A collaborative and transparent governance and supervision mechanism for AI systems that involves multiple stakeholders and levels of authority, as well as international coordination and cooperation. 
  • A pragmatic and proportionate enforcement and sanctioning regime for AI systems that focuses on prevention and remediation rather than punishment and prohibition. 

It’s interesting and refreshing to see that the UK’s approach was more of a trust-based system, with a reward concept for businesses that certify and adhere to standards, as opposed to the more mandated approach proposed by the EU. Although there are benefits to both sides, I believe we should adopt the guiding principles of the EU Act while working along the lines of the UK recommendations; unless, of course, you are working in the EU, in which case you will have little choice. The UK government claimed that its approach to AI regulation is more conducive to fostering innovation and growth, as well as to ensuring trust and accountability in the AI sector, and having worked with startups and SMEs, I must concur.

The UK government stated that it would respect and comply with the EU AI Act's rules and obligations if they are compatible and consistent with the UK's own legal and regulatory framework for AI. The UK government also said that it would cooperate and coordinate with the EU authorities and agencies, such as the European Artificial Intelligence Board (EAIB) and the national supervisory authorities, to ensure a smooth and effective enforcement of the EU AI Act in the UK, although again, I believe there are few precedents or legal frameworks in place to make this happen immediately.

To prove the point, the UK government also noted that there may be cases where the EU AI Act's rules and obligations are not applicable or appropriate for the UK, due to its different constitutional and institutional arrangements, as well as its different policy and regulatory priorities and preferences. In such cases I would expect the UK to negotiate and agree mutual recognition agreements with the EU that would allow the UK to adopt and implement its own approach to AI regulation without compromising the EU's standards and objectives. Whether or not the EU will set sanctions if the UK doesn't agree to act on or enforce the EU AI Act within its territorial borders remains to be seen.

Three possible scenarios  

Let’s refocus on the Act’s impact on you and your business. If you have an internet-facing presence, such as a web page, and this contains or utilises AI capabilities - even a chatbot or a suggested product link generated from collated information - then it would be fair to assume you will need to consider the EU AI Act, especially if it operates or offers its services in the EU market, or if it deals with the personal data of EU citizens.

The EU AI Act applies to all providers and users of AI systems, regardless of their location, as long as the AI system affects people or entities in the EU. Therefore, any web-facing service that uses AI to provide information, recommendations, decisions, or actions to its users or customers, would probably fall under the scope of the EU AI Act, and you would have to comply with its rules and obligations.

Depending on the type and impact of the AI system, the web-facing service might have to undergo a conformity assessment, register its AI system in a database, provide information and documentation to the users and authorities, implement human oversight and intervention mechanisms, ensure the quality and security of the data and algorithms, and monitor and report any incidents or malfunctions of the AI system. Failing to do so could result in fines, sanctions, or bans from the EU market, although, as we can see above, there is no clear guide on how this would be enforced across borders. 

The second scenario I want to look at is a business that employs individuals solely in England, Scotland and Wales, utilises public cloud services from the likes of Microsoft, Amazon or Google, and provides internal AI services to its staff, but doesn't handle EU data or offer an internet-facing AI service. Will you need to conform, given that the public cloud providers' data centres could be in the EU?

It is very complex and certainly something you should pass to your legal teams; however, based on the current text of the EU AI Act, I can provide some tentative and provisional answers from my perspective.

Firstly, it is important to note that the EU AI Act applies to all providers and users of AI systems that are in the EU, or that affect people or entities in the EU. If your business uses public cloud services from Microsoft, Amazon, or Google, and those services involve AI systems that are in the EU, then the business would have to comply with the EU AI Act's rules and obligations, regardless of whether the business itself is in the UK or handles EU data.  

This means that the business would have to ensure that the AI systems used by the public cloud services meet the relevant requirements and obligations under the EU AI Act, such as data quality and security, human oversight and intervention, information and documentation, registration and conformity assessment. If you are simply consuming an AI capability from a public cloud provider, my perspective is that the provider must comply with the EU AI Act. However, if you are creating your own models, and these could potentially run on public cloud infrastructure in the EU, you will have to either prevent that from happening or comply with the EU AI Act.

Secondly, it is also important to note that the EU AI Act distinguishes between different categories of AI systems based on their level of risk and impact. The highest-risk AI systems, such as those used for biometric identification, social scoring, or critical infrastructure, are subject to the strictest and most comprehensive set of rules and obligations. The lower-risk AI systems, such as those used for entertainment, education, or marketing, are subject to lighter and more flexible rules and obligations.  

The EU AI Act does not apply to minimal-risk AI systems, such as those used for spam filters, video games, or online maps, which are deemed to pose no or negligible risk to the fundamental rights and safety of people and entities. Therefore, depending on the type and impact of the internal AI services that your business provides to your staff, you might fall under different categories of the EU AI Act, and would have to comply with different rules and obligations accordingly, or maybe even be classified as minimal risk. 
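As a rough aid to thinking about which tier your system might land in, the categorisation described above can be organised as a simple lookup. The tier names and example systems follow this article; the keyword matching, and the omission of the "unacceptable" tier, are simplifications of my own with no legal meaning:

```python
# Toy sketch of the risk tiers as discussed above. Checked in order,
# highest risk first (Python dicts preserve insertion order).
RISK_TIER_EXAMPLES = {
    "high": ["biometric identification", "social scoring", "critical infrastructure"],
    "limited": ["entertainment", "education", "marketing"],
    "minimal": ["spam filter", "video game", "online maps"],
}

def risk_tier(description: str) -> str:
    """Return the first tier whose example keywords appear in the
    description, or 'unclassified' if nothing matches."""
    desc = description.lower()
    for tier, keywords in RISK_TIER_EXAMPLES.items():
        if any(keyword in desc for keyword in keywords):
            return tier
    return "unclassified"

print(risk_tier("spam filter for email"))  # minimal
```

In a real assessment the tier is determined by the Act's definitions and annexes, not by keyword matching; the point of the sketch is simply that the obligations you face follow directly from the tier your system falls into.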

Finally, it is also important to note that the EU AI Act is not the only legal and regulatory framework that applies to AI systems in the EU. You would also have to consider UK-specific regulations and guidance such as the Data Protection, Privacy and Electronic Communications Regulations, which came into force at the start of 2020. These regulations transpose the GDPR into UK domestic law, with some minor modifications, and ensure that the UK maintains a high level of data protection standards after leaving the EU. Any company developing an AI capability that involves the collection, use, or sharing of personal data of UK individuals would have to comply with the data protection principles, rights, and obligations under these regulations.

Consideration must also be given to the Centre for Data Ethics and Innovation's (CDEI) Framework for Trustworthy AI (published in March ‘21). This framework provides a set of guidance and recommendations for the ethical and responsible development and deployment of AI in the UK, based on the core values of transparency, accountability, fairness, and societal benefit. The framework is not legally binding, but it aims to complement and support the existing laws and regulations that apply to AI, such as the Data Protection Act, the Equality Act, the Human Rights Act, and the Consumer Rights Act.  

Regardless of the scenario, if you are creating a public, cloud-hosted AI capability, even if this is for internal use only, you will, at the very least, have to demonstrate that your AI is trustworthy, ethical, and beneficial to society, and that your data processing and storage meet or exceed the standards set by the UK government.

Summary

In summary, if you are developing an AI system in the UK, you should ask yourself the following questions: 

  • What is the level of risk and impact of your AI system on the fundamental rights and safety of people and entities in the EU and the UK? 
  • How will you ensure that your AI system meets the relevant requirements and obligations under the EU AI Act, the Data Protection, Privacy and Electronic Communications Regulations, and the CDEI's Framework for Trustworthy AI? 
  • How will you benefit from adopting a trust-based and reward-oriented approach to AI regulation, as proposed by the UK government? 

These are some of the key questions and considerations that you should bear in mind if you are developing an AI system in the UK, especially if you are using public cloud services or targeting the EU market. However, this is not an exhaustive list, nor a definitive answer, as the regulatory landscape of AI is constantly evolving and complex.  

You should always consult with your legal advisors and experts before making any decisions or taking any actions regarding your AI system. This article is based on my opinion and experience as a Chief Technologist at CDW UK, a leading provider of technology solutions and services in the UK and across the world. I hope you found it useful and informative, and I welcome your feedback and comments. Thank you for reading.  

Note: this is not legal advice. CDW advise you to liaise with your legal counsel for how to approach this specifically for your organisation. 
