探花精选

Where AI Regulation Stands in the UK, According to a Tech Lawyer

Raymond Sun

When it comes to AI, the UK has been at the forefront of innovation — from the invention of the Turing machine to modern breakthroughs in neural networks.


Reflecting this strong innovation culture is the UK's current regulatory approach, which leaves it to national departments and regulators to manage AI risks within their own sectors, rather than pushing for AI-specific laws.

How is the UK’s strategy going so far? In this article, I’ll explore the key legal developments in the UK AI market, and how they compare with its neighbour across the channel, the European Union (EU).

How is AI regulated in the UK?

As of March 2024, the UK doesn’t have a binding national law specifically regulating the development, deployment and use of AI.

Similar to the US, the UK relies on a fragmented patchwork of existing laws.

But guess what – that’s all part of the government’s plan.

Under its national AI strategy, the UK has expressed ambitions to become a “global AI superpower” within the next decade.

And they have a good shot at it – the UK currently has the third highest number of AI companies in the world (after the US and China), and is the second most likely global destination for AI research.

So when it comes to regulation, the British favour a light, pro-innovation approach. In fact, under its AI regulation white paper released in March 2023, the government said it will not rush to introduce new targeted legislation on AI yet.

This position was reiterated at the AI Safety Summit hosted by the UK in November 2023 (again, another strategic move by the UK to catch the global spotlight).

But the UK government is not completely hands off either. In March 2023, the government established a base set of ethical principles to guide the use and development of AI systems across all sectors.

These ethical principles are:

  • Safety, security and robustness – An AI system should function in a robust, secure and safe way throughout its lifecycle, with regular check-ups and supervision.
  • Appropriate transparency and explainability – Users should know when they are interacting with an AI system, and how it works to make a decision.
  • Fairness – An AI system shouldn’t undermine legal rights, discriminate unfairly or create unfair market outcomes.
  • Accountability and governance – An AI system should be supported by governance measures which ensure effective oversight and clear lines of accountability.
  • Contestability and redress – Individuals should be able to appeal or challenge a harmful or adverse decision made by an AI system.

The idea is that the central government will rely on regulators to interpret and apply the above ethical principles within their respective sectors. This allows each regulator to deal with risks and issues of AI in a way that works for their sector.

To date, here are some recent developments from various regulators in the UK:

  • The competition watchdog (the Competition and Markets Authority) has been consulting with the community to ensure that foundation models are developed in a way that promotes market competition and respects consumer rights.
  • The privacy regulator (the Information Commissioner’s Office) has been reviewing its guidance to see what needs to change for the generative AI era. It is also investigating Snap (the company behind Snapchat) over privacy concerns associated with its ‘My AI’ chatbot.
  • The cybersecurity agency (the National Cyber Security Centre) has published guidelines for secure AI system development to help developers bolster the cybersecurity of their AI systems.
  • The regulator for medicines and medical devices (the Medicines and Healthcare Products Regulatory Agency) announced a regulatory sandbox to promote the development and deployment of AI software and medical devices in a monitored environment.

How does the UK compare to the EU?

The EU’s incoming AI Act will specifically regulate AI systems across all sectors based on their risk level under the banner of one single law.

The EU and UK thus sit at two ends of a spectrum: the former has gone for a centralised ‘one law fits all’ approach, while the latter prefers a lighter, decentralised framework.

While it might be too early to say whose approach is better, the British approach does seem less rigid, allowing businesses to innovate more quickly while also giving the government flexibility to respond to harms as they arise.

Given the UK’s competitive ambitions in the AI space, it’s likely that the UK will maintain its divergence from the EU and keep its pro-innovation stance.

Has the UK made the right move?

In early February 2024, the UK government published its response to the white paper consultation, which reaffirmed its approach and suggests things are working well for now.

While such a system is flexible, the government should not let it lead to a messy patchwork of siloed developments across sectors.

That's what happened in the US, where federal and state agencies did their own thing until it got too messy and President Biden had to step in with an executive order to restore coordination among those agencies.

Another limitation of the UK approach is that it relies on the regulators to do their job effectively. This may be hard to guarantee.

For example, the working group convened by the Intellectual Property Office has reached a deadlock over the use of copyrighted material in AI training. If this deadlock drags on, should the central government step in to take over (and if so, when)?

The UK will need to find the right balance in policymaking responsibilities between the central government and regulators. And this balance will need to adapt and keep pace with advances in AI technology.

But so far, the government seems to be on the right track.

In November 2023, the government established the AI Safety Institute, a central hub tasked with testing the safety of emerging types of AI against the above ethical principles. Under its updated white paper, the government also recently announced:

  • potential new AI safety requirements for developers building “highly capable general purpose AI models”;
  • over £100m in funding for various AI safety-related projects and new research hubs; and
  • exploring whether a legal duty to ‘have regard’ to the common AI principles should be established to require regulators to implement them.

These initiatives are not tied to one sector, and should help improve central coordination and consistency among regulator-led activities.

What role do the courts play in this?

A key characteristic of the British legal system is the ‘common law’ – i.e. a system where courts are bound by past similar decisions (precedents) when determining issues.

Some high-profile AI-related cases have been brought before the UK courts.

Currently, the most prominent one is the Getty Images lawsuit against Stability AI.

It alleges that Stability AI used Getty Images’ stock photos without a licence to train its models. The outcome of this lawsuit will likely have significant implications for intellectual property law around AI.

Could the courts play a role in shaping the UK’s AI regulation?

Realistically, not much. The role of the courts is limited to interpreting existing laws (e.g. clarifying ambiguities in legislative text) within the context of a litigated case.

A court cannot create new laws. This makes the court system a reactive rather than proactive one. Plus there are practical issues of time and costs which deter litigation in the first place.

So it’s likely that central government policy and regulator-led activities will carry the weight of AI regulation in the UK.

Time will tell whether the UK’s ‘pro-innovation’ framework will truly boost innovation and propel the nation into AI superpower status.

Editor’s Note: This article is not intended to provide legal advice or to be a comprehensive guide or reference.
