Should AI be granted its own legal personality?

“The law is based on people and companies. But what if an AI soon makes independent decisions that have legal consequences?”

According to the UK Law Commission – an independent advisory body to the British legislature – we must take this question seriously. In its new discussion paper AI and the Law, it explores the legal bottlenecks surrounding artificial intelligence. One of the most compelling questions: should certain AI systems eventually be recognized as legal entities, with their own legal personality?

The discussion sounds futuristic, but it is more relevant than you might think. AI is becoming increasingly autonomous and is playing an ever-greater role in decision-making, including in legal contexts. Consider contract analysis, legal research, and the analysis of case files.

Why this discussion now?

The Law Commission identifies three trends that make this question urgent:

  1. AI is becoming agentic and independent – AI systems are increasingly performing tasks without human intervention. Think of Level 4 self-driving vehicles that operate fully autonomously in certain areas, or AI agents that independently plan, book, and pay for an entire trip. Some systems even make decisions based on complex trade-offs and self-developed strategies, as AlphaGo did in its matches against human world champions.
  2. The law already recognizes various types of legal persons – Think of companies, associations, or foundations. These can own property, enter into contracts, and be sued in court. Why shouldn’t an advanced AI system that acts independently also qualify? In theory, it could, but in practice this is not yet the case. Even the most advanced applications, such as self-driving vehicles, still have a human supervisor or the possibility of intervention. AI therefore does not (yet) operate completely independently of human intervention.
  3. Legal gaps in liability are emerging – In some scenarios, no one is truly liable for damage caused by AI. This creates a legal gap. A separate legal personality for AI could address this.

The Law Commission emphasizes that it is still too early to grant AI such a status. However, the idea is becoming increasingly realistic.

What does ‘legal personality’ actually mean?

An entity with legal personality can:

  • Own property,
  • Hold rights and obligations,
  • Enter into contracts,
  • And independently litigate or be sued.

This also means: bearing liability. And that is precisely the core of the debate. Legal personality is a legal construct. It is not reserved solely for humans or companies. Over time, other entities have also been granted legal status. Think of foundations, associations, or even temporary partnerships.

Similarly, nature is increasingly being granted legal personality. In 2017, the Whanganui River in New Zealand received its own legal status. Not because the river is going to sign contracts, but so that it can be better protected, with appointed representatives able to defend its interests in legal proceedings.

Arguments for and against AI with legal personality

The Law Commission clearly outlines the advantages and disadvantages.

Advantages:

  • Liability can be better regulated and channeled.
  • Innovation may be promoted because developers no longer have to bear all the risks.
  • AI systems can be held personally responsible for their behavior.

Objections:

  • Developers could use AI as a legal shield to evade their own liability.
  • How do you give AI actual ‘assets’ or ‘resources’ to make it liable? This requires a completely new legal framework.
  • The moral and ethical question: do we want to treat machines as legal ‘persons’?

When would an AI qualify?

Not every AI system is a candidate for legal personality. A spam filter or an automated assistant that books appointments in your calendar does not need to become a legal person. The discussion focuses primarily on AI agents: systems that autonomously perform complex tasks and make their own decisions.

Criteria being considered include:

  • Degree of autonomy,
  • Capacity for self-learning,
  • And the impact of the decisions.

But where do you draw the line? And who decides that? Ultimately, the legislature will have to make those choices.

What should the legislature do now?

The Law Commission does not propose specific legislation but emphasizes three urgent points of attention for the legislature in England and Wales:

  1. Investigate the possibility of legal personality for AI – Not for every system, but perhaps for the most advanced, autonomous AI agents. The legislature must explore if and how this can take shape, including oversight, identification, and liability rules.
  2. Prevent a liability vacuum – Future legislation must clarify who is responsible within complex AI chains: developers, suppliers, users, or all of them? This prevents victims from falling through the cracks.
  3. Ensure explainability and the right to object – Especially when AI is used within the government or criminal law, it must remain possible to understand, review, and challenge decisions. This requires transparency, but also new legal frameworks.

Conclusion: an old debate with new urgency

The question of whether technology should be granted its own legal personality has been around for some time. In the context of self-driving cars, autonomous robots, and drones, ways have been sought for years to distribute liability fairly and effectively. What is new is that AI can now perform tasks that are much closer to core legal functions: interpreting contracts, making decisions based on complex datasets, or entering into interactions that can be legally binding.

The core of the issue has not changed: how do we prevent a liability vacuum from arising when technology acts independently? What is changing is the speed with which this question is now approaching us. AI systems are developing at a pace that puts pressure on legislators to work out scenarios now, before practice overtakes us.

For the time being, there is always a human “in the loop,” and legal responsibility lies with natural or existing legal persons. But if AI agents soon act truly independently and unpredictably, we must make a choice: do we continue to work within the current model, or do we give certain AI systems their own legal status? That debate is not new, but it has never been so urgent.

LegalMike in Action

Every two weeks on Friday afternoons, we organize a digital knowledge session. During these sessions, we demonstrate how to optimally utilize LegalMike in your legal practice, from real-world examples to practical tips.

The next knowledge session will take place on April 10.
