From isolated experiments to mature, secure, and professional use

“AI is a valuable tool for the lawyer, provided it is deployed with regard for the core values of the legal profession.”

In a short space of time, generative AI has permeated almost every legal practice. Many firms use applications for summarizing files, creating first drafts, or exploring legal questions. This often still happens on an ad hoc basis, using trial accounts and isolated experiments in between other tasks.

With the publication of its recommendations on AI in the legal profession, the Netherlands Bar (NOvA) takes a clear step forward. The message is unambiguous: AI is permitted, but not thoughtlessly. Its use must fit within the core values of the legal profession and within a professionally organized firm.

In this blog, we walk through the most important aspects of the guidelines step by step. We show how they relate to the Flemish approach and what this means for the daily practice of lawyers and legal teams.

Why AI guidelines are now indispensable for the Dutch legal profession

The legal profession operates between competing demands. On one hand, clients press for faster and more efficient work. On the other, the profession imposes high requirements of due care, confidentiality, and independence.

AI intensifies that tension. The technology makes tasks faster and seemingly simpler, but also increases the risks of errors, data breaches, and opaque decision-making.

The NOvA has therefore chosen a clear starting position:

  • AI is a tool, not a replacement for the lawyer
  • Core values remain the guiding principle, even as technology changes
  • The lawyer always remains ultimately responsible for advice and procedural conduct

The new recommendations are therefore not a brake on innovation, but a framework. They clarify under which conditions AI may strengthen the practice.

AI within the core values of the legal profession: technology in service of the profession

The guidelines are not structured around technology, but around the classic core values: expertise, confidentiality, independence, integrity, and partiality. This is an important signal. AI is not seen as a separate IT issue, but as something that directly touches upon the identity of the profession.

In essence, it comes down to this:

  • Expertise: you do not use a tool that you do not understand
  • Confidentiality: you never lose control over client data
  • Independence: AI does not direct your judgment; you direct the AI
  • Integrity: you are honest about the way you work
  • Partiality: you only use AI in a way that strengthens your client’s position, never one that weakens it

For each core value, the recommendations specify what this means for daily choices. This makes the document practically useful both for firm policy and for individual lawyers.

Expertise and AI proficiency: why every lawyer must master the basics

The NOvA is very clear: a lawyer who uses AI must know what they are doing. Basic knowledge of generative AI is “essential for every lawyer”.

That expertise has several layers.

Knowledge of the technology
You do not need to become a programmer, but you must understand:

  • What a language model does
  • That outcomes are probabilistic, not guaranteed to be correct
  • That hallucination and bias are not exceptions, but inherent characteristics

Knowledge of the legal frameworks
The Bar explicitly links AI to the AI Act and to existing regulations on data protection and cybersecurity. This means that as a lawyer, you must know what obligations arise as soon as you process personal data, business-sensitive data, or criminal file information through an AI application.

Knowledge of working safely in practice
The recommendations make expertise concrete:

  • Follow training on generative AI, language models, error detection, and security
  • Keep knowledge up to date through continuing education and internal exchange of best practices
  • Train yourself in formulating clear instructions and critically reading AI output

The underlying message is sharp: a lawyer who uses AI without this basic knowledge acts carelessly. AI proficiency is becoming part of the lawyer’s professional profile.

Verification and ultimate responsibility: the lawyer remains the final check

One of the strictest requirements in the guidelines is the duty of verification. AI may assist you, but the system never takes over the work of the legal professional.

The NOvA formulates this very concretely:

  • Always manually check citations, case law, and facts before using them
  • Preferably use applications with source referencing, so you can easily verify whether each reference is correct and complete

This aligns with what many studies show: AI can perform impressively well, but errors are often difficult to spot if you do not actively check. Especially with longer analyses or professional-sounding clauses, it is tempting to be carried along by the confident tone.

The guidelines counter that temptation with a clear standard. Legally speaking, only what you as a lawyer accept after verification counts. Responsibility does not shift to the system; it remains with the human.
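The manual-check duty lends itself to lightweight tooling. The sketch below flags case-law citations in a draft that do not yet appear in a register of references a lawyer has already verified. The register, the ECLI-only focus, and the function names are illustrative assumptions, not part of the NOvA recommendations; a flagged citation still needs a human check, the tool merely surfaces what has not been checked.

```python
import re

# Hypothetical register of citations a lawyer has already verified.
VERIFIED_CITATIONS = {"ECLI:NL:HR:2023:1234"}

# Simplified ECLI (European Case Law Identifier) pattern:
# country code, court code, year, ordinal number.
ECLI_PATTERN = re.compile(r"ECLI:[A-Z]{2}:[A-Z0-9]+:\d{4}:[A-Z0-9]+")

def unverified_citations(draft: str) -> list[str]:
    """Return citations in an AI-generated draft that are not yet verified."""
    return [c for c in ECLI_PATTERN.findall(draft) if c not in VERIFIED_CITATIONS]

draft = "As held in ECLI:NL:HR:2023:1234 and ECLI:NL:RBAMS:2024:9999, ..."
print(unverified_citations(draft))  # flags the second, unchecked citation
```

A script like this supports the duty of verification; it never replaces it, since the lawyer must still read the flagged sources themselves.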

Confidentiality and data flows: professional secrecy in a new era

The most sensitive part of the recommendations concerns confidentiality and data protection. The core is simple but far-reaching: the lawyer must know and control all data flows.

This has practical consequences. The Bar recommends, among other things:

  • Do not use confidential data in free or public applications
  • Share only strictly necessary information and document your reasoning
  • Do not enter confidential or client data into public models
  • Perform an assessment if you process personal data with AI
  • Ensure that input and output remain within the secure firm environment

In the Flemish guidelines, the emphasis is already on pseudonymization and on avoiding open systems for information subject to professional secrecy. The Dutch recommendations are slightly different. They emphasize documentation, formal assessment, and contractual agreements with suppliers.

For firms, this means that ad hoc experimentation with standalone applications no longer fits within a professional standard. What is needed is policy, an overview of the systems in use, and explicit choices about what may and may not be processed via AI.
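The pseudonymization emphasized in the Flemish guidelines can be sketched in a few lines. The example below, a deliberately minimal Python sketch with invented names, replaces party names with neutral tokens before text leaves the firm and restores them in the AI output afterwards; real pseudonymization must cover far more identifiers (addresses, case numbers, dates) and handle name variants.

```python
import re

def pseudonymize(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace party names with neutral tokens before text leaves the firm.

    Returns the pseudonymized text plus a mapping that stays inside the
    firm environment, so the original names never reach the AI provider.
    """
    mapping = {}
    for i, name in enumerate(names, start=1):
        token = f"[PARTY_{i}]"
        mapping[token] = name
        text = re.sub(re.escape(name), token, text)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original names into the AI output, inside the firm."""
    for token, name in mapping.items():
        text = text.replace(token, name)
    return text

safe, mapping = pseudonymize(
    "Jansen BV claims damages from De Vries.", ["Jansen BV", "De Vries"]
)
print(safe)  # "[PARTY_1] claims damages from [PARTY_2]."
```

The design choice matters: only the tokenized text crosses the firm boundary, while the mapping, the sensitive part, never does.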

Independence and integrity: AI supports, but does not determine

AI systems are designed to be helpful. They supplement, elaborate, and readily confirm the instruction given by the user. That is pleasant in daily use, but risky when it comes to legal advice.

The NOvA therefore warns against applications that tend to confirm the entered instruction, even when it is incorrect. The lawyer must remain alert to this effect and actively test whether the outcome is logical and balanced.

In addition, the Bar requests that firms organize supervision:

  • Establish a firm-wide AI policy
  • Record how monitoring takes place and who is responsible for what
  • Implement a cycle of evaluation and adjustment of AI use

Integrity also has a communicative side. The Bar believes that firms should inform clients about their policy regarding AI. Not by reporting which application was used for every document, but by being clear about the way technology is deployed in the provision of services.

Partiality and bias: why technology is never completely neutral

Partiality means that the lawyer is fully committed to the interests of the client, within the boundaries of the law. AI systems do not know this core value. They are trained on large amounts of text, with all the prejudices and patterns that this entails.

The NOvA therefore explicitly links partiality to the risk of one-sided or biased algorithms.

The message:

  • The lawyer remains responsible for lawful, partial representation of the client’s interests
  • AI must never take over this role, nor imperceptibly shift into the position of decision-maker

In practice, this requires critical questions when using AI:

  • From which perspective is this text written?
  • Are any interests or risks of the client under-exposed?
  • Are there groups or viewpoints that are structurally missing from the examples provided by AI?

In this way, bias awareness becomes part of the professional standard. Not only in legislation and policy, but also in the daily choice of whether to adopt a proposal from a system.

Comparison with Flanders: same direction, different emphases and nuance

The Flemish Bar previously published guidelines on AI use by lawyers. Comparing both documents reveals clear similarities, but also interesting differences.

Similarities include:

  • AI use is permitted, but never mandatory
  • The lawyer must understand the operation and limitations of AI
  • Confidentiality and data protection are central concerns
  • The lawyer remains ultimately responsible for any form of AI output

The Dutch recommendations place different emphases:

  • Stronger link with continuing education and quality testing
  • More emphasis on data flows, documentation, and formal assessments
  • An explicit request to firms to establish a broad AI policy and inform clients about it

In summary: Flanders focuses primarily on the individual lawyer and professional secrecy. The Netherlands explicitly places AI within firm organization, governance, and supervision.

What firms can now concretely do with the NOvA recommendations

The guidelines are not a theoretical piece. They require concrete choices. For an average firm, the agenda is roughly as follows:

  • Develop a clear AI policy
    Record which applications are permitted, for which tasks, and under what conditions. Clarify which data may never be processed via AI.
  • Organize training and internal knowledge sharing
    Ensure that every lawyer has a basic level of AI knowledge. Have colleagues share examples of cases that went well and situations where AI made mistakes.
  • Review work processes
    Determine for which tasks AI can help, such as creating a first draft, summarizing, or structuring. Establish how human control is organized.
  • Map data flows and suppliers
    Create an overview of all systems in which client information appears, including AI functions. Check contracts for data ownership, security, liability, and exit options in case you need to switch suppliers.
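The first agenda item, recording which applications are permitted for which data and under what conditions, can even be encoded so that it is checkable before any data is shared. A hypothetical Python sketch, with invented tool names and data categories:

```python
# Hypothetical firm-wide AI policy: which data categories each approved
# tool may process. Tool names and categories are illustrative only.
AI_POLICY = {
    "internal_assistant": {"public", "internal", "client_confidential"},
    "public_chatbot": {"public"},
}

def is_allowed(tool: str, data_category: str) -> bool:
    """Check a proposed AI use against the firm's policy.

    Unknown tools are denied by default, matching the recommendation
    that only explicitly approved applications may be used.
    """
    return data_category in AI_POLICY.get(tool, set())

print(is_allowed("public_chatbot", "client_confidential"))  # False
```

Encoding the policy this way keeps the written rules and the day-to-day checks from drifting apart, though the policy document itself remains the authoritative source.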

Those who take these steps now will not only better comply with the recommendations but also build a more professional and efficient practice.

Concluding remarks: AI is becoming normal, the core of the profession remains

With the recommendations on AI in the legal profession, the NOvA makes it clear that AI is not a temporary toy, but will become a permanent part of legal work. At the same time, the Bar protects what forms the core of the profession: independent advice, confidentiality, due care, and partial representation of interests.

The challenge for law firms now is to shift from isolated experiments to mature use. This requires policy, training, technical choices, and honest communication, both internally and towards clients.

Those who take that step need not fear AI. It then becomes a powerful assistant, embedded in a solid professional framework. The technology changes, but the standard remains the same: the lawyer who knows what they are doing, who maintains control, and who puts the interests of their client first. Especially in an era with digital colleagues, that is what makes the difference.

 

LegalMike in Action

Every two weeks on Friday afternoons, we organize a digital knowledge session. During these sessions, we demonstrate how to optimally utilize LegalMike in your legal practice, from real-world examples to practical tips.

The next knowledge session will take place on April 10.

You can also join directly via Google Meet.