Madrid Guidelines for Lawyers on the Use of AI Published
“Our steadfast commitment is that innovation must always serve the law.”
The Madrid Bar Association (ICAM) has published a “Guía de Buenas Prácticas para el uso de la Inteligencia Artificial en la Abogacía” (Guide to Good Practices for the Use of Artificial Intelligence in the Legal Profession): a practical guide that embraces innovation without abandoning the core values of the legal profession. The message is clear: “use AI without fear, but with discretion,” keeping confidentiality, professional judgment, and human rights as fixed anchor points. The guide applies the European and Spanish frameworks (including the AI Act and the GDPR) concretely to the daily work of lawyers, from document analysis and contract review to firm governance.
Why This Guide Matters
ICAM explicitly positions the legal profession as a leader of digital change, not a follower. Technology supports, but does not replace, the lawyer: human supervision, critical judgment, and heightened due diligence remain paramount. The guide was drafted by ICAM under the coordination of Mabel Klimt (Managing Partner at law firm Elzaburu and responsible for innovation within ICAM), with input from section chairs and external experts.
The Core in Six Pillars
1) Understand the Technology You Deploy
The guide starts with the basics: the distinction between analytical or predictive AI and generative AI. According to the guide, generative models produce new text or imagery based on statistical patterns but do not understand the law; they can hallucinate and reproduce bias. Systematic human control is therefore a mandatory part of the workflow.
In Practice:
- Use GenAI for first drafts and summaries, but verify sources and legal accuracy.
- Document your prompts, versions, and corrections so that you can remain accountable.
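The guide leaves the form of this documentation open. Purely as an illustration, under the assumption that the firm keeps a simple structured log per matter (all field names and the example entry below are hypothetical), such a record could look roughly like this:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: a minimal record of one AI-assisted drafting step,
# so that prompts, model versions, and human corrections stay traceable.
@dataclass
class PromptLogEntry:
    matter_id: str        # internal case or matter reference
    tool: str             # the generative tool that was used
    model_version: str    # exact version, since behaviour changes between versions
    prompt: str           # the instruction given to the model
    output_summary: str   # short description of what the model returned
    corrections: str      # what the supervising lawyer changed and why
    reviewed_by: str      # the lawyer who validated the output
    review_date: date = field(default_factory=date.today)

# Hypothetical example: a first draft that was later corrected by hand.
entry = PromptLogEntry(
    matter_id="2024-0173",
    tool="generic-LLM-assistant",
    model_version="v2.1",
    prompt="Summarise the termination clauses of the attached lease.",
    output_summary="Three-paragraph summary of clauses 12-14.",
    corrections="Corrected the notice period; the model misread clause 13.",
    reviewed_by="M. Jansen",
)
print(entry)
```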
2) Choose Deliberately: Needs Analysis and Tool Selection
Conduct a needs analysis before adopting a tool and test candidate solutions extensively. Establish selection criteria (functionality, security, privacy, interoperability, ROI) and keep verifying them during use. Prefer tools certified against recognized standards (e.g., ISO/IEC 42001 for AI management systems, ISO/IEC 27001 for information security, ISO/IEC 27701 for privacy information management).
In Practice:
- Record what a tool is permitted for, which version you are using, and who is supervising.
- Periodically review whether the tool still meets technical, legal, and ethical requirements.
3) Work in Compliance with AI Regulations: AI Act, GDPR, and More
The guide translates the European framework to the law firm: risk-based deployment, human oversight (AI Act Art. 14), transparency (Art. 13), risk management and fundamental rights impact assessments (Arts. 9 and 27), plus compliance with GDPR principles.
In Practice:
- Supervision: always have AI output validated by a lawyer; do not delegate tasks that require exclusively human judgment (a sketch of a risk-tiered review routine follows this list).
- Transparency: inform clients where appropriate about AI use, limitations, and assumptions; choose explainable models where possible and ensure decision tracking.
- Risk & Impact: conduct prior risk analyses and (where necessary) fundamental rights impact assessments; periodically audit performance and bias.
- Data & Confidentiality: do not enter confidential client data without appropriate safeguards; check processor terms, encryption, and the legal basis under data protection law.
4) Governance, Documentation, and Records
ICAM recommends internal protocols: define permitted tools and purposes, assign responsible parties, regulate risk management, maintain logs, and establish incident and audit procedures.
In Practice:
- Maintain an AI register (tool, version, dataset origin, audit dates, limitations); a minimal sketch of such a register follows this list.
- Link governance to data quality and retention periods; use fixed review checklists per product (memo, pleading, contract).
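The guide does not prescribe a format for the register. Assuming the firm keeps it as a simple structured file, a sketch of one register entry plus an automatic check for overdue audits could look like this (the tool name, fields, and six-month interval are made-up examples, not requirements from the guide):

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative AI register: one entry per approved tool, mirroring the fields
# the guide suggests (tool, version, dataset origin, audit dates, limitations).
AI_REGISTER = [
    {
        "tool": "contract-review-assistant",  # hypothetical tool name
        "version": "3.4",
        "permitted_use": "first-pass clause extraction in due diligence",
        "dataset_origin": "vendor-curated corpus; no client data used for training",
        "limitations": "not validated for non-Spanish contracts",
        "responsible": "innovation committee",
        "last_audit": date(2024, 11, 1),
    },
]

REVIEW_INTERVAL = timedelta(days=180)  # assumed six-monthly re-audit cycle

def audits_due(register: list[dict], today: Optional[date] = None) -> list[str]:
    """List tools whose periodic audit (quality, bias, security) is overdue."""
    today = today or date.today()
    return [e["tool"] for e in register if today - e["last_audit"] > REVIEW_INTERVAL]

print(audits_due(AI_REGISTER))
```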
5) Fundamental Rights and Bias Mitigation
AI is not neutral. The guide calls for bias audits, diversity in design and oversight teams, and continuous monitoring.
In Practice:
- Check training data for representativeness; test extensively and specifically for unintended bias.
- Limit the use of AI for critical decisions; ensure additional human review.
6) Competencies, Culture, and Suppliers
Technological competence is a professional obligation. Build a learning culture with training, internal assessments, and interdisciplinary communities (lawyers, technicians, privacy, ethics). External suppliers fall under the same regime: outsourcing does not mean outsourcing responsibility.
In Practice:
- Establish an annual training plan (AI literacy, prompting, fact-checking, tool security, ethics).
- Include concrete guarantees in supplier contracts: security, updates, audit rights, model documentation, exit agreements.
What Dutch Lawyers Can Learn from This
The main lesson of the Madrid guide for Dutch lawyers is to get organized in advance: do not wait for a Dutch guideline, but establish a firm-wide AI policy now, including a register of permitted tools, clear use cases, mandatory human final review, and periodic audits of quality, bias, and source usage. Make AI literacy part of continuing education, vet suppliers for security, privacy, and explainability, and ensure that confidential data is processed only within closed, well-contracted environments. Recent guidance from the Order of Flemish Bars points in the same direction: AI is permitted, but the lawyer remains ultimately responsible, guards confidentiality, and checks the output; it does not, however, include a mandatory duty of transparency towards clients. Together, Madrid and Flanders provide a practical template that Dutch firms can apply immediately.
Conclusion
The core message is simple and workable: innovate, but do so in a structured way. The ICAM approach shows how to integrate AI safely without losing the heart of the legal profession: confidentiality, independent judgment, and diligent legal protection. This starts with a clear AI policy with roles and mandates, an up-to-date register of tools and data, and fixed review checklists per product (advice, pleading, contract). Add a risk-based workflow (low, medium, high), explicit “human-in-the-loop” steps, and repeatable quality controls, so that every AI contribution remains traceable and explainable.
At the same time, invest in skills: train teams in prompting, source and fact-checking, bias recognition, and privacy hygiene, and set up an internal sandbox to test new tools in a controlled manner. Contract suppliers strictly on security, model documentation, audit rights, and incident obligations; outsourcing does not mean that responsibility disappears.
Finally, look across the border: Flanders confirms the line taken by Madrid and underlines that the freedom to use AI goes hand in hand with a heightened professional duty of care. Those who translate these lessons into policy, processes, and training now will benefit from AI, keep control over quality, and, even though there is no Dutch guideline yet, remain fully within professional ethics.