General counsel worldwide increasingly positive about the use of AI

“Generative AI is not hype, but a fundamentally transformative technology that is reshaping legal practice.”

AI gaining ground within legal departments

An increasing number of general counsel are taking action on generative AI. According to The General Counsel Report 2025, prepared by FTI Consulting and Relativity, 44% of general counsel surveyed now say their legal teams are actively using AI. In 2023, that figure was just 20%. Adoption is accelerating, and AI is evolving from experimentation into a structural part of the legal workflow.

At the same time, concerns remain regarding reliability, consistency, and legal implications. In practice, it is a balancing act: harnessing the potential on one hand while managing the risks on the other. But to what extent are legal departments embracing generative AI? Where do the greatest opportunities and risks lie? How well prepared are organizations for this technological change? And how prepared do general counsel feel to deploy this technology responsibly?

Broad research among General Counsel worldwide

The research is based on 34 in-depth interviews with general counsel at international companies, supplemented by a quantitative survey of 207 general counsel across twelve countries. Most respondents work at organizations with more than $500 million in revenue.

Cautious confidence in practical applications

General counsel are most comfortable with AI for relatively well-defined tasks, such as document review, e-discovery, and contract analysis. In practice, this includes automatically identifying high-risk clauses in contracts, categorizing emails for due diligence investigations, or summarizing large volumes of legal documents ahead of a case or internal audit. Drafting standard agreements, checking language usage, and extracting contract data (such as duration or notice periods) are also cited as applications for which AI is now widely used.

A prerequisite, however, is that there is always human oversight. “I am completely at ease with AI, as long as there is sufficient supervision by a competent lawyer,” said one of the interviewees.

The report shows that general counsel feel particularly uncomfortable using AI in sensitive contexts such as internal investigations: 47% of respondents rate their comfort level here as a 1 or 2 (on a scale of 1 to 5). Although the report does not give a specific figure for compliance, the interviews show that hesitation prevails here as well. Concerns center on the risk of incorrect interpretations, lack of explainability, and potential reputational damage. In compliance contexts, AI is therefore considered mainly for supporting tasks, such as tracking new regulations or identifying inconsistencies in policy texts. For drawing conclusions or making decisions, however, human control remains essential.

AI usage growing faster than preparation

Although the use of AI is rising, 85% of general counsel still feel minimally or not at all prepared for its risks — a slight improvement from 93% a year earlier. At the same time, 75% of legal departments do not have a technology roadmap. Investments in AI are therefore taking place without clear strategic frameworks.

Nevertheless, AI is on the implementation list of 30% of departments planning to purchase new technology in 2025. This usually involves applications aimed at low-risk, high-volume tasks. Examples include drafting standard contracts, checking grammar and consistency in legal documents, and automatically populating clauses in model agreements.

AI tools are also being used to extract specific data from large volumes of contracts. This includes durations, termination provisions, or obligations, as well as summarizing legal documents into understandable language for internal stakeholders. Additionally, the report mentions the use of AI for monitoring legislative changes, categorizing documents within due diligence processes, and generating first drafts of policy documents or internal memos.

Concerns about accuracy and IP rights

The primary concerns of general counsel regarding AI are clear:

  1. Inaccurate or inconsistent output (hallucinations)
  2. Limited explainability of AI decisions
  3. Security risks
  4. Copyright and IP issues
  5. Inherent bias

Regional differences in acceptance

The regional differences are striking. In Latin America, confidence in AI for e-discovery is highest: 86% of respondents indicate they feel comfortable with it. In Asia-Pacific, that percentage is significantly lower, at just 55%. For applications such as internal investigations, comfort is lowest worldwide: 47% of general counsel rate their comfort level at only a 1 or 2 out of 5.

The report thus demonstrates that attitudes toward AI vary significantly by region and application area. Local legal cultures, compliance frameworks, and risk perceptions play an important role in this. What is seen as efficient innovation in one region may be considered too risky or unreliable elsewhere. For internationally operating legal teams, it is crucial to take this into account when deploying AI.

AI only effective with oversight and policy

AI offers clear benefits for legal departments: speed, scalability, and support for repetitive tasks. However, without oversight, policy, and training, its use can quickly become counterproductive.

Organizations that want to deploy AI seriously must invest in:

  • Internal and external training regarding the use of AI
  • Clear AI governance and risk management
  • Cross-functional collaboration between legal, IT, and compliance
  • Practical testing and controlled rollout phases of AI applications such as LegalMike

Conclusion

Generative AI is making a structural advance within legal departments. Usage is growing rapidly, particularly for defined tasks such as document analysis, e-discovery, and contract review. At the same time, The General Counsel Report 2025 shows that many legal teams feel insufficiently prepared for the risks of this technology. A clear strategic framework is often lacking.

Organizations that wish to deploy AI in a legally responsible manner must, however, invest in policy, oversight, and training. Generative AI only adds value if it is carefully embedded into existing legal processes — with human control as a prerequisite.

For general counsels, there is a clear task here: not only to assess AI for risks but also to actively utilize it as a strategic tool. Not as a replacement for legal expertise, but as a reinforcement of it. The legal departments that dare to invest in this will set the tone for the coming years.


LegalMike in Action

Every two weeks on Friday afternoons, we organize a digital knowledge session. During these sessions, we demonstrate how to optimally utilize LegalMike in your legal practice, from real-world examples to practical tips.

The next knowledge session will take place on April 10.

You can also join directly via Google Meet.