Artificial intelligence with human behavior: a game changer for legal professionals
“Our AI participants gave responses that were indistinguishable from real consumers, and in many cases even richer and more consistent.”
Imagine: an AI model that can put itself in a consumer’s shoes so well that it reacts just as predictably as a real person. Not science fiction, but reality. Researchers from PyMC Labs and Colgate-Palmolive used GPT-4o and Gemini 2.0 to replicate consumer purchase intent with an impressive 90% accuracy.
For jurists and lawyers, this is more than just marketing news. This research shows that AI not only masters language but also manages to simulate convincing behavior and context. What does that mean for legal decision-making, legal simulations, and the use of AI as a ‘legal person’? And how can you use this to achieve better policy, sharper analyses, and more effective legal services?
AI simulating behavior: the study design
Researchers conducted 57 surveys on personal care products, involving more than 9,000 human responses. They then gave the same questions to AI models, which formulated answers based on demographic information, textual prompts, and product descriptions (including images).
The results were astounding:
- AI predicts buying behavior with 90% accuracy
In 90% of cases, the AI models reached the same conclusions as human consumers about whether or not they would buy a product. This was achieved because the AI was not asked directly for a rating, but was first allowed to respond in free text. That response was then compared with sample answers from real people.
- AI responses closely resemble those of humans
The AI gave a wide spread of responses, just as humans do. There was no unnatural tendency to always answer neutrally or averagely, something that often happens with AI when you ask for figures directly.
- AI provides in-depth and meaningful feedback
The AI’s responses were not only longer but also substantively richer than those of human respondents. Where humans often settle for short comments like “looks good,” AI models provided extensive analyses on ease of use, pricing, reliability, and even potential objections.
How is this possible? By not asking for a score directly, but first for a textual response. This was then compared with sample answers corresponding to a five-point scale.
This method yielded not only better predictions but also fine-grained insights into how consumers think and feel about a product. AI proved more sensitive to nuance in matters such as brand perception, usage situation, and price consciousness. In this way, the technology came remarkably close to how real people make decisions.
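To make the indirect elicitation method concrete, here is a minimal sketch of the idea: map a free-text response onto a five-point purchase-intent scale by comparing it with one reference answer per scale point. The reference answers and the word-overlap (Jaccard) similarity below are my own illustrative stand-ins, not the study's actual pipeline, which used more sophisticated matching against real human sample answers.

```python
# Sketch only: score a free-text response against reference answers,
# one per point on a five-point purchase-intent scale.

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two short texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical reference answers (1 = definitely not buy ... 5 = definitely buy).
REFERENCES = {
    1: "I would never buy this product, it does not appeal to me at all",
    2: "I probably would not buy this, it seems overpriced and unnecessary",
    3: "I am not sure, I might consider it if the price were right",
    4: "I would likely buy this, it looks useful and reasonably priced",
    5: "I would definitely buy this, it is exactly what I need",
}

def score_free_text(response: str) -> int:
    """Return the scale point whose reference answer is most similar."""
    return max(REFERENCES, key=lambda k: jaccard(response, REFERENCES[k]))
```

The key design choice mirrors the study: the model is never asked for a number, so the rich free-text answer is preserved and the scale score is derived afterwards.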
Why this is important for lawyers
This study touches on a core question: can AI truly empathize with people? If a model can convincingly mimic consumer behavior, what does that mean for other domains where empathy, behavioral analysis, or intent are central?
Consider, for example:
- Legal simulations: AI models that mimic the behavior of parties in mediation or negotiations.
- Behavioral testing in compliance: synthetic ‘employees’ or ‘customers’ testing policies for clarity and effectiveness.
- Bias analysis: models that calculate scenarios with different ethnicities, ages, or income levels.
- Justice analysis: how different groups respond to regulations, contract terms, or sanctions.
For legal practice, this means that AI works not only as a calculator or text generator but also as a proxy for human reasoning and behavioral expectations. You can use AI as an explorer of behavioral risks, as a virtual test jury, or as a simulated conversation partner in negotiations.
Four legal points of attention for behavior-mimicking AI
- Evidentiary value of synthetic simulations
If AI can behave like a consumer with this level of accuracy, the question arises: can synthetic behavioral data serve as evidence in market analysis, risk assessment, or even in legal proceedings? Especially in cases where group behavior or expectation patterns are relevant, this offers interesting possibilities.
- Liability and behavior
If an AI agent can itself exhibit behavior that simulates human reactions, how do we deal with incorrect, misleading, or discriminatory synthetic behavior? And who is responsible if an AI simulation leads to a wrong decision or an incorrect prediction?
- Regulation of ‘synthetic subjects’
The legal frameworks for using AI in research are still vague. Should consent be sought from real respondents if their behavior is being mimicked? What about the privacy of training data? What does this mean for the GDPR?
- AI as a legal actor
The discussion on AI legal personality receives new impetus from this. If a model can credibly behave as ‘the average consumer,’ it is not far from applications where AI itself is given a decision-making role. Think of compliance systems that independently analyze internal reports or algorithms that assess competition risks.
Will AI become the new legal test group?
Perhaps. AI models can replace or supplement legal focus groups or client panels when testing policies, contract texts, guidelines, or communications.
The benefits are clear:
- Cost savings: no recruitment of respondents, no fees, no waiting time. Legal teams can get started immediately.
- Faster iterations: clauses, letters, or policy rules can be tested for clarity or acceptance within hours.
- Deeper feedback: AI writes more than “it seems okay to me” and specifically indicates where something comes across as legally unclear, confusing, or implausible.
- Higher reliability: AI responds consistently and avoids the socially desirable answering that is a known problem with human panels.
But there are also limitations that legal professionals must take into account:
- The validation took place within one commercial sector. It is uncertain whether AI can also simulate complex legal concepts (such as reasonableness, balancing of interests, or contractual intent) just as convincingly.
- The personas used are based on historical data. As a result, biases, incompleteness, or a lack of diversity may occur in the simulated responses.
- The method requires both legal and technical expertise: interpreting the AI output requires critical assessment, as does the careful selection of reference answers.
- The legal status of AI feedback is unclear. For example, may a lawyer rely on a synthetic client panel? And how much weight does such a test result carry in dispute resolution or compliance considerations?
In short: AI as a legal test group is promising, especially for the pre-validation of policies, contracts, or communication. However, it requires a clear deployment strategy, transparency about the method, and legal safeguards that align with ethics and liability.
What you can do with AI behavior simulation tomorrow
- Test your legal texts for clarity
Have an AI model respond to a contract clause or policy rule as if it were a client. Do you get an understandable answer back? Or does it remain vague or incorrect? This way, you can immediately see where your text is unclear, too complex, or open to multiple interpretations. This works excellently for compliance texts, general terms and conditions, and explanatory notes on legislation.
- Use AI as a ‘preliminary sounding board’ for new policies
Do you want to introduce a new working method or policy line? Have AI respond to your proposal from different personas: an employee with limited digital skills, a customer with little trust, or an entrepreneur under time pressure. This way, you discover where resistance may arise or what needs to be explained even better.
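The two steps above can be sketched in a few lines: compose a persona-specific prompt around a clause or policy text and review what comes back. The personas, the example clause, and the `ask_model` stub below are all illustrative; plug in whatever LLM client your organisation actually uses.

```python
# Sketch of a persona-based clarity check for legal texts.

def build_clarity_prompt(clause: str, persona: str) -> str:
    """Compose a prompt asking the model to react to a clause in character."""
    return (
        f"You are {persona}.\n"
        "Read the following contract clause and explain, in your own words, "
        "what it obliges you to do. Point out anything that is vague, "
        "confusing, or open to more than one interpretation.\n\n"
        f"Clause:\n{clause}"
    )

clause = (
    "The contractor shall use reasonable endeavours to remedy defects "
    "within a reasonable period after notification."
)
personas = [
    "a small-business owner with no legal training",
    "a consumer who distrusts fine print",
]

for persona in personas:
    prompt = build_clarity_prompt(clause, persona)
    # response = ask_model(prompt)  # hypothetical call to your LLM client
    # Review the response: does the persona restate the obligation correctly,
    # or does 'reasonable endeavours' remain unexplained?
```

Reading the responses side by side quickly shows which terms survive translation into plain language and which ones each persona stumbles over.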
- Establish simple rules of conduct for AI applications in your team
Ensure everyone knows where the boundaries lie. For example, agree that AI output is always checked by a lawyer, that you never enter confidential client data, and that synthetic behavior is never used as factual evidence without additional human judgment.
- Start small with training and awareness
Organize a lunch session where you and your colleagues look at how AI clients respond to legal letters or decisions. Or task interns with analyzing AI behavior for relevance and nuance. This way, you quickly develop a feel for what AI can do and where you need to stay sharp.
With these steps, you make the technology tangible and useful without immediately overhauling your entire practice. Just start tomorrow, with common sense and legal precision.
Concluding remarks: from legal analysis to behavioral insight
This study shows that AI not only expresses itself well but also captures how people think and act. For lawyers, that is both a wake-up call and an opportunity: if AI can convincingly simulate human intent, preference, and behavior, we can use it as a test subject, a behavioral mirror, and even as a legal thinking model.
Those who learn how to use this power responsibly will have a powerful assistant at their disposal. Not a replacement, but a new lens through which to view human behavior and legal effects. And a new tool to pre-test policies, contracts, and communication for effect and clarity.