How to make AI work for you as an organization
“Companies are approaching the AI transformation with incomplete information.”
That is what Ethan Mollick (known for the book Co-Intelligence) writes in an insightful post on his website. According to him, there is a lot of optimism about AI in the workplace, employees are experiencing performance gains, and smart AI agents are appearing that can work independently in various sectors. Yet, remarkably, business results are lagging. Why? Because individual AI performance does not automatically lead to better teams, processes, or organizations. Something else is required: a fundamental transformation of how we work.
AI does work, but not yet broadly enough
Let’s look at the numbers:
- American employees report that AI triples their productivity on the tasks it assists with: work that used to take 90 minutes now takes only 30.
- The same study of American employees showed that 40% of workers were using AI in the workplace in April 2025, compared to just 30% a few months earlier.
- Danish knowledge workers say that AI halves the working time for 41% of their tasks.
Yet companies are seeing only limited returns. No significant decrease in working hours. No increase in wages. No revolutionary leaps in efficiency. This may seem contradictory, but it is not. According to Mollick, the reason is simple: AI works at an individual level, but companies have not yet adapted their organizations accordingly.
Why AI requires a different way of organizing
AI is not off-the-shelf software that you simply ‘roll out’. It changes the nature of work. It does not replace jobs, but it does replace tasks. For legal professionals, think of legal research, document analysis, and eventually even client communication. According to Mollick, however, the organization must be structured to accommodate such a change. The key lies in three areas: leadership, employees, and the lab. Together, they form the flywheel for a successful AI transformation within an organization.
Leadership
AI begins as a leadership question. It is, of course, positive that CEOs are currently investing a lot of time in AI.
However, employees want more than that; they want to know:
- What will my work look like in six months?
- Will AI gains and employment be distributed fairly?
- What am I allowed to experiment with, and what is off-limits?
Without such a clear vision, AI adoption remains non-committal. Especially in legal organizations, where caution is the standard, it is crucial that leaders provide space for controlled innovation.
This means:
- Not just warning about AI risks, but also designating zones within a company where experimentation is permitted. Examples of companies where this is already being implemented extensively are Shopify and Duolingo.
- Developing AI skills with a focus on practice and application, not just ethics and regulation. This involves skills such as formulating effective prompts, evaluating AI output, and effectively integrating AI into existing work processes.
- Building trust: employees must feel safe sharing what they are doing with AI internally and perhaps externally, without fear of dismissal or reputational damage. As an organization, you must explicitly state: AI use is allowed, and experimentation is encouraged. Have managers use AI themselves and openly share what works and what doesn’t.
The employee
Real innovation happens on the shop floor. It is precisely experienced employees who can identify where AI is useful and where it is not.
But that only works if they:
- Feel the freedom to use AI.
- Can share their findings with colleagues.
- Know that AI proficiency is valued.
What do we see in practice? Only 20% of employees officially use their employer’s AI tools, while more than 40% use AI secretly. They fear their performance will be downplayed (“that was AI”) or that it will lead to budget cuts.
Leaders must therefore ensure that AI use:
- Is legitimate: provide permission and guidelines.
- Feels safe: no sanctions, but appreciation.
- Is incentivized: reward employees who use AI smartly.
The lab
In addition to spontaneous innovation from employees, there is also a need for a more structured approach: an internal AI lab.
This doesn’t have to be an R&D department, but it should be a team that:
- Builds rapid prototypes with AI (such as legal workflows).
- Develops benchmarks: what really works well in our context?
- Experiments with new ways of working (such as human-agent teams).
- Creates demos that prompt both thought and action.
- Is given the space to set up mini-labs, for example within a single area of law.
Furthermore, it is important not to let these labs be run solely by the IT department. Combine AI enthusiasts with legal professionals, policymakers, and process experts.
Five actions you can start with today, according to Mollick
- Assemble an AI task force: combine legal knowledge, technology, and change management. Start small, learn fast.
- Make AI usage visible and discussable: collect and share lessons and mistakes, especially when results are not yet perfect.
- Experiment with different tools: have AI draft documents, and have legal professionals edit them. Analyze where it breaks down and where it works.
- Reward AI innovation: recognize employees who find new applications. Provide time, space, and recognition.
- Redesign tasks, not just tools: ask yourself why you actually do this work this way, and what changes when AI takes it over.
Conclusion
AI is not just changing the tools, but the playing field on which we work. The biggest challenge is not technological, but organizational: how do you redesign work in a world where knowledge is available on demand? Those who learn and experiment now are building an advantage. Precisely because no one knows exactly how to do it yet.