This article was originally featured in Today's General Counsel.
Once artificial intelligence (AI) starts behaving like a teammate, we have to think about treating it like one. And that begins with asking ourselves a simple question: How do we delegate legal work to AI agents without delegating responsibility?
Legal technology has been evolving toward this moment for years. Early tools focused on search and extraction, helping lawyers find clauses or pull key data from contracts more efficiently. Then generative AI arrived, bringing assistants that summarized documents, suggested redlines, or drafted clauses on command. These tools were undeniably helpful, but they still relied heavily on lawyers to drive the work forward.
What we are now entering is something much more powerful. AI systems can now complete multi-step workflows, planning, executing, and operating across tools to help legal professionals move everyday legal work forward in a meaningful way.
In other words, agentic AI can now function less like a tool and more like a teammate.
Traditional AI assistants are reactive. They respond to a prompt and generate an output—often in a single interaction. AI agents, by contrast, can follow entire workflows. They can take several steps, gather information across systems, and make progress toward an outcome without needing constant direction.
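The contrast can be sketched in a few lines of illustrative Python. This is a conceptual toy, not any real product's API: the plan, steps, and tools are all invented stand-ins for the "plan, execute across tools, gather information" loop described above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str   # what this step contributes to the outcome
    tool: str   # which system the agent reaches into
    query: str  # what it asks that system

def plan(goal: str) -> list[Step]:
    # A fixed plan for illustration; a real agent would derive this from the goal.
    return [
        Step("find_clause", "search", "limitation of liability"),
        Step("check_intake", "intake", "counterparty signatory"),
    ]

def run_agent(goal: str, tools: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Multi-step: plan, then execute each step across tools, carrying results forward."""
    results: dict[str, str] = {}
    for step in plan(goal):
        results[step.name] = tools[step.tool](step.query)
    return results

# Stub tools standing in for real integrations (document search, intake systems).
tools = {
    "search": lambda q: f"found clause matching '{q}'",
    "intake": lambda q: f"requested missing field: {q}",
}

outcome = run_agent("review MSA", tools)
print(outcome["find_clause"])  # prints: found clause matching 'limitation of liability'
```

A reactive assistant would be the single call inside the loop; the agent is the loop itself, which is why supervision has to cover the whole workflow, not just one answer.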
In practice, that means legal AI can:
- Review contracts against standardized negotiation playbooks
- Gather information across systems and flag missing details during intake
- Carry multi-step workflows toward an outcome without constant direction
These capabilities allow in-house legal teams to move much faster on the routine but necessary work that consumes a large portion of their day.
The productivity gains are already visible. According to the 2026 State of AI for In-House Legal, 79% of legal professionals report that AI tools save them time by handling tedious tasks—allowing lawyers to focus on work that was previously too time-consuming or too expensive to be practical.
Yet despite these time savings, only 21% of legal professionals report using AI daily as teams continue to determine how to adopt it responsibly.
Delegation is nothing new in the legal profession. Senior attorneys delegate to junior lawyers or business analysts. Legal teams rely on outside counsel or consultants. But the principle has always been the same: The lawyer is responsible for the outcome.
AI delegation must follow the same logic. Lean on the technology to perform parts of the work, but attorneys must supervise, verify, and ultimately stand behind the results.
In my work and with my team, I emphasize three foundational questions that every legal team should ask before introducing AI into their workflows: What exactly is being delegated? Who verifies the output? And who remains accountable for the result?
Lawyers must remain actively involved in evaluating the outputs, and courts are already reinforcing this expectation. “AI drafted it” is not a defense, and recent cases make that clear. In Mata v. Avianca, attorneys submitted fabricated citations generated by ChatGPT and were sanctioned. In Moffatt v. Air Canada, a chatbot provided incorrect fare information. The airline argued the model made the mistake, but the court disagreed.
If you deploy AI, you own the outcome. Accountability does not shift to the machine.
As AI becomes more embedded in legal workflows, the role of lawyers is evolving. Legal teams today need what I refer to as “legal architects.” These are professionals who can think not only like attorneys but also like system designers.
Legal architects understand both the law and the workflow. They think about which tasks are well-defined enough to delegate, where human judgment is irreplaceable, and how to scale consistency without scaling risk.
They also recognize that not every task belongs to AI. High-value negotiations, sensitive matters, and issues involving privileged or personal data must remain human-led. They establish clear boundaries and build guardrails into each step. Without them, AI can amplify risk just as easily as it improves efficiency.
Legal leaders should pay close attention to how these systems handle data and security. Before deploying any AI system in your legal workflow, make sure your team does the following:
- Understands where data is stored, how it is secured, and who can access it
- Keeps privileged and sensitive personal data in human-led workflows
- Verifies outputs before they reach decision-makers
Without these safeguards in place, unverified outputs can reach you unchecked, and you become the risk bearer.
Agents can dramatically expand the capacity of legal teams, but only when properly supervised.
Consider contract review. In-house lawyers report spending an average of three hours manually reviewing a master service agreement (MSA), making it challenging to keep pace with the volume of agreements moving through modern businesses.
AI agents can help. By turning negotiation guidelines into structured playbooks, applying those standards consistently across contracts, and automatically gathering missing information during intake, these systems allow lawyers to focus their attention where it matters most.
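One way to picture "guidelines turned into a structured playbook" is as a set of machine-checkable rules applied to every contract. The sketch below is hypothetical: the clause names, required positions, and fallback actions are invented examples, not a real playbook schema.

```python
# Illustrative playbook: each entry names a clause, the standard position
# the team requires, and the fallback action when a contract deviates.
PLAYBOOK = {
    "limitation_of_liability": {"required_phrase": "capped at fees paid",
                                "fallback": "escalate to counsel"},
    "governing_law": {"required_phrase": "State of Delaware",
                      "fallback": "redline to standard"},
}

def review(contract_clauses: dict[str, str]) -> list[dict]:
    """Apply the playbook to one contract: flag deviations and missing clauses."""
    findings = []
    for clause, rule in PLAYBOOK.items():
        text = contract_clauses.get(clause)
        if text is None:
            # Missing clause: something an agent could chase down during intake.
            findings.append({"clause": clause, "issue": "missing",
                             "action": "gather during intake"})
        elif rule["required_phrase"].lower() not in text.lower():
            # Present but off-standard: apply the playbook's fallback.
            findings.append({"clause": clause, "issue": "deviates from standard",
                             "action": rule["fallback"]})
    return findings

contract = {
    "limitation_of_liability": "Liability is capped at fees paid in the prior 12 months.",
}
for finding in review(contract):
    print(finding["clause"], "->", finding["action"])
# prints: governing_law -> gather during intake
```

Because the same rules run against every agreement, the standard is applied consistently at volume, while the lawyer's judgment is reserved for the flagged deviations rather than the full read-through.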
So now the question becomes: Would you rather perfect one agreement every three hours or invest in building a workflow that helps review hundreds consistently and accurately?
Let’s be clear. This is not about removing lawyers from the process. It is about enabling them to spend more time on the work that requires real judgment: negotiations, strategic advice, and complex decision-making.
AI systems should support legal judgment, not replace it. Lawyers still own the outcome. The teams that learn how to manage AI as a teammate—not just a tool—will shape the future of in-house law.