AI for In-House Legal Teams: A Practical Playbook for Fast, Safe Wins

November 2025
By Axiom Law

Over the past several years, legal professionals have seen generative AI move from boardroom buzzword to business imperative, and in-house legal teams are feeling the pressure to adopt—fast. But here's the challenge: Most lawyers didn't go to law school to become AI experts, and the stakes are too high to experiment recklessly with sensitive client data and mission-critical work product.

In a recent webinar on data and AI in legal practice, our Chief Technology Officer CJ Saretto joined Colin Levy (General Counsel at Malbek and Adjunct Professor at Albany Law School), Adam Ehrenworth (Global Business Intelligence Director at Kenvue), and Brandon Track (Principal Corporate Counsel at Cisco Canada) to discuss how legal teams can navigate AI adoption safely and strategically. What emerged was a practical playbook for achieving fast wins without compromising security, quality, or professional responsibility.

Why AI for In-House Legal? Moving Beyond FOMO to ROI

Let's address the elephant in the room: AI adoption in legal is being driven as much by fear of missing out as by genuine operational need. As CJ noted during our discussion, C-suites have gotten serious about AI. They understand it isn't just something that can drive efficiency in the business; executives know that if they don't ride this wave, they'll be left behind by companies that do.

But FOMO alone won't deliver results. The legal teams seeing real value from AI are the ones asking a more fundamental question: What problem are we actually trying to solve?

Adam put it perfectly when he recalled advice from his manager over fifteen years ago: “What's the problem you're actually trying to solve here?” This question remains just as relevant today. “If you don't know that, it really doesn't matter what tool you're with or where you're going to make those decisions, especially if you're trying to get some kind of ROI. You're kind of setting yourself up for failure.”

💡 Is your legal team tripping over the common pitfalls of implementing AI?

Step 1: Define the Problem, Not the Tool

The most successful AI implementations begin not with technology selection, but with honest self-assessment. Where does your legal team actually spend its time?

CJ’s advice to colleagues is straightforward: Ask yourself where you spend 50% of your time, then pick AI tools to experiment with on exactly those tasks. The logic is simple: if you focus on the repetitive, time-consuming work that dominates your week, you'll see immediate, demonstrable ROI, and you'll feel better about your job because a burden has been lifted.

Brandon echoed this sentiment from a transactional perspective: much of his day-to-day on technology deals involves pulling what he needs from the contract database or drafting “the same clause that I probably drafted at least a few times before.” Those repetitive tasks, he said, are where he looks for AI to be used first.

The key is specificity. Don't start with “I want AI to help with contracts.” Start with something like, “I spend three hours every week redlining vendor agreements against our standard playbook” or “I waste time searching for precedent language across five different repositories.”

Step 2: Understanding Your Data Landscape

Before you can leverage AI effectively, you need to understand what data you're working with. As Brandon explained, there's a wide variety of AI models: supervised machine learning models that require labeled data, unsupervised models that find patterns in unlabeled data, and generative deep learning models like ChatGPT that produce human-like language. “It all depends on what kind of data we're dealing with,” he noted.

Legal Data Hygiene: What “Good Enough” Actually Looks Like

Here's where many legal teams get stuck. They assume they need perfect data hygiene before they can touch AI. The reality is more nuanced.

If you're building knowledge retrieval, you obviously need to give the AI access to your data. If you want it to answer questions about your policies, or factual questions about what's in executed contracts, it needs access to those documents.

Think about it this way: Legal departments train attorneys to redline contracts using playbooks all the time, but they don't pile every contract the company has ever executed onto their desks. They give them a checklist of instructions. Playbooks tend to be concise, with bullet points. And it turns out that if a few pages of bullet points can get a human to do the right thing, they can get an LLM to do it too.
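
To make that concrete, here's a minimal sketch of what playbook-driven clause review looks like in code, assuming an enterprise OpenAI deployment with the no-training guarantees discussed below. The playbook rules, clause text, and model choice are hypothetical, and the output is a draft for attorney review, not finished work product:

    from openai import OpenAI

    # Hypothetical playbook excerpt: a few bullet points, not the full
    # contract repository.
    PLAYBOOK = """
    - Liability cap must not exceed 12 months of fees.
    - No unilateral auto-renewal; renewal requires written consent.
    - Governing law must be New York.
    """

    clause = "This Agreement renews automatically for successive one-year terms."

    client = OpenAI()  # assumes an enterprise key with data protections in place
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You review contract clauses against this playbook. "
                        "Flag deviations and suggest compliant redlines.\n" + PLAYBOOK},
            {"role": "user", "content": clause},
        ],
    )
    print(response.choices[0].message.content)  # an attorney reviews before it ships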

The bottom line on data hygiene? If you're building knowledge management or contract review processes, you need your data to be complete and current. That means three things (sketched in code after this list):

  • Completeness: Having all executed contracts in one place, not scattered across email, shared drives, and legacy systems
  • Currency: Knowing which agreements are active versus superseded or terminated
  • Consolidation: Bringing together amendments and related documents so the AI can understand the full picture
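
Here's a minimal sketch of those checks, assuming contract metadata has already been extracted into simple records; the field names and sample data are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class ContractRecord:
        name: str
        status: str                 # "active", "superseded", or "terminated"
        location: str               # which system holds the executed copy
        amendment_ids: list = field(default_factory=list)

    # Hypothetical repository snapshot
    repo = [
        ContractRecord("Acme MSA", "active", "CLM", ["A-1", "A-2"]),
        ContractRecord("Acme MSA (2019)", "superseded", "shared drive"),
        ContractRecord("Globex NDA", "active", "email"),
    ]

    # Completeness: every executed contract should live in one system of record
    scattered = [c.name for c in repo if c.location != "CLM"]

    # Currency: superseded or terminated agreements stay out of AI scope
    stale = [c.name for c in repo if c.status != "active"]

    print("Needs consolidation:", scattered)  # ['Acme MSA (2019)', 'Globex NDA']
    print("Exclude from AI scope:", stale)    # ['Acme MSA (2019)']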

As Adam noted, “It's almost system technology agnostic from an AI point of view in some ways. It's back to basics from a data management point of view.”

But for many other legal AI applications, like drafting from playbooks, conducting legal research, or generating initial language, you don't need to connect the tool to your entire contract database. You just need clear instructions.

Step 3: Safe Tooling and Governance: The Non-Negotiables

This is where legal teams must be uncompromising. The enterprise-level safeguards that were nice-to-have eighteen months ago are now table stakes.

When evaluating AI tools for legal work, the following are non-negotiables:

  • Data Training Protections: Legal professionals should not be putting their companies' private information into free ChatGPT. Every enterprise tool you consider should guarantee that it does not train its product on your data; if it must train on your data, that training should be isolated to your environment only.
  • Base Model Assurances: Most legal AI tools use underlying models from OpenAI, Anthropic, or Google. The vendor should have promises from these providers that their systems aren't getting trained based on your use of the tool.
  • Data Residency and Compliance: Brandon raised an important point that's often overlooked: “You have to specifically look at where any potential data is being housed with GDPR, especially if you have EU-related data.” Data centers in different jurisdictions carry different compliance implications and costs.
  • Human Oversight Requirements: As Adam emphasized, “There's still a lot of sensitivity around whether you want to fully automate versus having some human oversight and control.” This isn't just about comfort; it's about professional responsibility and emerging regulations like the EU AI Act.

The good news? Every product sold under an enterprise license should, at this point, guarantee that it does not train on your data. If a vendor can't provide these assurances, walk away.

Step 4: Start Small: Pilot with Ring-Fencing

One of the most powerful features CJ has seen in legal AI tools is the ability to ring-fence data. Tools like Harvey (which calls this functionality "Vault") and Legora (which calls it "Projects") let you create compartmentalized workspaces with specific document sets.

This approach offers several advantages:

  • Controlled Experimentation: You can test AI capabilities on a defined set of contracts or documents without exposing your entire repository.
  • Focused Results: The AI works only with the data you've specifically provided, leading to more predictable and relevant outputs.
  • Gradual Trust-Building: You can start small without trusting the tool with everything, then expand its access as it earns your confidence.

Many modern legal AI tools also offer multiple "database modes." You can point them at:

  • Pre-trained knowledge (what the model knew at its training cutoff)
  • The open internet for current information
  • Specific research libraries (like EDGAR filings or case law databases)
  • Your curated document sets

This flexibility means you can match the data source to the task at hand, rather than taking an all-or-nothing approach.
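
In code, ring-fencing is simply a constraint on what the retrieval step can see. Here's a conceptual sketch using a local folder of curated documents and naive keyword scoring; it illustrates the principle, not how Harvey's Vault or Legora's Projects are actually implemented:

    from pathlib import Path

    def load_workspace(folder: str) -> dict[str, str]:
        """Load only the documents placed in the ring-fenced workspace."""
        return {p.name: p.read_text() for p in Path(folder).glob("*.txt")}

    def retrieve(question: str, docs: dict[str, str], k: int = 3) -> list[str]:
        """Naive keyword scoring. Real tools use embeddings, but the principle
        holds: nothing outside the workspace is ever retrievable."""
        terms = question.lower().split()
        ranked = sorted(docs.items(),
                        key=lambda kv: -sum(kv[1].lower().count(t) for t in terms))
        return [name for name, _ in ranked[:k]]

    docs = load_workspace("pilot_contracts")  # hypothetical curated folder
    print(retrieve("What is the liability cap in the Acme MSA?", docs))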

Step 5: Prove ROI That Leadership Actually Cares About

Here's a reality check: Your general counsel and CFO don't care that AI can summarize a contract in 30 seconds instead of 30 minutes. They care about metrics that impact the business (a back-of-the-envelope example follows this list):

  • Cycle time reduction: Are deals closing faster?
  • Throughput improvement: Is your team handling more matters without adding headcount?
  • Error rate reduction: Are you catching issues earlier in review processes?
  • Stakeholder satisfaction: Are sales teams and business units happier with legal turnaround times?
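
For instance, here's the kind of rough math that lands with a CFO. Every figure below is an assumption chosen for illustration, not a benchmark:

    # Hypothetical ROI framing: translate hours saved into capacity and cost
    weekly_redline_hours = 6        # routine redlining per attorney, per week
    ai_reduction = 0.5              # assumed 50% time reduction, review retained
    attorneys = 4
    fully_loaded_hourly_cost = 150  # USD, illustrative

    hours_redeployed = weekly_redline_hours * ai_reduction * attorneys * 48
    print(f"Hours redeployed per year: {hours_redeployed:.0f}")  # 576
    print(f"Capacity value: ${hours_redeployed * fully_loaded_hourly_cost:,.0f}")
    # 576 hours is roughly 30% of one attorney-year freed for higher-value work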

Adam stressed the importance of securing buy-in from all stakeholders early: “Unless you have buy-in from all the people that have to determine what tool is appropriate, what information should be accessed, what shouldn't be accessed... there's definitely pieces that could be roadblocks to getting to that final end state.”

This means involving not just legal leadership but also IT, compliance, operations, and the business teams who will benefit from faster legal review processes.

Common Pitfalls and How to Avoid Them

The Hallucination Problem

Legal AI tools aren't perfect. They make mistakes, just as junior associates do. And CJ can't emphasize this enough: Never assume the AI will get it right. You're responsible for the output; it's your work product. The AI is an assistant.

Brandon shared similar concerns: “AI can't fill in the gaps or correct bad or improper information... It is our responsibility still to ensure that the data that we put in is correct, relevant, and ethical.”

The solution? Treat AI outputs like you'd treat work from a first-year associate. Review everything before it goes out the door.

Tool Sprawl and Vendor Whack-a-Mole

Adam described what he called the “whack-a-mole phenomenon.” One vendor comes up with something, another throws something new in your face claiming it's better, and then internal teams want to build their own. “The noise is hard to get through,” he noted.

There are now over 600 AI products marketed specifically to lawyers. The lesson? Resist the urge to constantly chase the newest thing. Pick enterprise-grade tools that solve your identified problems, give them time to deliver results, and only then consider expanding.

The Build-It-Yourself Trap

Eighteen months ago, some legal departments thought building their own in-house ChatGPT clone would give them total control. In practice, building and maintaining AI infrastructure is difficult.

Brandon's company, Cisco, has built an internal AI tool called Circuit, and he acknowledged the challenge: “It's hard. Only the largest companies can realistically pull this off.”

Getting Adoption Right: Building a Culture of Smart Experimentation

Technology doesn't fail. Adoption does. The best AI tools in the world won't help if your team won't use them.

Create Safe Spaces for Learning

At Axiom, CJ has found that creating opportunities for attorneys to experiment safely is crucial. He recommends two different approaches, depending on the context:

For personal exploration: AI is going to change the world, and you should absolutely play around with it. If you don't learn how to interact with these tools now, you risk being left behind.

You can learn a lot without sharing any confidential information. Try asking ChatGPT or Google Gemini to draft an indemnification clause for a hypothetical software licensing agreement in your industry.

For professional application: Use your company-approved tools on real work. If IT and legal have vetted the platform, trust that they've done their due diligence on data security.

Show-and-Tell Sessions

Adam described having “a community of practice where people can go in and ask direct questions. Why isn't this working? Did anyone have this problem? Prompt suggestions, all of that.”

Colin shared that his team at Malbek holds regular sessions where different departments show each other what they're doing with AI: “My marketing team—oh my gosh, some of the things they're doing with AI are way beyond even things I would consider, but it's been incredibly enlightening and helpful.”

Brandon noted that at Cisco, AI adoption has been both organic and encouraged: “People internally are using it, but I truly did not think they would.” The key was making it accessible and demonstrating real value through peer examples.

Training by Example, Not a Checkbox

As Adam pointed out, traditional training is often just “check the box. It's not as engaging as doing some of the independent sessions, finding webinars like this.”

The most effective training combines:

  • Formal introductions to approved tools and guardrails
  • Ongoing peer learning through communities of practice
  • Real examples of time savings and quality improvements
  • Permission to experiment within defined boundaries

💡 Take the first step toward an AI strategy that strengthens (not replaces) your legal expertise.

A Final Word: Patience and Persistence

Adam shared a memorable insight from a recent conference: these AI tools are “as bad as they will ever be.” Translation: if the technology can't do what you want today, it most likely will in the not-too-distant future.

The legal industry is at an inflection point similar to when the internet first emerged. We're learning what's possible, what's useful, and what's just hype. The teams that will thrive are those that start experimenting now—thoughtfully, safely, and with clear goals in mind.

Brandon summed it up perfectly: “AI is not about replacing lawyers, but empowering them to focus on higher value tasks. And in a world where clients demand as much bang for their buck when it comes to the billable hour, AI truly can be a catalyst in helping us achieve that.”

Ready to Get Started?

At Axiom, we're helping legal departments navigate AI adoption with the same practical, business-focused approach we bring to all our work. Whether you need help defining your AI strategy, implementing specific tools, or building internal capabilities, our experienced legal professionals can help you achieve fast, safe wins.

The future of legal work is here. Let's make sure you're ready for it.
