AI Governance Framework: How Legal Teams Can Get It Right
March 2026
By Axiom Law
AI adoption among legal teams has more than doubled in just one year. According to a joint study from Baker Donelson and IBM, fewer than a quarter of legal teams were using AI in 2024. By 2025, that number had climbed past half, with more than three-quarters of teams planning to increase their AI budgets in 2026.
That kind of growth is exciting. But it’s also a little terrifying if your organization does not have a plan for how to manage it responsibly.
During an Axiom webinar on AI governance and the legal profession, three in-house legal leaders sat down to talk through what effective AI governance actually looks like in practice. Dorothy Du, General Counsel at Power Solutions International, Ken Priore, Deputy General Counsel at DocuSign, and Steve Drake, VP of Commercial Transactions and Legal at Coherent, each brought a different perspective to the conversation. What came through clearly was this: AI governance is a strategic function, and legal teams are uniquely positioned to lead it.
What Are AI Governance Frameworks?
An AI governance framework is the set of policies, processes, and accountability structures an organization puts in place to manage how AI systems are developed, deployed, and used. It covers everything from acceptable use policies for internal generative AI tools to the standards that govern how AI is embedded in customer-facing products.
At its core, a governance framework is about making sure AI ethics, transparency, and accountability are not afterthoughts. It’s about establishing clear expectations before something goes wrong, not scrambling to put guardrails up after.
That distinction matters. During the webinar, Steve shared how, when he was general counsel at a previous company, a widely reported incident involving engineers who inadvertently exposed sensitive code through a public LLM became an inflection point. The initial reaction was to put a hold on all generative AI usage, explore the situation, set policy, and define guardrails. In hindsight, he believes that the blanket pause was the wrong call. A better-designed governance framework would have allowed teams to keep moving while giving them clear guidance on what was and was not acceptable.
The goal of any AI risk management framework should be to enable responsible AI development, not to shut it down. As the need for AI frameworks grows, companies can benefit from the advice of an artificial intelligence lawyer.
Principles of an AI Governance Framework
The panelists spent considerable time on the building blocks of a governance framework that actually works. A few core principles kept coming up:
Start with data governance
Before you can govern AI effectively, you need to understand your data, and step one is knowing what data you have, where it lives, and how it is classified. AI risk is, in many ways, data risk. What information is being fed into these systems? What categories of data would be most harmful if exposed or misused? How is access controlled? These questions are foundational, and organizations with mature data governance practices have a real head start.
Know whether the use is internal or external
Ken described this as a fork in the road that shapes almost everything else. An internal tool used to streamline workflows carries different governance implications than a generative AI feature embedded in a customer product. External, customer-facing AI raises questions about disclosure, trust, and accountability that internal tooling may not. Customers want to know whether they can trust a product now that AI capabilities are integrated into it, and how that affects the way their data is being used. Governance posture should reflect that distinction.
Apply risk-based scrutiny
Not every AI use case warrants the same level of review. A tool that automates board meeting minutes should have a lighter-touch approval process than an AI system being used to make hiring decisions. Using AI to collect consumer location data and make autonomous decisions on their behalf requires considerably more scrutiny. A well-designed AI risk management framework tiers its requirements based on the sensitivity and potential impact of each use case. The goal is not to slow everything down. It is to make sure the governance effort matches the actual risk.
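To make the tiering idea concrete, here is a minimal, purely illustrative sketch of a risk-tiering rule. The tier names, inputs, and thresholds are hypothetical assumptions, not anything the panelists prescribed; a real framework would weigh many more factors.

```python
# Illustrative sketch (hypothetical tiers and criteria): map an AI use case
# to a review tier based on data sensitivity and decision impact.

def review_tier(handles_personal_data: bool,
                makes_autonomous_decisions: bool,
                customer_facing: bool) -> str:
    """Return a review tier for an AI use case."""
    # Highest scrutiny: autonomous decisions made with personal data,
    # e.g., AI acting on consumer location data or making hiring calls.
    if makes_autonomous_decisions and handles_personal_data:
        return "full-review"
    # Two or more risk signals still warrant a standard review.
    if sum([handles_personal_data, makes_autonomous_decisions,
            customer_facing]) >= 2:
        return "standard-review"
    # Low-impact internal tooling, e.g., automating board meeting minutes.
    return "light-touch"

print(review_tier(False, False, False))  # meeting-minutes bot -> light-touch
print(review_tier(True, True, True))     # autonomous consumer tool -> full-review
```

The point of the sketch is simply that the approval path is decided by explicit, documented criteria rather than ad hoc judgment, so the governance effort scales with the actual risk.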
Build in vendor protections
It’s also important to have clear, standardized AI addenda when engaging outside vendors. These should cover data protection provisions (vendors should not use your data to train their models), deletion requirements if you disengage, data isolation so your information is not commingled with other customers of the same platform, and liability and indemnification language in case something goes wrong. Having these templates ready speeds up adoption and builds security and compliance safeguards directly into the procurement process.
Document decisions and prepare for incident response
If AI-related litigation or regulatory investigation lands on your company's doorstep, documentation is your defense. Dorothy emphasized keeping thorough records of what data is collected, why specific decisions were made, and how AI outputs are used. Alongside that, organizations need an incident response process for AI-specific events, whether that is a data leak, an unexpected output, or a compliance failure.
Train your people
A governance policy that sits in a shared drive and never gets read is not a governance policy. Even if your written framework is excellent, it will not work if employees do not know how to use it, how to flag incidents, or what the rules actually mean in practice. Effective AI governance requires ongoing training across functions.
Importance of an AI Governance Framework
Governance often gets framed as a brake on innovation. In practice, the opposite tends to be true.
When governance is designed thoughtfully and tiered by risk, it actually speeds things up. People know what they need to do to get approval. They know who to go to. They know what information to provide. Clarity accelerates progress; ambiguity creates the kind of hesitation that genuinely slows teams down. Think of it like a well-planned downtown traffic system. The goal is to keep the lights green as much as possible so traffic flows smoothly. The point is not to add red lights. It is to make sure the cars do not crash.
There are also straightforward risk management reasons to take this seriously. Regulatory compliance requirements are evolving quickly. The EU AI Act has introduced new obligations based on AI risk classifications. State-level AI legislation in the US is still developing, but organizations that have built accountable AI practices into their operations will be better positioned when those rules arrive. The NIST AI risk management framework offers a useful structure for organizations working through how to approach this systematically.
And then there is trust. Customers, regulators, and employees are all asking whether they can trust the AI-powered products and services they interact with. A credible, well-communicated governance framework is part of the answer to that question. Organizations that can point to transparent accountability structures and responsible AI development practices will have an advantage as scrutiny increases.
Governance Is a Living Process, Not a One-Time Project
One thing the panelists agreed on, even as they brought different perspectives, is that AI governance is not something you complete. It’s an iterative, dynamic process. The technology is changing. The use cases are expanding. Regulatory frameworks are still taking shape. A governance policy that made sense six months ago may have gaps today.
That said, iteration does not have to mean constant reinvention. A well-designed, broadly permissive governance policy with clear carve-outs for genuinely sensitive use cases can stay relatively stable even as the landscape evolves, much like how organizations manage ethics and compliance in other areas.
What does change is the cross-functional work required to keep up. It’s important to build a governance committee that includes engineering, IT, data privacy, HR, sales, R&D, and compliance, not just legal. The Baker Donelson and IBM study found that only about a third of respondents said they had effective cross-functional teams in place. That is a gap worth closing.
Because the most important thing your governance committee does is not writing policy. It is surfacing the things legal does not know yet: a new tool a team is quietly piloting, a use case nobody thought to flag, a process that has drifted outside the guardrails. That feedback loop is how governance stays relevant.
Getting Started: Practical First Steps
For legal teams still early in the process, the panelists had concrete advice.
Form a cross-functional steering committee and conduct an AI use inventory across the organization to understand how AI is actually being used and where the risks sit, and then build a risk-based governance framework from there. Those three steps, done in sequence, will put you on solid ground.
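The AI use inventory at the heart of those steps can start as something very simple: a record per tool capturing who owns it, what data it touches, and whether it faces customers, so the risk-based review in step three has something to work from. The sketch below is a hypothetical illustration; the field names and example entries are assumptions, not a prescribed schema.

```python
# Hypothetical sketch of an AI use inventory: one record per tool,
# capturing the facts a risk-based review needs. Entries are invented
# examples for illustration only.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    tool: str
    owner_team: str
    data_categories: list   # e.g., ["public", "internal", "personal"]
    customer_facing: bool

inventory = [
    AIUseCase("meeting-minutes-bot", "Corporate Secretary", ["internal"], False),
    AIUseCase("support-chat-assistant", "Customer Success", ["personal"], True),
]

# Surface the entries that warrant deeper review first: anything
# customer-facing or touching personal data.
needs_scrutiny = [u.tool for u in inventory
                  if u.customer_facing or "personal" in u.data_categories]
print(needs_scrutiny)  # ['support-chat-assistant']
```

Even a spreadsheet with these columns is enough to begin; the value is in having one authoritative list the steering committee can triage.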
Legal teams should lead by example. Be early adopters of the AI tools available to you, and show the organization what thoughtful AI use looks like. When legal is out front rather than playing catch-up, it changes the dynamic.
Get a sandbox environment stood up so teams can experiment with AI tools in a controlled setting, without touching sensitive data, while legal and compliance learn alongside them. That hands-on exposure often surfaces questions and concerns that a written policy never would.
AI is already here. The question is whether your organization is building the governance structures that allow you to use it well, or whether you are operating on hope and handshakes.
A thoughtful AI governance framework does not have to be complicated. It starts with knowing your data, understanding your risk, building the right cross-functional team, and giving people clear guidance they can actually act on. From there, you iterate.
Legal teams are not just compliance gatekeepers in this process. They are well-positioned to be real strategic leaders in helping their organizations adopt AI responsibly and competitively. That is a role worth stepping into.