AI Governance Framework: How Legal Teams Can Get It Right

March 2026
By Axiom Law

AI adoption among legal teams has more than doubled in just one year. According to a joint study from Baker Donelson and IBM, fewer than a quarter of legal teams were using AI in 2024. By 2025, that number had climbed past half, with more than three-quarters of teams planning to increase their AI budgets in 2026.

That kind of growth is exciting. But it’s also a little terrifying if your organization does not have a plan for how to manage it responsibly.

During an Axiom webinar on AI governance and the legal profession, three in-house legal leaders sat down to talk through what effective AI governance actually looks like in practice. Dorothy Du, General Counsel at Power Solutions International, Ken Priore, Deputy General Counsel at DocuSign, and Steve Drake, VP of Commercial Transactions and Legal at Coherent, each brought a different perspective to the conversation. What came through clearly was this: AI governance is a strategic function, and legal teams are uniquely positioned to lead it.

What Are AI Governance Frameworks?

An AI governance framework is the set of policies, processes, and accountability structures an organization puts in place to manage how AI systems are developed, deployed, and used. It covers everything from acceptable use policies for internal generative AI tools to the standards that govern how AI is embedded in customer-facing products.

At its core, a governance framework is about making sure AI ethics, transparency, and accountability are not afterthoughts. It’s about establishing clear expectations before something goes wrong, not scrambling to put guardrails up after.

That distinction matters. During the webinar, Steve shared how a widely reported incident, in which engineers inadvertently exposed sensitive code through a public LLM, became an inflection point when he was general counsel at a previous company. The initial reaction was to pause all generative AI usage while the company assessed the situation, set policy, and defined guardrails. In hindsight, he believes the blanket pause was the wrong call. A better-designed governance framework would have let teams keep moving while giving them clear guidance on what was and was not acceptable.

The goal of any AI risk management framework should be to enable responsible AI development, not to shut it down. As the need for AI governance frameworks grows, companies can benefit from the advice of an artificial intelligence lawyer.

Principles of an AI Governance Framework

The panelists spent considerable time on the building blocks of a governance framework that actually works. A few core principles kept coming up:

Start with data governance

Before you can govern AI effectively, you need to understand your data, and step one is knowing what data you have, where it lives, and how it is classified. AI risk is, in many ways, data risk. What information is being fed into these systems? What categories of data would be most harmful if exposed or misused? How is access controlled? These questions are foundational, and organizations with mature data governance practices have a real head start.

Know whether the use is internal or external

Ken described this as a fork in the road that shapes almost everything else. An internal tool used to streamline workflows carries different governance implications than a generative AI feature embedded in a customer product. External, customer-facing AI raises questions about disclosure, trust, and accountability that internal tooling may not. Customers want to know whether they can trust a product now that AI capabilities are integrated into it, and how that affects the way their data is being used. Governance posture should reflect that distinction.

Apply risk-based scrutiny

Not every AI use case warrants the same level of review. A tool that automates board meeting minutes should have a lighter-touch approval process than an AI system used to make hiring decisions, and using AI to collect consumer location data and make autonomous decisions on consumers' behalf requires considerably more scrutiny still. A well-designed AI risk management framework tiers its requirements based on the sensitivity and potential impact of each use case. The goal is not to slow everything down. It is to make sure the governance effort matches the actual risk.

Build in vendor protections

It’s also important to have clear, standardized AI addenda when engaging outside vendors. These should cover data protection provisions (vendors should not use your data to train their models), deletion requirements if you disengage, data isolation so your information is not commingled with other customers’ data on the same platform, and liability and indemnification language in case something goes wrong. Having these templates ready speeds up adoption and builds security and compliance safeguards directly into the procurement process.

Document decisions and prepare for incident response

If AI-related litigation or regulatory investigation lands on your company's doorstep, documentation is your defense. Dorothy emphasized keeping thorough records of what data is collected, why specific decisions were made, and how AI outputs are used. Alongside that, organizations need an incident response process for AI-specific events, whether that is a data leak, an unexpected output, or a compliance failure.

Train your people

A governance policy that sits in a shared drive and never gets read is not a governance policy. Even if your written framework is excellent, it will not work if employees do not know how to use it, how to flag incidents, or what the rules actually mean in practice. Effective AI governance requires ongoing training across functions.

Importance of an AI Governance Framework

Governance often gets framed as a brake on innovation. In practice, the opposite tends to be true.

When governance is designed thoughtfully and tiered by risk, it actually speeds things up. People know what they need to do to get approval, who to go to, and what information to provide. Clarity accelerates progress; ambiguity creates the hesitation that genuinely slows teams down. Think of it like a well-planned downtown traffic system. The goal is to keep the lights green as much as possible so traffic keeps flowing. The point is not to add red lights. It is to make sure the cars do not crash.

There are also straightforward risk management reasons to take this seriously. Regulatory compliance requirements are evolving quickly. The EU AI Act has introduced new obligations based on AI risk classifications. State-level AI legislation in the US is still developing, but organizations that have built accountable AI practices into their operations will be better positioned when those rules arrive. The NIST AI Risk Management Framework offers a useful structure for organizations working through how to approach this systematically.

And then there is trust. Customers, regulators, and employees are all asking whether they can trust the AI-powered products and services they interact with. A credible, well-communicated governance framework is part of the answer to that question. Organizations that can point to transparent accountability structures and responsible AI development practices will have an advantage as scrutiny increases.

Governance Is a Living Process, Not a One-Time Project

One thing the panelists agreed on, even as they brought different perspectives, is that AI governance is not something you complete. It’s an iterative, dynamic process. The technology is changing. The use cases are expanding. Regulatory frameworks are still taking shape. A governance policy that made sense six months ago may have gaps today.

That said, iteration does not have to mean constant reinvention. A well-designed, broadly permissive governance policy with clear carve-outs for genuinely sensitive use cases can stay relatively stable even as the landscape evolves, much like how organizations manage ethics and compliance in other areas.

What does change is the cross-functional work required to keep up. It’s important to build a governance committee that includes engineering, IT, data privacy, HR, sales, R&D, and compliance, not just legal. The Baker Donelson and IBM study found that only about a third of respondents said they had effective cross-functional teams in place. That is a gap worth closing.

Because the most important thing your governance committee does is not writing policy. It is surfacing the things legal does not know yet: a new tool a team is quietly piloting, a use case nobody thought to flag, a process that has drifted outside the guardrails. That feedback loop is how governance stays relevant.

Getting Started: Practical First Steps

For legal teams still early in the process, the panelists had concrete advice.

First, form a cross-functional steering committee. Second, conduct an AI use inventory across the organization to understand how AI is actually being used and where the risks sit. Third, build a risk-based governance framework from there. Those three steps, done in sequence, will put you on solid ground.

Legal teams should lead by example. Be early adopters of the AI tools available to you, and show the organization what thoughtful AI use looks like. When legal is out front rather than playing catch-up, it changes the dynamic.

Stand up a sandbox environment so teams can experiment with AI tools in a controlled setting, without touching sensitive data, while legal and compliance learn alongside them. That hands-on exposure often surfaces questions and concerns that a written policy never would.

AI is already here. The question is whether your organization is building the governance structures that allow you to use it well, or whether you are operating on hope and handshakes.

A thoughtful AI governance framework does not have to be complicated. It starts with knowing your data, understanding your risk, building the right cross-functional team, and giving people clear guidance they can actually act on. From there, you iterate.

Legal teams are not just compliance gatekeepers in this process. They are well-positioned to be real strategic leaders in helping their organizations adopt AI responsibly and competitively. That is a role worth stepping into.