Balancing Innovation and Governance: How Legal Teams Manage AI Risk
November 2025
By Jacob Flax
The gap between AI risk awareness and action is widening. Recent research reveals that 69% of legal teams identify AI as a major risk concern, yet only 40% have implemented adequate safeguards. This disconnect isn't merely a compliance issue. Instead, it represents a strategic vulnerability that could determine which organizations successfully leverage AI and which struggle with costly missteps.
During a recent legal webinar with Mastercard's APAC legal leadership (a client Axiom has supported across multiple regions for three years), the conversation moved decisively beyond whether to adopt AI to how to implement it safely and effectively. Insights from Sara Morgan, Chief Revenue Officer at Axiom; Natasha Sabnani, Senior Counsel supporting Mastercard's Asia Pacific commercial operations from Singapore; and Rama Lingard, who leads the Australasia legal team from Sydney, revealed both the promise and the complexity of AI implementation in practice.
APAC Leads in AI Maturity
The Asia-Pacific region is emerging as a significant driver of legal AI adoption. Our research indicates that Singapore leads APAC with approximately 33% AI maturity among legal teams, while Australia is establishing itself as an early adoption hub. This regional leadership reflects several factors: sophisticated technology infrastructure, regulatory frameworks that support innovation, and concentration of multinational headquarters seeking competitive advantage through technology.
Mastercard's position as an early test partner for Microsoft Copilot exemplifies this trend. Operating as what Rama characterized as "a technology company in the payments space," Mastercard invested heavily in AI well before generative AI captured broader attention. Their initial focus on machine learning for fraud detection, processing millions of daily transactions to identify threat vectors and generate risk scores, established both technical capabilities and organizational comfort with AI-driven decision-making.
This foundation proved valuable when deploying AI across legal operations. However, the journey wasn't without challenges.
Implementation Reality: Learning Through Iteration
Rama's candid assessment of early Copilot experiences highlights a common implementation pattern: initial frustration followed by gradual value realization. His attempt to automate 360-degree performance reviews using Copilot failed. The tool confused individuals, ignored character restrictions, and generated identical reviews for different people. The lesson wasn't that AI couldn't help with HR processes, but that expectations needed calibration and use cases required refinement.
This iterative approach—starting with broad ambitions, encountering limitations, and refining to practical applications—characterizes successful AI adoption. Mastercard didn't abandon Copilot after early setbacks. Instead, they identified specific, repeatable tasks where AI delivered measurable value.
Today, Copilot is embedded across Mastercard's Microsoft environment: Word, Excel, Teams, and Outlook. Natasha described several high-value applications:
- Meeting documentation: Copilot generates accurate summaries with action items and can draft follow-up emails to relevant stakeholders, significantly reducing administrative burden.
- Quick legal research: For questions requiring rapid response, Copilot provides initial findings with sources, enabling counsel to determine next steps efficiently.
- Contract drafting support: The tool scans existing documents to provide examples for specific provisions, though counsel must verify sources to avoid confidentiality or IP issues.
- Policy searches: Rather than manually reviewing entire policy databases, lawyers can query Copilot for relevant policies on specific topics.
Critically, none of these applications remove legal judgment from the process. As Natasha emphasized repeatedly, "You need a human in the loop because you do need to validate the findings."
Beyond Copilot, Mastercard is piloting Thomson Reuters' CoCounsel for specialized legal work. The implementation revealed an important insight about AI licensing: These tools don't work effectively when shared across users without individual licenses. Effective use requires teaching the tool through specific, personalized prompts, similar to training an intern. This creates both cost considerations and questions about optimal license allocation across legal teams.
The Governance Imperative
Mastercard's approach to AI governance reflects the complexity required for responsible implementation. The organization established steering committees, including representatives from regulatory, compliance, and data privacy teams. They developed comprehensive AI policies defining acceptable use cases and prohibited activities. They invested in intensive training, with weekly sessions building knowledge databases of effective prompts and examples.
But governance extends beyond policies and training to fundamental questions about legal practice in the AI era:
- Legal privilege concerns: When AI tools automatically record and summarize legal advice during meetings, what happens to privilege protection? In Australia, where in-house counsel privilege is already difficult to establish and easy to lose, AI-generated meeting records create new risk. The convenience of perfect documentation must be weighed against creating discoverable materials that could waive privilege or expose sensitive discussions.
- Information barriers: Rama shared a striking example of governance failure at a major Australian law firm. He received a draft contract clearly incorporating terms he had previously negotiated with a competitor. These were terms the firm had obviously fed into an AI tool without adequate information barriers. Confidential negotiating positions from one client appeared in another client's documents, potentially creating conflicts and certainly raising questions about the firm's data management practices.
- Data segregation: Mastercard insisted on operating AI tools in segregated environments where their data wouldn't train broader models or become accessible to other users. The principle that client data should never be used to improve AI models without explicit consent represents a fundamental governance requirement that organizations must verify with every AI vendor.
- Confidentiality classification: Perhaps the most challenging governance question was raised by Sara: How do you define confidentiality when different people have vastly different interpretations? Mastercard implemented an information classification standard with categories like public, confidential, and highly confidential. Yet even with three categories, implementation proved difficult. People either over-classify routine information or under-classify genuinely sensitive material.
This challenge intensifies in global organizations where cultural norms around information sharing vary significantly. The encryption tools Mastercard added for highly confidential information created new friction when external advisers couldn't access materials they needed to review.
Risk Management Framework
From Axiom's evaluation of AI platforms, which tested 50 different tools on real-world matters before selecting our core offerings, several critical risk management principles emerge:
- Data protection as a prerequisite: Never implement AI that trains its model on your data without fully understanding implications. Client data segregation isn't a feature; it's a requirement. Vendors must explicitly confirm that your data remains isolated and doesn't improve models accessible to other users.
- Workflow integration: Tools requiring separate web interfaces create security vulnerabilities and adoption friction. Effective AI operates within existing workflows like Word, email, and case management systems rather than requiring lawyers to work across multiple platforms.
- Accuracy verification: All AI outputs require human review. The question isn't whether AI makes mistakes. It does. It’s whether your team has sufficient experience to identify errors. This "human in the loop" requirement means AI augments judgment rather than replacing it.
- Transparency and training: Effective use requires understanding how AI tools work, their limitations, and appropriate applications. Mastercard's weekly training sessions and prompt libraries reflect the level of investment needed. Organizations unprepared to train teams properly aren't prepared to implement AI effectively.
- Privilege preservation: Legal teams must develop protocols for when AI recording and summarization features should be disabled. The documentary trail AI creates grows exponentially, and not all discussions benefit from permanent, searchable records.
Economic Implications
The business case for AI in legal operations is becoming clearer, though results vary significantly based on implementation quality.
- Law firm adoption without client benefit: While 79% of law firms now use AI, only 58% pass savings to clients. This creates both opportunity and tension. In-house teams implementing AI effectively can bring work internally that previously required external counsel. Simultaneously, they can demand cost savings from firms using AI to improve efficiency.
Natasha was direct about Mastercard's expectations: "If a law firm can really show us that they have thought about using AI to pass on those cost savings to us, I think that will be a big plus point because it would feel like we're both working towards the same common objective."
- Shifting work allocation: Mastercard has already begun the behavioral shift. Work previously sent to law firms—like comprehensive contract risk reviews across large portfolios—is now being evaluated for internal AI-enabled execution. This doesn't eliminate external counsel relationships but focuses them on high-value work requiring sophisticated legal judgment rather than just systematic review and analysis.
- Measurable efficiency gains: Axiom's experience implementing AI-enabled workflows shows efficiency gains of approximately 75% in contract review and due diligence work, and 40-60% in contract negotiation and drafting. On a recent M&A project for a major food brand, we delivered $500,000 in first-phase savings through AI-enabled processes.
- Cost considerations beyond licensing: While AI tools create efficiency opportunities, organizations must consider total implementation costs. Rama noted that Copilot was temporarily disabled during his medical leave due to insufficient usage, highlighting the tension between consistent use requirements for value realization and organizational cost management.
Additionally, AI generates exponentially more data requiring storage, creating ongoing infrastructure costs. Meeting transcripts, document analyses, and research summaries all require hosting and maintenance.
- Alternative solutions: Not every efficiency challenge requires AI. As Rama emphasized, organizations often already license tools through Microsoft, Google, or other providers that can address specific needs without specialized AI investment. The question should be which tool solves the problem most effectively, not which tool uses the most sophisticated AI.
Strategic Implementation Recommendations
For legal teams navigating AI adoption while managing governance and risk, several strategic approaches emerge from Mastercard's experience and Axiom's broader implementation work:
- Start with defined use cases: Identify specific, repeatable tasks where AI can deliver measurable value. Contract review, due diligence, meeting summaries, and policy searches represent proven applications. Complex legal judgment, relationship management, and nuanced negotiation remain human domains.
- Establish governance before deployment: Develop AI policies, form cross-functional steering committees, and create training programs before implementing tools. Mastercard's approach of involving regulatory, compliance, and data privacy teams in AI decisions should be standard practice.
- Prioritize data protection verification: Explicitly confirm with vendors that your data won't train models, that client information remains segregated, and that you understand data handling practices. This requires going beyond marketing materials and into technical specifications and contractual commitments.
- Invest substantially in training: Effective AI use isn't intuitive. It requires learning prompt engineering, understanding limitations, and developing judgment about when to rely on AI versus human expertise. Weekly training sessions and prompt libraries, like those Mastercard has implemented, reflect the level of investment required.
- Develop information classification systems: Before AI generates exponentially more data, establish clear standards for public, confidential, and highly confidential information. Train teams on applying classifications consistently and consider implications for legal privilege.
- Create privilege protocols: Develop guidelines for when AI recording and summarization should be disabled, particularly in discussions involving legal advice, sensitive negotiations, or confidential business strategy. Not every meeting benefits from AI-generated documentation.
- Evaluate external counsel AI use: At the RFP stage, require law firms to demonstrate AI implementation and explain how they'll pass efficiency gains to clients. Make clear that AI-enabled cost reduction represents a competitive advantage in vendor selection.
- Plan for iterative improvement: Expect learning curves, some false starts, and gradual capability development rather than immediate transformation. Organizations succeeding with AI commit to ongoing refinement rather than seeking quick wins.
The Agentic AI Horizon
Rama raised an important consideration about the next evolution: agentic AI that acts autonomously on behalf of individuals or organizations. While current generative AI responds to prompts, agentic AI would independently execute tasks: booking travel, negotiating contracts, or conducting research.
This shift creates profound governance questions. If your AI agent engages in anti-competitive behavior, fraudulent activity, or contractual breaches, who bears legal responsibility? The parallel to self-driving vehicles is instructive. In Australia, drivers remain criminally liable for accidents even when vehicles operate autonomously.
Legal teams must prepare for this evolution by understanding liability frameworks, developing protocols for AI agent oversight, and ensuring governance structures can adapt to increasingly autonomous AI capabilities.
From Complexity to Competitive Advantage
Sara challenged the "death of lawyers" narrative that accompanies each technological advancement. When LexisNexis emerged, lawyers feared research automation would eliminate junior positions. When e-discovery tools appeared, document review seemed destined for obsolescence. Each time, predictions proved incorrect. Technology enabled more thorough work rather than replacing lawyers.
The pattern holds with AI, but with an important caveat: The lawyers and organizations that thrive will be those embracing AI as judgment augmentation rather than replacement. They'll work for organizations that invest in proper governance, training, and risk management rather than rushing deployment without adequate safeguards.
The gap between AI risk awareness and implementation of safeguards represents both vulnerability and opportunity. Legal teams closing this gap now, getting governance right while others remain in planning stages, will build a significant competitive advantage.
The APAC region's leadership in AI maturity also creates an opportunity for organizations willing to learn from early implementers like Mastercard. The conversation has matured beyond hype to practical implementation challenges. Success requires sustained investment, thoughtful governance, and realistic expectations about value realization timelines. The organizations that master this complexity—making it invisible to the businesses they support—will transform AI from a risk concern into a strategic differentiator.
The question isn't whether your legal team should adopt AI; it's whether you're prepared to govern it effectively.
Jacob Flax is Managing Director and Head of APAC at Axiom, where he helps in-house legal teams improve operational and financial performance through high-quality legal talent and innovative solutions. Previously, he served as Senior Vice President at Gerson Lehrman Group (GLG) Australia and held roles at Bloomberg LP and Deloitte Australia's Financial Advisory Services division.