EU AI Act: Six Insights into Its Global Implications
February 2024
By
Axiom Law
The European Union’s Artificial Intelligence (AI) Act is the world’s first comprehensive legal framework governing the use of AI.
It’s meant to clarify various legal and ethical issues surrounding the use of generative AI, with limitations and safeguards to ensure fundamental rights are protected. It also aims to provide a legal framework for businesses and organizations using these tools to encourage innovation among small and medium-sized organizations in the EU.
The legislation applies across all 27 member states, with global implications for every industry, regardless of whether a company is based in Europe. It sets the stage for ongoing AI legislative efforts and could influence legal practices and considerations in the U.S.
In this month’s Higher Bar webinar, Axiom's own Daniel Hayter joined Lucy McGrath, Managing Director of Dasein Privacy; Stefania Quintaje, Axiom lawyer; and Jason Mark Anderman, General Counsel of a stealth AI contract intelligence startup, to discuss these takeaways and more.
Here are six major takeaways:
1. Engage with Business Units to Identify Risks
The legal statutes for using artificial intelligence are slowly coming into place, and this landmark piece of legislation from the European Parliament is only the beginning. Innovation always outpaces the legislative process, with new technologies coming online before businesses have legal roadmaps for using them responsibly.
This confusion can prevent some companies from experimenting with AI out of an abundance of caution, which could put them at a disadvantage. Other organizations may hold off on allowing legal departments and employees to use these tools until there are clear laws in place in their target market. Using generative AI without established legal protocols inevitably increases a corporation’s risk profile, but hesitance can also reduce innovation and efficiency as the company’s competitors pull ahead.
Legal experts recognize this dilemma, but the absence of legal clarity doesn’t warrant inaction. Companies should start assessing their potential risk exposure using the restrictions and guidelines outlined in the EU AI Act, even though equivalent legislation has yet to pass in the U.S.
“There are still open questions about copyright and intellectual property, about what we can really implement inside the company without the risk of opening big challenges around material previously owned by someone else, or delivering something that is not completely free,” noted Quintaje. “We need to understand which types of data we can use, which things are still valid, and which processes are okay or need to be rediscussed by an internal panel inside your company, so you have a good foundation for the future.”
Regardless of a company’s policies on AI use, chances are someone within the organization is already using generative tools and general-purpose AI models like ChatGPT.
Identifying the potential risks keeps the company abreast of how its assets and reputation may be affected now that concrete regulations are on the books in one of the largest markets in the world. This risk analysis also gives the company more time to prepare a response once new legislation is passed: the legal team will understand how future laws will affect the company as they work their way through Congress and other legislatures.
The EU AI Act may not be the be-all and end-all of regulation, but it sets clear restrictions on “high-risk AI systems” in specific industries and bans certain applications outright, including:
- Remote biometric identification systems and classification based on sensitive characteristics, such as political, religious, philosophical beliefs, sexual orientation, or race
- Untargeted facial recognition using public image databases
- Emotion recognition of employees and students in workplaces and educational institutions
- Social scoring based on behavioral characteristics
- Manipulative techniques designed to deprive users of free will
- Exploiting user vulnerabilities based on age, race, disability, or economic or social status
- Certain law enforcement applications that may interfere with people’s fundamental rights
Companies should start incorporating these provisions into their operations and continue to do so as new regulations come into force. Some usages may warrant immediate legal action, including obvious violations, while others can simply be noted for future deliberation. Some firms may want to categorize flagged instances by potential threat level; some usages and applications, for example, will fall somewhere between approval and prohibition.
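The triage process described above can be sketched in code. This is a minimal illustration, not a compliance tool: the tag names, category sets, and the `triage` function are all hypothetical, and real classification of a use case under the Act requires legal analysis.

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"  # banned outright under the Act
    HIGH_RISK = "high-risk"    # allowed, but subject to strict obligations
    REVIEW = "review"          # falls between approval and prohibition
    MINIMAL = "minimal"

# Hypothetical tags loosely mirroring the Act's banned applications.
PROHIBITED_TAGS = {
    "biometric-categorisation",
    "untargeted-face-scraping",
    "workplace-emotion-recognition",
    "social-scoring",
    "manipulative-techniques",
    "vulnerability-exploitation",
}
# Illustrative examples of higher-risk (but not banned) areas.
HIGH_RISK_TAGS = {"recruitment-screening", "credit-scoring", "education-assessment"}

def triage(tags: set) -> RiskLevel:
    """Assign a provisional risk level to a flagged AI usage."""
    if tags & PROHIBITED_TAGS:
        return RiskLevel.PROHIBITED   # warrants immediate legal action
    if tags & HIGH_RISK_TAGS:
        return RiskLevel.HIGH_RISK
    if tags:
        return RiskLevel.REVIEW       # note for future deliberation
    return RiskLevel.MINIMAL

print(triage({"social-scoring"}).value)
print(triage({"recruitment-screening"}).value)
```

The point of the sketch is the structure, not the labels: a flagged instance is sorted into "act now," "monitor closely," or "deliberate later," which is the categorization the paragraph above describes.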
The size and focus of the company will affect the scope of the compliance assessment. Larger entities, and those that specialize in the higher-risk areas identified by the EU Artificial Intelligence Act, such as social scoring, will need extensive analysis and legal expertise.
2. Use Past Examples to Predict Future Legal Outcomes
Without a crystal ball, predicting the future of AI legislation can feel like a fool’s errand, but legal firms and companies can look to similar data privacy requirements and regulations that have come into effect over the last few years to get a sense of where things might be headed. The General Data Protection Regulation (GDPR) is perhaps the closest parallel.
Delegating adoption to individual EU member states could delay the rollout, with implementation happening in waves, as it did with the GDPR, adopted in 2016 but not enforced until 2018. Competing regulations and policies could also unearth unforeseen privacy issues in years to come.
Buy-in from industry groups may also help predict the success of new regulations. Government-appointed committees of technology experts helped lay the groundwork for the GDPR’s success. We may see similar initiatives in the U.S. and EU regarding generative AI legislation as governments run up against their lack of expertise on the subject.
3. Put AI Regulation into Perspective as It Relates to Existing Laws
Several AI legal case studies from Europe show how the new EU AI Act will intersect with established legal precedents. Companies should begin assessing how these regulations could spill over into other concerns. For example, generative AI could produce information about a person that violates the GDPR. These kinds of issues could expose firms to additional liabilities.
In one case study, an algorithm created risk profiles targeting specific groups. The company used these recommendations to penalize low-income households in specific ethnic groups on suspicion of fraud, resulting in further economic hardship for thousands of families. This use case would violate the banned applications and established privacy laws.
AI tools could, in theory, be used to reveal legally protected financial and medical data. Firms should consider how the new regulations fit into the existing legal ecosystem, such as the GDPR, Health Insurance Portability and Accountability Act (HIPAA), and other privacy regulations.
McGrath notes how many responsibilities providers must weigh, urging them to consider “what your role with AI is, in the same way that you thought about it with GDPR or HIPAA or the California Consumer Privacy Act (CCPA), and how you deploy it.” She explains that “this is going to build on complementing a lot of your current sort of supply chain requirements. Look at where transparency is involved and how you deploy a chatbot on your website. Make sure that you’re telling people that they're not speaking to a human.”
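McGrath's transparency point, telling users they are not speaking to a human, is simple to operationalize. The sketch below is illustrative only; the disclosure wording and function names are hypothetical, and actual disclosure text should be reviewed by counsel.

```python
# Hypothetical AI disclosure shown before any substantive chatbot reply.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Ask to be transferred if you need to speak with a person."
)

def start_chat_session(first_reply: str) -> list:
    """Prepend the disclosure so transparency precedes the conversation."""
    return [AI_DISCLOSURE, first_reply]

messages = start_chat_session("Hi! How can I help you today?")
print(messages[0])
```

Building the disclosure into the session constructor, rather than leaving it to each conversation flow, makes it hard for any deployment path to skip the notice.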
Compiling the company’s risk profile under these considerations requires a multi-disciplinary approach, with legal teams from diverse backgrounds meeting to discuss the client’s potential vulnerabilities.
4. Increase Demand for Certification Requirements
To increase innovation and quality assurance, the new framework carves out regulatory “sandboxes” for small and medium-sized enterprises looking to develop and test artificial intelligence programs without undue pressure from the dominant suppliers.
The current market is divided into two groups: suppliers, like Microsoft and OpenAI, that license generative AI tools, and users, i.e., the corporations and businesses that deploy these algorithms. Lesser-known developers will, and should, be able to offer AI services to rival those of the major players, but users will need assurance that using these products won’t subject them to additional risk.
We will likely see more dispute prevention and resolution (DPR) laws appear in various jurisdictions regarding the use of AI, and developers will adapt their products to these regulations to guarantee their customers’ compliance.
As with ISO 27001 for cloud infrastructure, Anderman believes the industry will “start seeing companies insisting on having shared certification requirements” to provide clarity for developers and users, covering data governance, risk management, and transparency.
However, obtaining an ISO certification is often expensive and time-consuming, which could put small and medium-sized enterprises at a disadvantage. The requirement may consolidate the number of software providers in the market, as happened with cloud infrastructure, now dominated by Amazon Web Services, Microsoft Azure, and Google Cloud. The leading players will give their customers, particularly startups, some much-needed peace of mind by building compliance into their services.
5. Plan for a Lack of Consensus
The EU AI regulations apply to industries worldwide, but other jurisdictions will inevitably enact their own rules, creating a hodgepodge of different requirements. Within the EU alone, who will enforce and implement these requirements in the individual member states remains to be seen. Spain, for example, says it wants to establish its own AI governmental agency in addition to the European Commission’s AI Office, the central EU agency designated to guide governing bodies in the 27 countries.
The U.S. has been particularly slow in passing federal AI regulations due to a lack of consensus in Congress, but 11 states have passed DPR-type laws thus far.
On the other hand, Chinese leaders recently came out opposed to all AI regulation, removing all safeguards for anyone doing business there. Some worry AI software developers may flock to jurisdictions with fewer regulations to reduce the risk of violation.
But that will put these players in a precarious situation in today’s increasingly globalized society, where marketing campaigns and SaaS tools transcend national borders.
Major AI software companies like OpenAI will likely cater to a more international business clientele by complying with regulations across multiple jurisdictions. As two of the most lucrative markets in the world, the U.S. and EU will inevitably help shape the standards and requirements for these products.
6. Put the Right Legal Talent in Place
In-house legal teams may not have the right background to deal with the complexity of these issues. Most professionals focus on compliance with specific legal requirements, but AI combines various subjects and concerns. Companies should consider hiring outside legal counsel across multiple practice areas, including lawyers with backgrounds in AI and privacy compliance, to support their existing operations.