How To Navigate Data Privacy Laws in an AI-Driven World
July 2024
By Wendy Thurm
Artificial Intelligence (AI) systems are evolving at breakneck speed. Technology companies are rolling out new features and capabilities on a near-daily basis. Governmental authorities are working quickly, too, to rein in AI’s excesses. Local, state, national, and transnational governments are enacting laws and regulations to protect their citizens from all the ways AI can exacerbate bias and discrimination, undercut transparency, and undermine data privacy.
This article will discuss how AI threatens to overrun data privacy laws and what steps in-house lawyers should take to prevent that from happening.
Understanding the reach of AI
Gordon Wade, lead privacy regulatory counsel at TikTok, recently conducted a CLE webinar for Axiom on the intersection of AI and data privacy laws.
Wade began by defining AI as “the simulation of human intelligence processes by machines, particularly computer systems.” These processes, Wade explained, include learning, reasoning, and self-correction. In the realm of data processing and analysis, AI plays a pivotal role in augmenting human capabilities and unlocking insights from vast datasets.
Generally speaking, lawyers are more familiar with AI systems that perform natural language processing, like ChatGPT. This technology allows machines to understand, interpret, and generate human language, and to approximate aspects of human reasoning, in order to carry out certain tasks.
People who work with large datasets are likely familiar with AI systems that perform data processing at superfast speeds and can identify patterns, trends, and insights in data that humans would likely overlook.
Companies use AI in a variety of ways. When companies use AI to collect and use information about their clients and customers, they need to understand and comply with the data privacy laws that govern the jurisdictions where they operate.
At the heart of all data privacy laws is the obligation of companies that collect personal data to maintain the confidentiality of that data and give consumers the opportunity to control who has access to that data.
The growing list of data privacy laws around the world
The European Union kicked off efforts to regulate how companies collect and use customer data in 1995 when it adopted the Data Protection Directive, which set standards for data protection across member states. More than 20 years later, the EU amped up data privacy protections with the General Data Protection Regulation (GDPR), which took effect in 2018.
California was the first state in the U.S. to enact a law that mandates companies protect the privacy of consumers’ personal data. That bill – the California Consumer Privacy Act (CCPA) – was enacted in 2018 and took effect in 2020. Colorado, Connecticut, Utah, and Virginia enacted similar laws that went into effect in 2023. Florida, Montana, Oregon, and Texas will see new data privacy laws take effect in 2024. Delaware, Iowa, New Jersey, and Tennessee passed laws that give companies until 2025 to comply. Indiana’s data privacy law takes effect in 2026.
Outside of Europe and the U.S., notable privacy and AI regulations have been adopted in the United Arab Emirates (UAE), Singapore, and Canada.
In the UAE, the Dubai International Financial Centre’s Regulation 10 on Processing Personal Data establishes boundaries for deploying AI systems that process personal data. Singapore’s Model AI Governance Framework for Generative AI seeks to enhance safety, accountability, transparency, and security when generative AI is used. Canada’s proposed Artificial Intelligence and Data Act is expected to take effect in 2025.
How AI technologies interact with data privacy laws
Let’s look at three examples of how data privacy laws may impact a company’s use of AI technology:
Data processing: AI systems rely on large amounts of data to train and operate effectively. Data privacy laws govern how this data can be collected, stored, processed, shared, and transferred. These laws also impose restrictions on data usage and require companies to put safeguards in place against unauthorized access.
Accountability and governance: Data privacy laws impose accountability requirements when using AI to process personal information. This includes implementing appropriate security measures and conducting privacy risk assessments.
Transparency and explainability: Data privacy laws require transparency and explainability in AI systems that process personal data. This means that individuals have the right to know how their data is being used and to receive explanations for automated decisions that affect them.
How AI threatens data privacy
Because AI can analyze large data collections quickly and effectively, companies are incentivized to collect more and more data about their customers. This ubiquitous data collection leads to massive private repositories of personal data.
Moreover, many AI systems operate like a black box. Only a limited set of people know what data is fed into an AI system, and even fewer understand how it identifies patterns and makes predictions about future behavior.
This raises all sorts of questions: Is this corporate data collection excessive? Do your customers understand what you’re collecting and how you’re using their data? Does the massive data collection erode your customers’ trust in your company? Does it lead to intrusive surveillance of those customers? Does it lead your company to make discriminatory or biased decisions?
In order to answer these questions, a company must be clear internally about which AI systems it has developed or purchased from a vendor, how it is deploying those systems throughout the organization, what data is collected, how that data is analyzed, and how it is used to make decisions. It’s imperative that companies understand how to identify and rectify biases, errors, and unethical practices in the AI systems they use. Without this level of transparency, companies cannot be accountable to their customers.
Companies must be internally transparent about AI before they can be externally transparent about AI to their customers. If company engineers don’t know how the AI systems they built — or the third-party AI systems they deployed — operate, how will the company explain this in plain language to its customers? Without that basic information, customers will not be able to make intelligent decisions about what data they will permit the company to collect.
Ubiquitous data collection raises another concern: Are these large datasets more likely to lead to security breaches? Wade emphasized in his CLE presentation that AI systems are not risk-free when it comes to data security. These systems are targets for cyber attackers and cyber criminals, which puts consumers’ data at risk of unauthorized access or theft. Companies must conduct vulnerability assessments and security audits on a regular, repeatable basis to find and plug security holes.
How to align AI systems with ethics and data privacy protections
The key to an effective privacy strategy in an AI world is to incorporate privacy protections in the design and deployment of an AI system from the very beginning. By integrating privacy considerations from the outset, organizations can align AI practices with data protection laws and legal privacy requirements while fostering trust and compliance.
The goal should be to place individuals’ rights ahead of the company’s appetite for data collection.
- Companies must provide transparent information on how data is collected, processed, and used and create user-friendly interfaces so individuals can knowingly consent to how their data is used.
- Companies must take steps to mitigate algorithmic bias and discrimination.
- Companies must implement robust data protection measures.
The “privacy by design” approach also puts companies in a better position to identify privacy risks and address those risks before product development is completed. Engineers and product developers must ask themselves: What data would we like to collect? What data is absolutely necessary to collect? What will we do with the data once we collect it? Will that process be transparent? Will it lead to unfair decisions?
To be successful with this approach, company lawyers must stay up-to-date on the ever-changing privacy landscape worldwide and then regularly train AI developers, engineers, and product managers on data privacy requirements.
With AI-related products that have already been developed and deployed, companies must regularly conduct data privacy impact assessments (DPIAs). A DPIA is a process designed to identify the risks that arise from processing personal data and to minimize those risks as early as possible.
What are the best ways to minimize risk? Companies are starting to employ privacy-enhancing technologies to counter AI’s privacy risks. Differential privacy introduces statistical noise into query results so they cannot be traced back to the individuals who provided the data; homomorphic encryption allows computations to run on data that remains encrypted; and secure enclave technology isolates sensitive processing in protected hardware.
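To make the differential-privacy idea concrete, here is a minimal sketch in Python (using NumPy) of the Laplace mechanism, the technique most commonly used to add calibrated noise to a query result. The dataset, threshold, and epsilon value are illustrative assumptions, not a recommendation for production use:

```python
import numpy as np

def private_count(values, threshold, epsilon=1.0, sensitivity=1.0):
    """Return a differentially private count of values above a threshold.

    Adding or removing any one person's record changes the true count by
    at most `sensitivity` (here, 1), so Laplace noise with scale
    sensitivity / epsilon masks each individual's contribution.
    """
    true_count = int(np.sum(np.asarray(values) > threshold))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: customer ages held in a marketing dataset.
ages = [23, 37, 41, 29, 52, 61, 34, 45]
print(private_count(ages, threshold=40, epsilon=0.5))  # noisy count near 4
```

A smaller epsilon means more noise and stronger privacy; where to set that dial is a policy decision that lawyers, engineers, and product managers should make together.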
Companies should favor more AI regulation
The private sector tends to dislike government regulation but, Wade argued, regulation to control the excesses of AI makes good sense:
- Regulation brings legal clarity to a high-risk activity. Regulations define responsibilities and identify which parties have potential liability when things go wrong.
- Regulations reduce overall risks associated with AI, which protects small and medium-sized companies that don’t have the resources to take on the limitless risks associated with AI.
- Regulations define and enforce strict data protection measures — i.e., processes designed to mitigate against unauthorized access to or misuse of sensitive information.
- Regulations create accountability across the AI ecosystem and help build trust between companies that use AI and their customers.
- Regulations provide an opportunity to create ethical guidelines for decision-making on AI systems not covered by existing regulations.
- Regulations create uniform security standards for making AI systems cyber-resilient.
- Regulations mitigate the risk that AI will be used in a discriminatory way via biased algorithms.
There are downsides to regulation, too, Wade acknowledged:
- Regulations can constrain innovation.
- Regulations bring a blunt instrument to a complex technological process.
- Regulations increase compliance costs.
Best practices for data privacy when using AI
Wade concluded his CLE presentation with a list of best practices companies should employ when deploying AI:
- Implement privacy-by-design: Integrate data privacy principles from the outset of system design and development.
- Conduct privacy impact assessments: Regularly assess the potential privacy risks associated with AI systems.
- Enforce data minimization: Collect and process only the personal data necessary for specific, stated purposes (a brief sketch follows this list).
- Prioritize transparency and consent: Provide clear information and obtain individuals' explicit consent for data collection, processing, sharing, and analysis.
- Foster accountability and oversight: Establish governance frameworks and monitoring mechanisms for AI systems.
- Implement robust security measures: Employ encryption technologies and access controls.
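To illustrate the data-minimization bullet above (and, in part, the security bullet), here is a hedged Python sketch of one way a pipeline might drop fields it does not need for a stated purpose and pseudonymize the one identifier it keeps. The field names, the stated purpose, and the salted-hash approach are assumptions made for illustration, not a prescribed design:

```python
import hashlib

# Hypothetical raw record collected from a signup form.
raw_record = {
    "email": "customer@example.com",
    "full_name": "Jane Doe",
    "birth_date": "1990-04-12",
    "zip_code": "94107",
    "page_views": 42,
}

# Fields actually needed for the stated purpose (aggregate usage analytics).
REQUIRED_FIELDS = {"zip_code", "page_views"}

def minimize_and_pseudonymize(record, salt="keep-secret-and-rotate"):
    """Keep only the required fields and replace the direct identifier
    with a salted one-way hash, so downstream analysts never see raw emails."""
    minimized = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    minimized["user_key"] = hashlib.sha256(
        (salt + record["email"]).encode("utf-8")
    ).hexdigest()
    return minimized

print(minimize_and_pseudonymize(raw_record))
```

Pseudonymized data is still personal data under laws like the GDPR, so a sketch like this reduces exposure but does not remove the need for the other safeguards on the list.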
Posted by Wendy Thurm
Wendy Thurm is a writer, editor, and legal analyst. She practiced law for 18 years, primarily as a partner at a boutique litigation firm in San Francisco.