
As AI becomes ever more ubiquitous and powerful, regulators are rushing to manage and mitigate the risks this technology poses. Typically, this means relying on privacy laws, including the EU and UK GDPR, where the processing involves personal data, but consumer protection legislation and antitrust laws are also being used to put guardrails around AI.
Examples of regulatory action in 2024 include:
- Some regulators, including several EU data protection authorities (DPAs), are actively investigating AI companies for alleged breaches of the EU GDPR. The Italian DPA issued OpenAI with a formal notice for violations of provisions of the EU GDPR, having originally banned the use of ChatGPT in Italy until OpenAI complied with a set of interim measures.
- In the UK, the Information Commissioner’s Office (ICO) has been investigating Snap’s ‘My AI’ chatbot but, in July 2024, agreed to close its investigation on the basis that Snap appropriately remedied its alleged breaches of the UK GDPR. However, the ICO noted that its investigation had led to Snap conducting a more thorough review of potential risks posed by the chatbot.
- Some regulators, such as the South Korean Personal Information Protection Commission (PIPC) and the UK’s ICO, are aiming to mitigate AI risks by updating existing guidance and through regulatory innovation. The ICO launched a consultation series in the first half of 2024 on the intersection of data protection and generative AI, focused on topics such as purpose limitation in the generative AI lifecycle, the accuracy of training data and model outputs, and the allocation of controllership. We expect to see updates to ICO guidance in 2025 as a result. South Korea’s PIPC has emphasized regulatory sandboxes and introduced a ‘Prior Adequacy Review Mechanism,’ under which it will work with startups developing innovative AI models or services to ensure that sufficient privacy and data protection measures are embedded in the design of AI systems.
Data privacy regulators across the world are focused on AI. Businesses need to ensure that they are developing and deploying AI systems compliantly including, where appropriate, engaging closely with regulators as they do so.
Giles Pratt, Partner
- In the US, the Federal Trade Commission (FTC) has increasingly brought investigations and enforcement actions related to AI. In July 2023, the FTC issued a civil investigative demand (CID) to OpenAI covering a range of topics, including public disclosures about AI products, the data used to train its models and measures taken to mitigate potential risks, including false statements about individuals. This follows a settlement with Rite Aid related to the company’s use of AI-based facial recognition technology. In addition, the agency recently announced a sweep of enforcement actions concerning AI-related misrepresentations. More CIDs can be expected in AI investigations, given the FTC’s November 2023 approval of a resolution making it easier for officials to issue them.
The Italian DPA’s bold stance against OpenAI reflects the global shift toward stricter AI regulation. AI growth must be matched by strong commitments to data protection and regulatory engagement.
Davide Borelli, Counsel
Looking ahead to 2025, we expect privacy regulators to continue their focus on AI.
In the US, the FTC can be expected to ramp up its rigorous scrutiny of AI products and businesses. The FTC has publicly stated its interest in enforcement relating to advertising claims, the misuse of AI products to perpetrate fraud and scams, competition concerns, copyright and IP concerns in the training of AI models, and data privacy. The FTC’s interest in investigating competition concerns has already resulted in orders to five companies requiring them to provide information about recent investments and partnerships involving generative AI companies and cloud service providers. The agency has also announced an investigation into ‘surveillance pricing,’ the practice of using AI technology to categorize individuals based on their personal information and set pricing targets for goods or services.
As many companies increasingly become AI companies, they will need to ensure that they are developing and deploying AI systems safely and effectively.
Joseph Mason, Associate
In the UK and EU, we expect ongoing focus on AI products and services, particularly those deemed to be higher risk, and companies should expect a robust approach from regulators if they suspect infringements of the EU or UK GDPR. It remains to be seen how the plans to reform UK data laws announced by the newly elected UK government will impact data protection regulation as it relates to AI.
Working out how to approach AI enforcement is fast becoming a global priority, reflecting a collective commitment to harnessing the power of AI responsibly.
Rachael Annear, Partner
As we look to 2025 and beyond, companies should brace for an intensified regulatory focus on data enforcement, particularly concerning the development and deployment of AI systems. Regulators have shown a readiness to take strong action against suspected privacy law violations, including halting the launch of AI solutions or pausing ongoing AI development.
However, these regulatory measures also serve as valuable guidance for safe and effective AI deployment. To navigate this landscape, companies should:
- Ensure they maintain comprehensive documentation, including detailed data protection impact assessments for high-risk processing.
- Stay informed about the latest guidance from DPAs, such as the UK’s ICO and the EU’s EDPB.
- Prioritize the integration of privacy protections into their AI systems from the outset of the development process.
Beyond AI, changes to the EU GDPR’s one-stop-shop (OSS) mechanism are likely to facilitate more enforcement against cross-border processing within the EU. We also anticipate an uptick in global enforcement actions related to alleged breaches of privacy, cybersecurity, and consumer protection laws.
2025 Data law trends