2025 Data law trends
1. AI governance takes center stage
By Rachael Annear, Zofia Aszendorf, Georgina Bayly, Richard Bird, Theresa Ehlen, Beth George, Adam Gillert, Catherine Greenwood-Smith, Giles Pratt, Lutz Riede
IN BRIEF
With regulatory pressures, changing expectations from shareholders and customers, and the increasing risk of litigation, it’s clear that addressing AI governance is more important than ever.
As a result, many organizations today are feeling the heat to show they have the right governance structures and decision-making processes in place for their use of AI – or for deciding not to use it at all.
In this chapter, we’ll dive into why a proactive AI governance framework is essential. It’s not just about ticking boxes; it’s about taking control of AI’s potential while managing its risks. We’ll explore the key pressures you’re facing and highlight the foundational elements that can lead to successful AI governance.
Increasing pressures to develop AI governance frameworks
Pressures to develop AI governance frameworks include:
AI-specific regulatory regimes
These regimes are taking more discernible shape across the globe, with AI-specific regulation now in force across the EU and China, and planned at national level (with published draft texts) in Brazil, Canada, South Korea, Thailand and Vietnam. New (albeit narrow) AI-specific regulation was introduced to protect the integrity of India’s recent elections, and AI-specific regulation is also anticipated in the UK.
A proliferation of guidance as to the application of existing regulatory regimes to the use of AI
The UK, US and other jurisdictions (including Australia, Hong Kong, India, Japan, Russia, Saudi Arabia, Singapore, South Korea and Turkey) have implemented policies aimed at streamlining AI regulation at the national level. These fall short of AI-specific laws and instead direct established regulators to apply existing regimes to the use of AI. Non-regulatory government bodies are also being vocal in this space – for example, the US Department of Justice, primarily a law enforcement agency, has spoken about its expectations that corporate compliance programs are effective at mitigating AI-related risks.
The emergence of global standards for AI governance, such as those developed by ISO/IEC JTC 1/SC 42
Customers, distributors and other contractual counterparties may start expecting compliance with these types of standards as a ‘badging’ of an organization’s AI maturity.
Increased scrutiny of company reporting with respect to use of AI from shareholders
Companies in the US are already facing scrutiny from shareholders who view them as being insufficiently transparent about their use of AI. We have seen a trend of shareholder petitions being filed at the US Securities and Exchange Commission aimed at eliciting further detail relating to a company’s AI strategy. AI risks and opportunities are becoming a common theme of listed company reports; see infographic below.
Increasing focus from NGOs on AI and the potential risks it poses
For example, Amnesty International published a report titled The State of the World’s Human Rights in April 2024, which looked at human rights concerns from 2023. This report highlighted AI as a potential threat to human rights, citing use cases such as state deployment of facial recognition software to aid policing of mass events, including protests, as well as use of biometrics and algorithmic decision-making in migration and border enforcement. The Austrian privacy advocacy group ‘noyb’ has also been vocal about the privacy implications of AI technologies.
Increasing risk of AI litigation and regulatory enforcement
Companies are feeling the pressure to get AI governance right not only from regulators, but also from the markets, the emergence of global standards for AI governance and third-party actors such as NGOs.
Giles Pratt, Partner
Companies need a framework to ensure compliance and respond to regulatory scrutiny and allow them to make the most of AI while navigating the risks associated with its use.
Data-related matters will be a core component of this framework. The EU AI Act, the world’s first comprehensive AI-specific legislation, imposes numerous governance and documentation-related obligations, including specific data governance obligations on providers of high-risk AI systems. Similarly, data protection regulators globally have not shied away from enforcement activity relating to the use of personal data in connection with AI systems (we have seen activity in this space from data protection regulators in the UK, Ireland, Italy, the Netherlands, Hong Kong and elsewhere, and from the US Federal Trade Commission as part of its consumer protection remit), and are also proactively consulting on the application of data protection laws to AI.
Emerging AI litigation and regulatory enforcement themes
As AI becomes increasingly advanced, companies face a growing risk of litigation or regulatory investigations concerning AI use or development. Governments and regulators are heightening their focus on both the opportunities and risks posed by AI.
While many new regimes specifically regulating AI are yet to be enacted and/or implemented, AI-related litigation and investigations are being brought under existing regimes governing areas such as data protection and privacy, equality and anti-discrimination, intellectual property (IP), product liability, consumer protection and misrepresentation.
While lawsuits and investigations concerning AI are currently based on existing regimes, we expect to see a real influx of new cases once new legislation specifically targeting AI comes into effect.
Catherine Greenwood-Smith, Partner
So far, AI litigation remains at a relatively nascent stage. We anticipate a surge in AI litigation with the rapid advancement of AI systems and emergence of new regulatory regimes and potential for diverging approaches across jurisdictions.
In terms of the current landscape:
- The US is leading the way with a number of class actions. Allegations range from unfair and discriminatory outcomes resulting from algorithmic decision-making, to breach of privacy in connection with the training of AI models. Other jurisdictions will likely follow suit.
- Outside the US, early cases have been brought primarily against states for their use of AI, eg in relation to alleged biases and invasion of privacy resulting from use of facial recognition software. However, the focus appears to be shifting to companies that develop and/or deploy AI.
- Globally there is already big-ticket IP litigation, where claimants allege their IP is being used by defendants without consent to train their own AI models, or that outputs from defendants’ AI models infringe IP.
Mass claims alleging harms caused by AI are already being brought in the US, but we expect to see a dramatic increase in AI-related mass claims both in the US and elsewhere as the development and use of AI rapidly expands.
Georgina Bayly, Associate
We are also seeing regulators taking a more hands-on approach to governing AI, even where specific AI regulations are yet to take effect, for example:
- Data protection authorities are particularly active in the AI space, showing a readiness to issue warnings, launch investigations and bring enforcement action against companies where their development and/or use of AI is suspected to be in breach of data protection regimes.
- Financial regulators, particularly in the US, are clamping down on so-called ‘AI washing’, where companies overstate their AI capabilities to investors and consumers. Several warnings have been issued and certain enforcement actions brought in recent months (we anticipate other regulators will follow suit).
- Competition authorities are showing particular interest in tech companies’ position in the AI development market, with investigations into partnerships between large tech firms and AI start-ups launched in both the US and Europe.
- Consumer protection regulators in the US are closely scrutinizing disclosures to users, ensuring that users’ understanding and expectations match AI tools’ capabilities. These agencies are also invoking consumer protection standards in their attempts to require companies to recognize new or evolving rights over online content that may be used for training AI systems.
Regulators have already set their sights on AI, particularly in areas such as data protection and financial regulation in relation to AI washing. Companies should review their governance systems to ensure they stand up to scrutiny and be wary of new requirements coming down the line.
Zofia Aszendorf, Senior Associate
Key cornerstones for successful AI governance
The right governance around AI is important both to achieving organic growth in this area and to attracting investment (including, for early-stage companies, in the context of investor diligence). Importantly, AI governance shouldn’t be seen as being limited to mitigating legal risk – done well it can also help to maximize the value of a company’s AI investment, setting up future growth.
A successful AI governance framework will help mitigate AI-related risk and set up future growth.
Beth George, Partner
A good example in the data space is the importance of appropriate governance processes in ensuring that proprietary datasets are appropriately ringfenced from use by third parties in the AI value chain (through a combination of technical measures, processes and contracting frameworks).
Effective AI governance should not just be seen through the lens of regulatory necessity but also as part of the strategic imperative that builds trust and ensures integrity in decision making.
Rachael Annear, Partner
Regulatory guidance varies in how prescriptive it is about governance structures, including on topics such as the involvement of senior management and monitoring and reporting lines. Getting AI governance right requires considering: (i) what the governance structures should look like; (ii) who should staff them; and (iii) what those individuals should be responsible for.
Governance structures – key considerations
- Within corporations that are looking to add AI to their existing offerings, we are typically seeing a single person with general oversight – an ‘AI leader’ – supported by a cross-functional ‘AI steerco’ of senior leaders, including legal and compliance professionals.
- Consider whether the AI steerco and AI leader should report to the board.
- Regular reporting helps the board hold the AI steerco and AI leader effectively to account.
- Consider whether links should be made to any other committees or steercos – we are seeing trends of cyber, risk and audit committees being involved in AI governance.
- Corporate groups need to consider what decisions can be made at divisional/subsidiary level and what decisions need to be centralized.
Staffing of the governance structure
- The people in the governance structure need to be appropriately qualified and ideally will come from a range of disciplines – including engineering, development, product and legal.
- The EU AI Act contains a specific requirement on providers and deployers of AI systems to ensure AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. Other guidance around the world also emphasizes the need for adequate training of personnel overseeing AI systems.
Terms of reference for the AI leader and steerco
These can broadly be categorized into three areas:
- Legal and compliance: this remit is broader than just AI-specific regulation. It needs to cover other legal obligations, for example antitrust regimes, sector-specific regulation and data regimes. It also applies more broadly than just in relation to the business’ external roll out of AI systems – eg there is a significant interface between the use of AI for internal purposes and labor law compliance, including potential works council obligations. A particularly knotty piece of the legal and compliance aspects of AI governance is determining how to approach any product liability considerations, which will depend on the business’s role in the AI value chain.
- AI Product Development: this will include considering the development of AI tools in line with legal and compliance obligations, including ‘privacy by design’ requirements.
- AI Deployment: key features of deployment should include periodic (perhaps annual) systemic risk assessments and audits of the deployment of AI tools, as well as clear processes for sign-off of new use cases for developed tools.
Organizations may also want to task the AI leader, AI steerco and the board with considering the company’s AI-related reputation and appropriate external-facing communications – ie what the business wants to be saying about AI in public and how it wants to position itself with respect to AI. Businesses that can articulate this clearly will gain an advantage, although they will also need to be mindful of the increased scrutiny on AI washing (see above).
These three cornerstones of a business’s AI governance structure need to be underpinned by a degree of flexibility and adaptability, recognizing that both the technology and the law in this space are fast evolving.
AI governance frameworks should be assessed for structure, staffing and terms of reference – does the business have the right people, in the right place, doing the right thing – but it’s equally important that they can adapt to the fast-evolving technology and law in this space.
Lutz Riede, Partner
Looking Ahead
As the litigation and regulatory landscape continues to change, it’s crucial for businesses to keep a close eye on these developments. Regularly evaluating the effectiveness of your governance systems will be key to mitigating AI litigation risk.
If your business is developing or deploying AI, now’s the time to make sure you have the right governance structures in place. This means ensuring you have the right staffing, resources, and clear terms of reference. But don’t stop there. Building in flexibility will help you proactively adapt to future needs, positioning you for future success as the landscape evolves.