
With regulatory pressures, changing expectations from shareholders and customers, and the increasing risk of litigation, it’s clear that addressing AI governance is more important than ever.
As a result, many organizations today are feeling the heat to show they have the right governance structures and decision-making processes in place for their use of AI – or for deciding not to use it at all. In this chapter, we’ll dive into why a proactive AI governance framework is essential.
It’s not just about ticking boxes; it’s about taking control of AI’s potential while managing its risks.
We’ll explore the key pressures you’re facing and highlight the foundational elements that can lead to successful AI governance.
Pressures to develop AI governance frameworks include:
AI-specific regulatory regimes
These regimes are taking more discernible shape across the globe, with AI-specific regulation now in force across the EU and China, and planned at national level (with published draft texts) in Brazil, Canada, South Korea, Thailand and Vietnam. New (albeit narrow) AI-specific regulation was introduced to protect the integrity of India’s recent elections and is also anticipated in the UK.
A proliferation of guidance as to the application of existing regulatory regimes to the use of AI
The UK, US and other jurisdictions (including Australia, Hong Kong, India, Japan, Russia, Saudi Arabia, Singapore, South Korea and Turkey) have implemented policies aimed at streamlining AI regulation at the national level. These fall short of AI-specific laws and instead direct established regulators to apply existing regimes to the use of AI. Non-regulatory government bodies are also being vocal in this space – for example, the US Department of Justice, primarily a law enforcement agency, has spoken about its expectation that corporate compliance programs are effective at mitigating AI-related risks.
The emergence of global standards for AI governance, such as those developed by ISO/IEC JTC 1/SC 42
Customers, distributors and other contractual counterparties may start expecting compliance with these types of standards as a ‘badging’ of an organization’s AI maturity.
Increased scrutiny of company reporting with respect to use of AI from shareholders
Companies in the US are already facing scrutiny from shareholders who view them as being insufficiently transparent about their use of AI. We have seen a trend of shareholder petitions being filed at the US Securities and Exchange Commission aimed at eliciting further detail relating to a company’s AI strategy. AI risks and opportunities are becoming a common theme of listed company reports; see infographic below.
Increasing focus from NGOs on AI and the potential risks it poses
For example, Amnesty International published a report titled The State of the World’s Human Rights in April 2024, which looked at human rights concerns from 2023. This report highlighted AI as a potential threat to human rights, citing use cases such as state deployment of facial recognition software to aid policing of mass events, including protests, as well as use of biometrics and algorithmic decision-making in migration and border enforcement. The Austrian privacy advocacy group ‘noyb’ has been vocal in relation to the privacy implications of AI technologies.
Increasing risk of AI litigation and regulatory enforcement
Companies are feeling the pressure to get AI governance right not only from regulators, but also from the markets, the emergence of global standards for AI governance and third-party actors such as NGOs.
Giles Pratt, Partner
Companies need a framework that ensures compliance, enables them to respond to regulatory scrutiny, and allows them to make the most of AI while navigating the risks associated with its use.
Data-related matters will be a core component of this framework. The EU AI Act, the world’s first comprehensive AI-specific legislation, imposes numerous governance and documentation-related obligations, including specific data governance obligations on providers of high-risk AI systems. Similarly, data protection regulators globally have not shied away from enforcement activity relating to the use of personal data in connection with AI systems (regulators in the UK, Ireland, Italy, the Netherlands, Hong Kong and elsewhere have taken action, and the US Federal Trade Commission is also active as part of its consumer protection remit), and they are also proactively consulting on the application of data protection laws to AI.
As the litigation and regulatory landscape continues to change, it’s crucial for businesses to keep a close eye on these developments. Regularly evaluating the effectiveness of your governance systems will be key to mitigating AI litigation risks.
If your business is developing or deploying AI, now’s the time to make sure you have the right governance structures in place. This means ensuring you have the right staffing, resources, and clear terms of reference. But don’t stop there. Building in flexibility will help you proactively adapt to future needs, positioning you for future success as the landscape evolves.
2025 Data law trends