Thought Leaders
Why Companies Should Follow a Values-Based Approach to AI Governance

In September 2025, for the first time, all the member states of the United Nations gathered to discuss international AI governance; many were represented again in February at Delhi’s AI Impact Summit. The UN gathering led to the launch of two new bodies centred on AI governance, but it was, at best, a symbolic success.
The UN’s new mechanisms were designed to secure consensus: they steer away from contested areas such as military uses of AI, and lack clear sources of funding and powers of enforcement. This should come as no surprise to experienced observers. Today’s UN lacks the ability to move quickly or ensure universal compliance with its decisions, making it a difficult forum in which to effect real change.
This fits a well-established pattern. Despite years of scattered attempts at building consensus on AI regulations, there have been no significant international agreements, creating a void in which individual countries and blocs have been forced to develop their own rules. Yet the effective governance of AI is crucial if we wish to see it adopted widely, trusted by the public, and used in ways that deliver lasting social and economic benefit.
Make Do and Mend
For global companies building and operating AI systems, this lack of common, agreed governance mechanisms is problematic. They wish to deploy AI systems all over the world, but no two jurisdictions abide by the same set of rules. So they’re forced instead to create a generic governance framework around their system, then rebuild it from the ground up in every country in which they operate to make sure it complies with local laws and regulations. This approach creates an enormous amount of extra work, makes AI initiatives more costly and prone to delays, and weakens global firms’ ability to realise economies of scale and share effective tools with users everywhere.
There is, however, an alternative. For firms looking to streamline their approach, the best option may be to build an AI governance framework which accounts for common ethical principles across these different regions, ensuring that they meet high standards everywhere in terms of protecting individuals’ freedom, privacy, and security. This technique represents a powerful way for AI businesses to increase public trust in their technology, to bolster their customer base, and to harness AI’s potential benefits to society.
Six Key Values for AI Governance
For any organisation interested in adopting a values-based approach to AI governance, I would suggest using the six key values we follow: accountability, explainability, fairness, transparency, security, and contestability.
We chose these values because they cover all major areas of the AI system lifecycle and because they’ve already been codified in various international and national standards relating to AI, such as the International Organization for Standardization’s ISO/IEC 42001 and the Artificial Intelligence Playbook for the UK Government.
To begin at the top, accountability means knowing who is responsible for what at every stage of the AI lifecycle. Without clear ownership, vital controls can be omitted because no individual or team holds ultimate responsibility. Organisations should assign senior, named owners – such as their Chief AI Officer – to AI systems and key stages and use a risk-based governance model, applying the same scrutiny to third-party tools as to those developed in-house. This means understanding supplier terms, limitations, and liabilities just as well as they understand their own systems.
The Organisation for Economic Co-operation and Development (OECD) captures this well in its guidance on advancing accountability in AI, which recommends that organisations create “mechanisms to embed the AI risk-management process into broader organisational governance, fostering a culture of risk management both within organisations and across the entire AI value chain.”
Next is explainability. Organisations should be able to show how an AI system reaches a decision. That requires mechanisms to document and trace decision-making, alongside clear records of system design, training data, and decision processes. Taken together, this allows teams to understand the lineage of information from a system’s inception through to deployment.
Fairness focuses on ensuring that AI systems produce equitable outcomes and do not replicate or amplify existing biases. Without deliberate checks, systems can cause harm by delivering skewed results – a particular problem in high-impact areas such as recruitment, healthcare and criminal justice. To mitigate this, organisations should implement bias detection measures, review outputs regularly across relevant groups, and design governance frameworks that can accommodate local non-discrimination requirements. In practice, this means building systems to meet the highest legal standard they are likely to encounter, including obligations under laws such as the UK’s Equality Act 2010 and the EU’s Charter of Fundamental Rights.
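One simple bias-detection measure of the kind described above is to compare favourable-outcome rates across groups. The sketch below is illustrative only: the group labels, decisions, and the 0.2 tolerance are assumptions for the example, not a complete fairness audit or a legal test under the Equality Act.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """Return the largest gap in favourable-outcome rates across groups.

    `outcomes` is a list of (group, decision) pairs, where decision is
    True for a favourable result (e.g. shortlisted for interview).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        if decision:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical recruitment decisions: (applicant group, shortlisted?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(decisions)

# Flag the system for human review if the gap exceeds a chosen tolerance.
needs_review = gap > 0.2
```

A regular review would run a check like this across all relevant groups each time outputs are sampled, with the tolerance set by the strictest standard the system is likely to encounter.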
Transparency is about bringing clarity to both users and regulators. People should understand when AI is being used, what role it plays in decision-making, and what data underpins it. A practical starting point is to standardise documentation across AI systems, supported by internal tools such as model cards: short documents provided with machine learning models that explain the context in which the models are intended to be used, details of the performance evaluation procedures, and other relevant information. Without transparency, users cannot contest unfair outcomes, regulators cannot intervene effectively, and harmful impacts may be swept under the carpet.
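A model card of the kind described above can be standardised as a small structured record. This is a minimal sketch: the field names and the example system (`cv-screening-v2`) are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """A minimal model card capturing the information described in the
    text: intended context of use, evaluation details, and provenance."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation: dict = field(default_factory=dict)
    owner: str = ""  # named senior owner, supporting accountability

# Hypothetical example entry for a recruitment-screening model.
card = ModelCard(
    model_name="cv-screening-v2",
    intended_use="Rank CVs for human review; never auto-reject.",
    out_of_scope_uses=["fully automated hiring decisions"],
    training_data="Anonymised internal applications, 2019-2024",
    evaluation={"accuracy": 0.91, "subgroup_gap": 0.03},
    owner="Chief AI Officer",
)
record = asdict(card)  # serialisable record for audit and disclosure
```

Keeping every system's card in the same shape is what makes documentation comparable across a portfolio, for regulators and internal reviewers alike.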
Security involves protecting AI systems from unauthorised access, manipulation, or unintended behaviour. If security is weak, AIs can put organisations, users and their data at risk, exposing them to financial and reputational harm. Organisations should define performance and accuracy thresholds, stress-test systems under realistic conditions, and incorporate red-team testing to identify vulnerabilities.
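Performance and accuracy thresholds of the kind mentioned above can be enforced as an automated release gate run after stress-testing. The metric names and limits below are illustrative assumptions, not recommended values.

```python
def release_gate_violations(metrics, thresholds):
    """Return the metrics that fall below their minimum acceptable value.

    `metrics` holds results measured under realistic stress-test
    conditions; `thresholds` maps each metric to its required minimum.
    An empty result means the gate passes.
    """
    return [name for name, minimum in thresholds.items()
            if metrics.get(name, 0.0) < minimum]

# Hypothetical gate: block deployment if stress-test results fall short.
thresholds = {"accuracy": 0.90, "robustness_under_noise": 0.80}
metrics = {"accuracy": 0.93, "robustness_under_noise": 0.74}

violations = release_gate_violations(metrics, thresholds)
deploy_allowed = not violations
```

Red-team findings can feed the same gate: each discovered vulnerability becomes a new metric with its own threshold, so fixes are verified on every subsequent release.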
Finally, contestability ensures that people have a clear and accessible way to challenge or appeal AI-driven decisions. Without it, affected users have no recourse and problems may never be surfaced or resolved. Organisations should provide reporting channels at the point of use, assign senior owners to manage complaints, and ensure systems can be paused, reviewed, or updated where necessary.
What Are the Benefits of a Values-Based Framework?
There are two powerful reasons for adopting this values-based approach to AI governance. First, those that build and deploy AI systems have an ethical responsibility to the people and organisations affected by them; second, it is a more effective way to realise AI’s promised benefits in practice.
Users of AI systems, both corporate and individual, place implicit trust in their creators not to misuse personal data or expose them to unnecessary risk. When organisations break that trust, it becomes very difficult for them to retain those users. Ultimately, unless people trust AI systems, and can see the clear benefits they deliver, they won’t go along with their introduction. This will cause more social and economic division, and we’ll miss out on many of the opportunities presented by this technology.
On the other hand, companies that apply a values-based framework everywhere – including in regions with more relaxed governance requirements – can demonstrate to customers, investors, and regulators that they are holding themselves to a higher standard than basic compliance demands. This builds trust, engagement and, ultimately, business success.
Strong AI governance is a value creator, not a compliance burden. It enables businesses to bring new products to market more quickly, reduce their risk exposure, and scale their solutions across multiple markets with confidence.
McKinsey’s ‘The state of AI’ report found that “a CEO’s oversight of AI governance… is one element most correlated with higher self-reported bottom-line impact from an organization’s gen AI use,” underlining the commercial benefits of such an approach. In that respect, building strong ethical frameworks into AI systems represents enlightened self-interest.
Beyond all of this, though, it is simply the right thing to do. We’ve built our global ethical AI policy around the same principle: that advanced technologies must serve people and society, not the other way around. This reflects the wider vision of Society 5.0: a human-centred model of innovation that seeks to combine economic progress with the resolution of social challenges.
If emerging technologies like AI are to foster a happier, more harmonious society, they must be built on strong ethical foundations. That starts with a focus not only on the standards organisations are required to meet, but also the standards they’d like to achieve.