AI governance, from the perspective of an organisation aiming for regulatory compliance and risk reduction, involves establishing a structured framework to oversee the development, deployment, and monitoring of AI systems in line with legal, ethical, and operational standards. This includes defining clear roles and responsibilities, setting policies on data use, model development, and decision-making, and aligning practices with applicable regulations such as the EU AI Act or sector-specific guidelines. Governance ensures that AI is used responsibly across the organisation, with accountability mechanisms in place to address issues like bias, safety, and transparency.
— ChatGPT, May 2025

The International AI Governance Association (IAGA) is a Global Digital Foundation initiative. The IAGA exists to enable organisations to develop, learn, share and align best practice on global AI governance. In turn, this alignment facilitates global market access for AI products and services. The association brings together representatives from across the AI ecosystem, convening events and working groups with leading industry experts, AI assurance providers, regulators, policy makers, standards and certification bodies, and academics.

Our growing membership currently numbers over 300 professionals working in AI technology, international standards development, public policy, and academia. IAGA was previously the AI Assurance Club, and builds on the Club’s work since it was established in June 2022.

What is AI Governance?

Any business that develops or uses AI will encounter both compliance-related risks and non-legal risks, such as reputational harm. To identify and effectively manage these risks, many organisations are already developing robust internal AI governance structures, putting in place the right internal processes and guidelines to ensure effective decision-making about responsible AI development, deployment and use. There is rapidly growing demand for AI governance professionals with the expertise and skills needed to support organisations in meeting their strategic priorities.

The production and distribution of AI systems is complex: AI systems are generally developed through the collaboration of many actors within a value chain, rather than ‘in-house’ by a single entity. In January 2024 we proposed a multi-actor governance framework (MAGF) to support the flow of assurance information between actors in the AI value chain; we outline this approach in our MAGF white paper. We are delighted and proud that our work on transparency through the value chain, and on the need for frameworks to support information sharing, has raised the profile of this issue.

Latest News