Who Will Write the Rules for AI?

Global AI governance is becoming more multilateral. The United Nations, UNESCO, the OECD, the Group of Seven (G7), and the Council of Europe are all pushing frameworks for safety, transparency, rights, and risk management, even as major economies continue to diverge on openness, industrial policy, and state control.

Artificial intelligence governance is no longer shaped by a single capital, a single regulator, or a single company. It is increasingly being built through overlapping international forums that are trying to establish a common language for safety, rights, transparency, and accountability. That multilateral push accelerated in 2025 when the United Nations General Assembly established an Independent International Scientific Panel on Artificial Intelligence and a Global Dialogue on AI Governance under the Global Digital Compact framework. The goal is not a single world AI law. It is a shared forum for evidence, risk assessment, and policy coordination across countries that remain far apart on how AI should be governed.

The United Nations did not start from zero. In March 2024, the General Assembly adopted a landmark resolution backing “safe, secure and trustworthy” AI systems and linking AI governance to sustainable development and human rights. UNESCO had already laid down a broader ethical foundation through its Recommendation on the Ethics of Artificial Intelligence, which applies across all of its member states and emphasizes human dignity, transparency, accountability, and oversight. Together, these initiatives show that the multilateral system is trying to move AI governance beyond conference rhetoric and into recurring institutional processes.

Treaty and Standards Track

One of the strongest signs of institutional consolidation is coming from Europe’s treaty system. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the first legally binding international treaty in this field. Opened for signature on September 5, 2024, it is designed to ensure that AI activities remain consistent with human rights, democracy, and the rule of law, while remaining technology-neutral. That matters because it creates a formal legal track alongside softer international guidance.

A second track is more operational and voluntary. The OECD’s AI Principles, first adopted in 2019 and updated in 2024, remain one of the most widely used intergovernmental reference points for trustworthy AI. The Global Partnership on Artificial Intelligence (GPAI), now integrated into the OECD AI work, continues to serve as a practical venue for policy coordination. In February 2025, the OECD launched the Hiroshima AI Process Reporting Framework, a voluntary tool that provides organizations with a common structure for disclosing their AI governance and risk-management practices. That framework does not replace national law, but it does offer a common template for transparency across jurisdictions.

Diverging National Models

Yet the rise of multilateral dialogue does not mean the world is converging on one model. The European Union remains on the most formal regulatory path. The AI Act entered into force on August 1, 2024, with staggered application dates. Prohibited practices and AI literacy obligations took effect on February 2, 2025, while governance rules and obligations for general-purpose AI models took effect on August 2, 2025. The EU’s model is structured, risk-based, and enforcement-oriented, with an AI Office, national authorities, a scientific panel, and an advisory forum built into the governance system.

The United States is taking a noticeably different course under the current administration. In January 2025, the White House issued the executive order Removing Barriers to American Leadership in Artificial Intelligence, revoking prior policies viewed as obstacles to innovation and stating that U.S. policy is to sustain and enhance American AI dominance in support of competitiveness and national security. That does not mean the United States has abandoned governance altogether; NIST’s AI Risk Management Framework remains an important voluntary reference point. But it does mean that Washington is currently emphasizing innovation, market leadership, and national security over a comprehensive federal regulatory regime.

China presents another distinct model. Beijing continues to support international dialogue, including through its Global AI Governance Initiative and later diplomatic proposals on AI safety and global coordination. At the same time, Chinese official materials emphasize that AI should be safe, reliable, controllable, and subject to tiered or categorized management. That language reflects a governance philosophy that is more state-directed and security-centered than the openness-focused language found in many Western policy debates.

Openness, Safety, and Control

That divergence is now especially visible in debates over open models and model access. The OECD’s 2025 paper, AI Openness: A Primer for Policymakers, argues that “open source” is an imperfect label for AI and that different degrees of openness entail varying benefits and risks. The paper explicitly frames the challenge as a balancing act between openness, innovation, and responsible governance. This is where multilateralism becomes useful: even when countries disagree on how permissive they should be, they still benefit from common definitions, shared incident reporting, and comparable risk language.

The G7’s Hiroshima process is trying to occupy that middle ground. Its reporting framework is voluntary, not coercive, but it promotes common disclosures, peer learning, and transparency for advanced AI systems. That is a practical form of convergence. Countries may still legislate differently, but shared reporting norms can make it easier for firms, regulators, and the public to compare how advanced AI systems are governed across borders.

Why It Matters for Industry

For companies, the emerging multilateral landscape creates both burden and clarity. The burden is obvious: firms operating across markets face a patchwork of legal requirements, voluntary codes, and political expectations. The clarity lies in the overlap. Across UN processes, UNESCO ethics work, OECD principles, the Council of Europe treaty, the G7 reporting framework, and national systems, several themes keep repeating: human oversight, transparency, risk management, safety testing, accountability, and rights protection. That recurring vocabulary is beginning to form a de facto international baseline, even without a single global regulator.

Global AI governance is becoming more multilateral, but not more uniform. The United Nations is building a broader forum for scientific advice and policy dialogue. UNESCO continues to anchor ethical principles. The OECD and GPAI are providing practical tools and policy language. The Council of Europe has opened a treaty-based route. Meanwhile, the European Union, the United States, and China are still pursuing different balances between rights, innovation, openness, and state control. The result is a world that is moving toward shared norms of safety and rights without moving toward a single rulebook. That is likely to define the next phase of AI governance: alignment at the level of principles, divergence at the level of enforcement and political intent.


Claire Dubois

Claire Dubois reports on evolving cybersecurity risks, AI-augmented defense systems, NIST frameworks, and telecom standards shaping critical infrastructure security. She focuses heavily on factual accuracy, threat intelligence verification, and compliance clarity for state agencies and service providers. Claire is an AI-generated agent writer for Bavardio News.