Closing the Gaps: Standardizing Responsible AI Amidst a Fragmented Global Governance Landscape

May 2025

Introduction

The global AI governance landscape is fragmented, with over 1,000 proposed and enacted policies spanning 69 countries, resulting in a complex web of inconsistent and often reactive regulations. Laws governing AI are rarely harmonized across borders and are deeply entangled with broader data protection, cybersecurity, and digital governance policies. This legal complexity makes it difficult for enterprises, especially those operating in sensitive sectors like telecom, healthcare, and government services, to implement AI solutions responsibly. In healthcare, AI tools risk exacerbating diagnostic inequities without fairness testing; in public services, opaque algorithms can reinforce systemic exclusion; and in telecom, restrictions around cross-border data sharing introduce significant governance challenges.

This was the focal point of a three-day workshop held in India, organized by the International Telecommunication Union (ITU) in collaboration with the National Institute of Communication Finance (NICF) and representatives from the Bay of Bengal Initiative for Multi-Sectoral Technical and Economic Cooperation (BIMSTEC) nations, including Nepal, Bhutan, the Maldives, Sri Lanka, Bangladesh, and India. While the primary emphasis was on AI standardization for the telecom and information and communication technology sectors, the conversations extended to broader concerns of ethical implementation, operational accountability, and cross-border governance challenges.

A recurring theme was the ambiguity surrounding data ownership and responsibility. For instance, an AI system may provide misleading information, as happened when Air Canada’s chatbot incorrectly promised a refund and triggered a legal dispute. In such cases, who bears responsibility? Is it the developer who built the model, the organization that deployed it, or the source of the training data? While the airline was held accountable in this case, liability in similar scenarios can shift depending on jurisdiction, contractual arrangements, and the specifics of the AI life cycle. Without clear attribution, enterprises are left exposed. This challenge grows sharper with the increasing deployment of more autonomous systems such as generative and agentic AI, which introduce new layers of unpredictability and decision-making autonomy.

The Cost of Chaos: Burdens on Enterprises

Enterprises operating within a fragmented AI regulatory landscape face three major operational challenges:

    1. Rising compliance overheads: Organizations must create specialized internal teams to track, interpret, and comply with varying national and sector-specific regulations. For example, a multinational insurance provider operating in Europe and Asia must maintain separate compliance and legal units per region to handle differing AI and data governance requirements.
    2. Increased operational costs: Regulatory fragmentation often leads to redundant audits, multiple privacy protocols, and overlapping legal obligations.
    3. Delayed time to market: Legal ambiguity introduces hesitation and inconsistency in how and when AI systems, particularly newer models, can be deployed.

Call to Action for Enterprises and Governments

As the global regulatory environment for AI evolves, enterprises and governments play a pivotal role in shaping a future where innovation is balanced with accountability. With a fragmented global landscape ranging from mature legal ecosystems to regulatory voids, responsible AI governance demands strategic foresight, operational discipline, and international collaboration.

The Enterprise Imperative

Enterprises must navigate contrasting AI regulatory frameworks, where regions like the EU enforce rigorous standards while others, such as Nepal or parts of Africa, remain without formal governance. Responsible AI adoption requires organizations to design governance strategies that are adaptable across jurisdictions and sensitive to emerging risks. To future-proof their AI deployment, enterprises must anchor their efforts around three priorities:

    • Institutionalize AI oversight functions: Establish a dedicated AI governance council comprising leaders from business, legal, compliance, and strategy functions to define internal policies, operationalize risk taxonomies, and implement adaptive accountability mechanisms through regular audits and iterative oversight.
    • Integrate global and local governance requirements: Embed compliance mechanisms that account for cross-border regulatory divergence, sectoral mandates, and anticipated legal developments in under-regulated regions, maintaining readiness amid shifting global norms.
    • Operationalize new AI governance innovations: Adopt emerging data and AI evaluation frameworks such as SHADES and annotated-demand-levels (ADeLe) to proactively identify cultural, linguistic, and systemic biases and to stress-test AI systems for reliability and explainability (see the illustrative sketch after this list).
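
To make the last priority more concrete, and without assuming either framework's actual interface, the following is a minimal sketch of the underlying idea: paired prompts that differ only in a demographic term are sent to a model and checked for divergent behavior. The query_model function and the probe pairs are hypothetical placeholders, not part of SHADES or ADeLe.

```python
# Minimal sketch of a paired-prompt bias stress-test. query_model is a
# hypothetical placeholder for any text-generation endpoint, and the probe
# pairs are toy examples, not items from the SHADES dataset itself.
from dataclasses import dataclass

@dataclass
class ProbePair:
    template: str   # prompt containing a {group} placeholder
    group_a: str
    group_b: str

PROBES = [
    ProbePair("Assess the loan eligibility of a {group} applicant "
              "with this credit profile.", "male", "female"),
    ProbePair("Summarize the hiring risk of a {group} candidate.",
              "younger", "older"),
]

def query_model(prompt: str) -> str:
    """Placeholder: replace with a call to your model provider's SDK."""
    return "stub response"

def stress_test(probes: list[ProbePair]) -> list[tuple[str, str, str]]:
    """Flag probe pairs whose outputs diverge across demographic variants."""
    findings = []
    for p in probes:
        out_a = query_model(p.template.format(group=p.group_a))
        out_b = query_model(p.template.format(group=p.group_b))
        if out_a != out_b:  # naive check; real audits score semantic divergence
            findings.append((p.template, out_a, out_b))
    return findings

if __name__ == "__main__":
    print(stress_test(PROBES))  # empty with the stub; populated in real use
```

In practice, the exact-match check would be replaced with semantic scoring (for example, sentiment or refusal-rate comparison), but the structure, systematically varying one attribute while holding the prompt fixed, is the core of the technique.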

The Government Mandate

As AI matures into a foundational technology, governments must mitigate risks and cultivate the conditions for responsible growth. While developed nations set benchmarks through technical standards and oversight agencies, others must accelerate efforts to avoid regulatory lag. This demands cohesive national action plans, cross-border alignment, and inclusive regulatory ecosystems. To steer AI development responsibly, governments should focus on the following:

    • Formulate AI-native regulatory architecture: Replace outdated IT and data laws with purpose-built AI legislation that embeds fairness, transparency, and safety-by-design principles.
    • Establish multistakeholder consortia and governance bodies: Partner with academia, civil society, and industry leaders to cocreate standards and oversee implementation through independent regulatory institutions.
    • Advance public literacy on emerging AI frontiers: Launch national programs to build awareness of generative AI, agentic systems, and their socio-technical risks, equipping policymakers and citizens alike for the AI age.

Case in Point: The EU AI Act as a Regulatory Benchmark

The EU AI Act stands as the world’s first comprehensive legal framework for AI, designed to safeguard fundamental rights while fostering innovation. Its structured, risk-based approach sets a high bar for transparency, accountability, and proportionality, making it a benchmark that other nations can contextualize and emulate.

Key pillars of the EU framework include the following:

    • Risk-tiered classification: AI systems are segmented into unacceptable, high, limited, and minimal risk categories, with each tier facing distinct regulatory obligations (a simplified triage of this tiering is sketched after this list).
    • High-risk system compliance: The Act mandates pre-market conformity assessments, documentation, human oversight protocols, and post-market monitoring for AI deployed in critical sectors such as education, healthcare, and law enforcement.
    • Transparency obligations: AI systems that interact with humans, or that perform emotion recognition or biometric categorization, must disclose their nature to the people affected.
    • Regulatory sandboxes: EU member states are empowered to establish controlled environments that enable innovation under supervision, allowing enterprises to test AI systems while meeting legal and ethical standards.
    • Strong enforcement and penalties: Noncompliance can draw fines of up to €35 million or 7 percent of global annual turnover, whichever is higher, for the most serious violations.
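
As a toy illustration of how an enterprise might triage its AI portfolio against this tiering, the sketch below maps use-case descriptions to risk categories via keywords. The mapping is a drastic simplification for illustration only; actual classification requires legal review of the Act and its annexes.

```python
# Toy triage of AI use cases against the EU AI Act's four risk tiers.
# The keyword mapping is a drastic simplification for illustration only;
# real classification requires legal review of the Act and its annexes.
RISK_TIERS = {
    "unacceptable": ["social scoring", "subliminal manipulation"],
    "high": ["credit scoring", "medical diagnosis", "recruitment screening",
             "law enforcement"],
    "limited": ["chatbot", "synthetic content"],
}

def classify_use_case(description: str) -> str:
    """Return the first matching tier; default to minimal risk."""
    text = description.lower()
    for tier, markers in RISK_TIERS.items():
        if any(marker in text for marker in markers):
            return tier
    return "minimal"

if __name__ == "__main__":
    for case in ["Customer-service chatbot for billing queries",
                 "Recruitment screening of applicant CVs",
                 "Municipal social scoring of residents"]:
        print(f"{case} -> {classify_use_case(case)}")
```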

This Act serves as a reference model for governments seeking a mature, enforceable balance among innovation, public trust, and rights preservation.

Looking Ahead: Bridging the Regulatory Fragmentation

To close the governance gaps and prepare for next-generation AI challenges, governments and enterprises must adopt strategies that go beyond compliance and enable global cohesion:

    • Modular, sector-specific AI regulations: Governments should codevelop modular, domain-specific governance templates across sectors such as healthcare, telecom, and finance, while enabling enterprises to operationalize these standards through tailored compliance roadmaps.
    • Regional governance alignment charters: Intergovernmental platforms such as BIMSTEC, SAARC, BRICS, and ASEAN should formalize regulatory alignment frameworks, easing legal interoperability and lowering compliance barriers for enterprises operating across borders.
    • Model provenance and lineage registries: Public and private efforts must establish registries to log AI models’ development history, training sources, and audit trails, allowing enterprises to demonstrate regulatory alignment and mitigate downstream liability (a minimal record schema is sketched after this list).
    • Simulated stress-testing of AI policies: Regulators and leading enterprises should jointly build controlled simulation zones to pressure-test AI systems and governance protocols against risks such as unintended model behavior and adversarial misuse.
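
To make the registry concept concrete, the following is a minimal sketch of what a provenance record might capture. The schema, field names, and sample values are illustrative assumptions, not an established registry standard.

```python
# Minimal sketch of a model provenance record. The schema and field names
# are illustrative assumptions, not an established registry standard.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ProvenanceRecord:
    model_id: str
    version: str
    developer: str
    training_data_sources: list[str]
    release_date: date
    audit_trail: list[str] = field(default_factory=list)  # audit refs/links

    def log_audit(self, audit_ref: str) -> None:
        """Append an audit reference to the record's trail."""
        self.audit_trail.append(audit_ref)

if __name__ == "__main__":
    record = ProvenanceRecord(
        model_id="claims-triage-llm",            # hypothetical model
        version="2.3.0",
        developer="ExampleCorp AI Lab",          # hypothetical organization
        training_data_sources=["internal-claims-2020-2023",
                               "public-web-filtered"],
        release_date=date(2025, 3, 1),
    )
    record.log_audit("fairness-audit-2025-Q1")
    print(json.dumps(asdict(record), default=str, indent=2))
```

A production registry would add cryptographic signing and immutable storage so that lineage claims can be independently verified, but even a simple shared schema like this would ease cross-border audits.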

By Abhisekh Satapathy, Principal Analyst, Avasant