Closing the Gaps: Standardizing Responsible AI Amidst a Fragmented Global Governance Landscape
The global AI governance landscape is fragmented, with over 1,000 proposed and enacted policies spanning 69 countries, resulting in a complex web of inconsistent and often reactive regulations. Laws governing AI are rarely harmonized across borders and are deeply entangled with broader data protection, cybersecurity, and digital governance policies. This legal complexity makes it difficult for enterprises, especially those operating in sensitive sectors like telecom, healthcare, and government services, to implement AI solutions responsibly. In healthcare, AI tools risk exacerbating diagnostic inequities without fairness testing; in public services, opaque algorithms can reinforce systemic exclusion; and in telecom, restrictions around cross-border data sharing introduce significant governance challenges.
This was the focal point of a three-day workshop held in India, organized by the International Telecommunication Union (ITU) in collaboration with the National Institute of Communications Finance (NICF) and representatives from the Bay of Bengal Initiative for Multi-Sectoral Technical and Economic Cooperation (BIMSTEC) nations, including Nepal, Bhutan, Maldives, Sri Lanka, Bangladesh, and India. While the primary emphasis was on AI standardization for the telecom and information and communication technology sectors, the conversations extended to broader concerns of ethical implementation, operational accountability, and cross-border governance challenges.
A recurring theme was the ambiguity surrounding data ownership and responsibility. For instance, an AI system may provide misleading information, as happened when Air Canada’s chatbot incorrectly promised a refund and triggered a legal dispute. In such cases, who bears responsibility? Is it the developer who built the model, the organization that deployed it, or the source of the training data? While the airline was held accountable in this case, liability in similar scenarios can shift depending on jurisdiction, contractual arrangements, and the specifics of the AI life cycle. Without clear attribution, enterprises are left exposed. This challenge grows sharper with the increasing deployment of more autonomous systems such as generative and agentic AI, which introduce new layers of unpredictability and decision-making autonomy.
Enterprises operating within a fragmented AI regulatory landscape face three major operational challenges:
As the global regulatory environment for AI evolves, enterprises and governments both play a pivotal role in shaping a future that balances innovation with accountability. In a landscape that ranges from mature legal ecosystems to outright regulatory voids, responsible AI governance demands strategic foresight, operational discipline, and international collaboration.
Enterprises must navigate contrasting AI regulatory frameworks, where regions like the EU enforce rigorous standards while others, such as Nepal or parts of Africa, remain without formal governance. Responsible AI adoption requires organizations to design governance strategies that are adaptable across jurisdictions and sensitive to emerging risks. To future-proof their AI deployment, enterprises must anchor their efforts around three priorities:
As AI matures into a foundational technology, governments must mitigate risks and cultivate the conditions for responsible growth. While developed nations set benchmarks through technical standards and oversight agencies, others must accelerate efforts to avoid regulatory lag. This demands cohesive national action plans, cross-border alignment, and inclusive regulatory ecosystems. To steer AI development responsibly, governments should focus on the following:
The EU AI Act stands as the world’s first comprehensive legal framework for AI, designed to safeguard fundamental rights while fostering innovation. Its structured, risk-based approach sets a high bar for transparency, accountability, and proportionality, making it a benchmark that other nations can contextualize and emulate.
Key pillars of the EU framework include the following:
This Act serves as a reference model for governments seeking a mature, enforceable balance between innovation, public trust, and rights preservation.
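The Act's risk-based structure can be illustrated with a toy lookup, purely as a reader's sketch: the four tier names follow the Act's risk-based approach, but the example use cases and obligation summaries below are simplified paraphrases for illustration, not legal text.

```python
# Illustrative sketch only: a simplified encoding of the EU AI Act's
# four risk tiers. Use-case examples and obligation strings are
# paraphrased summaries, not legal guidance.

EU_AI_ACT_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["AI in medical devices", "credit scoring"],
        "obligation": "conformity assessment, risk management, human oversight",
    },
    "limited": {
        "examples": ["chatbots", "AI-generated content"],
        "obligation": "transparency (disclose that users are interacting with AI)",
    },
    "minimal": {
        "examples": ["spam filters", "AI in video games"],
        "obligation": "no mandatory obligations; voluntary codes of conduct",
    },
}

def obligation_for(tier: str) -> str:
    """Return the simplified obligation summary for a given risk tier."""
    try:
        return EU_AI_ACT_TIERS[tier]["obligation"]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")

print(obligation_for("limited"))
```

The point of the proportionality principle is visible even in this toy form: obligations scale with risk, from an outright ban down to voluntary codes of conduct.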
To close the governance gaps and prepare for next-generation AI challenges, governments and enterprises must adopt strategies that go beyond compliance and enable global cohesion:
By Abhisekh Satapathy, Principal Analyst, Avasant
Avasant’s research and other publications are based on information from the best available sources and Avasant’s independent assessment and analysis at the time of publication. Avasant takes no responsibility and assumes no liability for any error/omission or the accuracy of information contained in its research publications. Avasant does not endorse any provider, product or service described in its RadarView™ publications or any other research publications that it makes available to its users, and does not advise users to select only those providers recognized in these publications. Avasant disclaims all warranties, expressed or implied, including any warranties of merchantability or fitness for a particular purpose. None of the graphics, descriptions, research, excerpts, samples or any other content provided in the report(s) or any of its research publications may be reprinted, reproduced, redistributed or used for any external commercial purpose without prior permission from Avasant, LLC. All rights are reserved by Avasant, LLC.