Responsible AI: A Strategic Imperative for Enterprises in Generative AI Implementation
Artificial intelligence (AI) has been a cornerstone of technological innovation for decades, but integrating responsible AI (RAI) principles into AI implementations has taken on unprecedented urgency in the past year. Responsible AI guidelines, which emphasize fairness, explainability, inclusivity, reliability, and transparency, have long been part of the AI discourse, gaining prominence in 2018 with the advent of dedicated RAI platforms such as Modulos and Fiddler AI.
But did enterprises invest in responsible AI as part of their AI implementations? The answer is a resounding no. Only a handful of forward-thinking enterprises, those ahead of the digital transformation curve, truly grasped the significance of responsible AI and recognized AI’s potential risks and ethical dilemmas. Most companies considered responsible AI only when facing compliance issues or external pressure, treating it as a cost center and a nice-to-have. However, the emergence of generative AI tools such as Gemini and ChatGPT has dramatically altered the AI threat landscape, introducing new risks and heightening the urgency for robust responsible AI practices.
As the capabilities of large language models (LLMs) advance, incorporating emotional intelligence, multimodality, and extended context lengths in the pursuit of artificial general intelligence (AGI), enterprise awareness of the critical role of responsible AI has surged. The rise in AI mishaps has intensified these concerns, making it clear that ignoring responsible AI principles can be catastrophic for brand image and legal standing. Several high-profile incidents underscore the consequences of neglecting responsible AI.
These incidents have had a ripple effect, prompting more organizations to align their AI solutions with responsible AI guidelines. As a result, approximately 46% of enterprises are now investing in responsible AI.

Among these enterprises, maturity levels in responsible AI adoption vary widely. As generative AI projects transition from proof of concept (POC) to production, the discourse around ethical, fair, and accountable AI is intensifying. Establishing AI review boards and implementing ethical AI frameworks are becoming essential. Organizations are bringing together teams from technology, legal, risk, and business units to form centralized AI governance boards. These boards streamline AI and generative AI initiatives by assessing, validating, and approving use cases against responsible AI principles and by sharing best practices and solution accelerators with business and global unit heads.
For instance, Plexus Worldwide has formed a team comprising IT, legal, and HR representatives to oversee AI governance and policy development. This team sets the company’s risk tolerance, defines acceptable use cases and restrictions, and determines necessary disclosures. It has also drafted a policy outlining roles and responsibilities, scope, context, acceptable use guidelines, risk tolerance, and governance.
PepsiCo, which aims to be a leader in industrialized AI applications and responsible AI, has collaborated with the Stanford Institute for Human-Centered Artificial Intelligence to shape industry standards focused on smart manufacturing, supply chain, and sustainability. The company is also evaluating IBM’s watsonx.governance platform for centralized AI life cycle risk management and observability, moving away from a model in which individual business teams manage their own models and governance.
Conversely, companies such as Home Credit are in the early stages of their AI journey. As they roll out AI pilot projects, such as GitHub Copilot for coding and documentation, these companies are initiating discussions about creating ethical governance structures to ensure compliance with their codes of conduct.
The shift toward responsible AI is no longer optional. As enterprises recognize the profound impact of ethical AI practices, they are taking definitive steps to integrate responsible AI into their core operations, ensuring AI is developed and deployed with integrity and accountability. As AI technology advances, newer risks continue to emerge, including the infringement of personal and public rights through the unauthorized use of aspects of an individual’s persona. While laws currently safeguard creative content, they will need to evolve to address these emerging threats.
Enterprises must prioritize their core operations and revenue growth strategies, leaving the complex and evolving domain of generative AI and AI guidelines, as well as data protection laws, to specialists. This is where service providers with mature capabilities in responsible AI come into play. The ideal approach is a collaboration model in which a company establishes a responsible AI framework aligned with its mission, vision, and code of conduct, and service providers enhance it based on industry- and region-specific guidelines.
These service providers are often part of responsible AI consortiums and agencies, such as the World Economic Forum’s AI Governance Alliance and the National Institute of Standards and Technology’s AI Safety Institute Consortium. They navigate the evolving regulatory landscape and are equipped with the tools, talent, and technologies to enable responsible AI. Their expertise can significantly expedite the development and execution of an organization’s AI implementations, ensuring that AI is used ethically and responsibly.
By Chandrika Dutt, Associate Research Director, Avasant, and Abhisekh Satapathy, Lead Analyst, Avasant