Introduction
Artificial intelligence (AI) has been a cornerstone of technological innovation for decades, but the integration of responsible AI (RAI) principles with AI technology has taken on unprecedented urgency in the past year. Historically, responsible AI guidelines—emphasizing fairness, explainability, inclusivity, reliability, and transparency—have been part of the AI discourse, gaining prominence in 2018 with the advent of dedicated RAI platforms such as Modulos and Fiddler AI.
But did enterprises invest in responsible AI as part of their AI implementations? The answer is a resounding no. Only a handful of forward-thinking enterprises, those ahead of the digital transformation curve, truly grasped the significance of responsible AI and recognized AI's potential risks and ethical dilemmas early on. Most companies considered responsible AI only when facing compliance issues or external pressure, treating it as a cost center and a nice-to-have feature. However, the emergence of generative AI tools such as Gemini and ChatGPT has dramatically altered the AI threat landscape, introducing new risks and heightening the urgency for robust responsible AI practices.
- Intellectual property (IP) misappropriation: Generative AI's capacity to create text, images, video, and audio from public data has ignited intense debates over copyright infringement and intellectual property theft. Ensuring that AI-generated content complies with IP laws has become a critical and daunting task. A stark illustration came in May 2024, when a prominent Hollywood actress threatened legal action against OpenAI, alleging that one of its chatbot voices imitated her own without her consent.
- Hallucinations and inaccurate responses: Generative AI models are notorious for producing outputs that, while plausible, are often incorrect or nonsensical, a phenomenon known as hallucination. These inaccuracies can lead to the spread of misinformation and flawed decision-making. In April 2024, Noyb, an Austrian non-profit organization, filed a formal complaint against OpenAI with the Austrian data protection authority, accusing the company of generating inaccurate personal data about individuals and failing to correct it, in violation of GDPR.
- Cyberattacks: Generative AI has not only expanded the horizons of creativity but also those of cyber threats. It has empowered cybercriminals to execute sophisticated attacks, including advanced phishing campaigns built with tools such as WormGPT, maliciously altered models such as PoisonGPT, and hyper-realistic deepfakes that can bypass traditional security filters. These attacks have infiltrated corporate email systems and supply chains, leading to severe data breaches and operational disruptions.
Enterprise Adoption of Responsible AI
As the capabilities of large language models (LLMs) advance—incorporating emotional intelligence, multimodality, and extended context lengths in the pursuit of artificial general intelligence (AGI)—enterprise awareness of the critical role that responsible AI plays has surged. Furthermore, the increase in AI mishaps has intensified concerns, making it clear that ignoring responsible AI principles can be catastrophic for brand image and legal standing. Several high-profile incidents underscore the consequences of neglecting responsible AI.
- In February 2024, the British Columbia Civil Resolution Tribunal held Air Canada liable for incorrect information provided by its generative AI chatbot and ordered the airline to pay damages to the affected customer.
- In December 2023, a Chevrolet dealership was forced to shut down the generative AI virtual assistant on its website after users manipulated it through prompt injection, getting it to agree to sell a $76,000 automobile for just $1.
These incidents have had a ripple effect, prompting more organizations to align their AI solutions with responsible AI guidelines. As a result, approximately 46% of enterprises are now investing in responsible AI.
Among these enterprises, maturity levels in responsible AI adoption vary widely. With generative AI projects transitioning from proof of concept (POC) to production, the discourse around ethical, fair, and accountable AI is intensifying. Establishing AI review boards and implementing ethical AI frameworks are becoming essential. Organizations are bringing together teams from technology, legal, risk, and business units to form centralized AI governance boards. These boards streamline AI and generative AI initiatives by assessing, validating, and approving use cases against responsible AI principles, and they share best practices and solution accelerators with business and global function heads.
For instance, Plexus Worldwide has formed a team comprising IT, legal, and HR representatives to oversee AI governance and policy development. This team sets the company’s risk tolerance, defines acceptable use cases and restrictions, and determines necessary disclosures. They have also drafted a policy outlining roles and responsibilities, scope, context, acceptable use guidelines, risk tolerance, and governance.
PepsiCo, which aims to be a leader in industrialized AI applications and responsible AI, has collaborated with the Stanford Institute for Human-Centered Artificial Intelligence to shape industry standards focused on smart manufacturing, supply chain, and sustainability. They are also evaluating IBM’s watsonx.governance platform for centralized AI life cycle risk management and observability, moving away from individual business teams managing their own models and governance.
Conversely, companies such as Home Credit are in the early stages of their AI journey. With the rollout of AI pilot projects, such as GitHub Copilot for coding and documentation, these companies are initiating discussions about creating ethical governance structures to ensure compliance with their codes of conduct.
Conclusion
The shift toward responsible AI is no longer optional. As enterprises recognize the profound impact of ethical AI practices, they are taking definitive steps to integrate responsible AI into their core operations, ensuring AI is developed and deployed with integrity and accountability. As AI technology advances, newer risks continue to emerge, including the infringement of personality and publicity rights through the unauthorized use of an individual's voice, likeness, or other aspects of their persona. While laws currently safeguard creative content, they will need to evolve to address these emerging threats.
Enterprises must prioritize their core operations and revenue growth strategies, leaving the complex and evolving domain of generative AI guidelines, AI regulations, and data protection laws to specialists. This is where service providers with mature capabilities in responsible AI come into play. A collaboration model is ideal: the company establishes a responsible AI framework aligned with its mission, vision, and code of conduct, and service providers enhance it based on industry- and region-specific guidelines.
These service providers are often part of responsible AI consortiums and agencies, such as the World Economic Forum’s AI Governance Alliance and the National Institute of Standards and Technology’s AI Safety Institute Consortium. They navigate the evolving regulatory landscape and are equipped with the tools, talent, and technologies to enable responsible AI. Their expertise can significantly expedite the development and execution of an organization’s AI implementations, ensuring that AI is used ethically and responsibly.
By Chandrika Dutt, Associate Research Director, Avasant, and Abhisekh Satapathy, Lead Analyst, Avasant