Generative AI Gold Rush: Mitigating the Risks
Generative AI is one of the most hyped technologies in the current AI landscape, and rightly so: it brings tremendous efficiency gains and data monetization opportunities to the table. Although large language models (LLMs), the foundation of generative AI, have been around since the introduction of the transformer architecture in 2017, adoption has skyrocketed with user-friendly, no-code platforms such as ChatGPT. But with great power comes great responsibility. The rapid advancement of generative AI has heightened societal risks and dilemmas, including racial and social biases, misinformation, ethical and transparency concerns, and data privacy threats.
Figure 1: The most prominent risks and challenges enterprises face with generative AI.

Ethics, fairness, and explainability are long-standing issues in AI, typically managed through a responsible AI framework. Although C-level discussions frequently address responsible AI, its implementation has been slow due to the perceived lack of immediate financial impact. It is often seen as a cost rather than an investment. However, with the advent of generative AI, ignoring AI ethics is no longer an option. Responsible AI, addressing language toxicity, output bias, and other such issues, will be central to all generative AI discussions.
The new risks posed by generative AI around data protection, copyright, and cybersecurity are critical and require immediate attention. Data privacy and security have emerged as the most significant barriers for enterprises adopting generative AI and moving projects from experimentation to production. The biggest challenge enterprises face is the risk of uploading data, such as source code or sales figures, to a chatbot that might learn from such proprietary information and retain or share it outside the company or with employees lacking proper authorization. This is particularly acute in heavily regulated industries like healthcare and banking, where data protection is paramount and reputations are at stake. As a result, many enterprises are proactively incorporating security guardrails to prevent data leakage, toxic or harmful content, code security vulnerabilities, and IP infringement risks when using generative AI platforms, and security vendors are responding with new features that address these risks.
Concerning the high environmental impact of LLMs, various techniques are being evaluated to develop faster, more economical LLMs with a smaller environmental footprint. For instance, LiGO (linear growth operator) is a technique developed by MIT researchers to reduce the environmental impact of training LLMs by roughly halving the computational cost. It works by initializing the weights of a larger model from those of smaller pretrained ones, enabling efficient neural network scaling. This preserves the performance benefits of larger models while requiring substantially less compute and training time than training a large model from scratch.
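To make the idea of reusing small-model weights concrete, the sketch below shows a simpler, function-preserving width expansion in the spirit of Net2Net rather than LiGO itself, which learns the growth mapping as a data-driven linear operator. The function and variable names are illustrative placeholders, not part of any published implementation.

```python
import numpy as np

def grow_layer_width(W_in, W_out, new_width, rng=None):
    """Grow a hidden layer by copying existing units to fill the new width
    and splitting their outgoing weights, so the larger layer initially
    computes the same function as the smaller one.

    W_in  : (hidden, d_in)  incoming weights of the small layer
    W_out : (d_out, hidden) outgoing weights of the small layer
    """
    rng = rng or np.random.default_rng(0)
    hidden = W_in.shape[0]
    # Decide which existing unit each new slot copies.
    mapping = np.concatenate([np.arange(hidden),
                              rng.integers(0, hidden, new_width - hidden)])
    W_in_big = W_in[mapping]                      # duplicate incoming rows
    # Count how many copies each original unit has, then split its
    # outgoing weight evenly among those copies.
    counts = np.bincount(mapping, minlength=hidden)
    W_out_big = W_out[:, mapping] / counts[mapping]
    return W_in_big, W_out_big

# Usage: grow a 4-unit hidden layer of a tiny ReLU MLP to 6 units.
rng = np.random.default_rng(42)
W_in, W_out = rng.normal(size=(4, 8)), rng.normal(size=(3, 4))
W_in_big, W_out_big = grow_layer_width(W_in, W_out, 6, rng)
x = rng.normal(size=8)
small = W_out @ np.maximum(W_in @ x, 0)           # small model output
big = W_out_big @ np.maximum(W_in_big @ x, 0)     # grown model output
assert np.allclose(small, big)                    # function is preserved
```

Because the grown network starts out computing exactly what the small one did, training resumes from a useful point instead of from random initialization, which is the source of the compute savings.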
The path ahead for generative AI is anything but certain. The pivotal question is whether the journey of generative AI will parallel that of technologies like self-driving cars, which, despite groundbreaking advancements and heavy investments, face a myriad of technical, regulatory, and ethical challenges to real-world deployment. Some of the efforts described below reflect a growing awareness and proactive approach towards managing the risks associated with generative AI across tech giants, enterprises, and governments.
Tech giants
Enterprises
Governments
Many countries, including the United States and China, as well as the member states of the European Union, have introduced bills and regulations governing generative AI usage. These address copyrighted content generation and transfer, AI content watermarking, and automated decision-making backed by ethical AI.

Figure 2: Most countries are developing guardrails to minimize risks associated with generative AI.
The EU AI Act is currently in its final stages of deliberation. If enacted in its current form, the act will impose strict conformity standards on all parties, including providers, deployers, and users, with no ability to transfer risk from one party to another. Meanwhile, in the US, regulations vary by state, with California focusing on online transparency to prevent the use of hidden bots in sales and elections, and Texas implementing a data protection and privacy act. As these regulations become more stringent, companies will be forced to take responsibility across all facets of AI, not just generative AI. This highlights a global trend towards increased accountability and regulation in the AI sector.
Furthermore, enterprises should remain flexible regarding the use of hyperscalers and on-premises solutions. The myriad use cases for generative AI, combined with the evolving landscape of data protection, processing, and storage laws, may restrict the use of certain hyperscalers and necessitate on-premises solutions. This will likely result in a hybrid generative AI strategy, combining both on-premises and cloud-based solutions. To prepare for this eventuality, enterprises should design adaptable architectures capable of switching between different compute types, whether using a hyperscaler, a SaaS provider, or an on-premises solution.
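One way to keep that flexibility is to place a thin abstraction between application code and the model provider, so the compute target can be swapped through configuration rather than code changes. The sketch below assumes both backends are reached over HTTP; the endpoint URLs, payload shape, and class names are placeholders, not any specific vendor's API.

```python
from abc import ABC, abstractmethod
import requests  # both backends are reached over plain HTTP in this sketch

class LLMBackend(ABC):
    """Common interface so application code never hard-codes a provider."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class HostedBackend(LLMBackend):
    """Hyperscaler or SaaS endpoint; URL, key, and payload shape are assumptions."""
    def __init__(self, endpoint: str, api_key: str):
        self.endpoint, self.api_key = endpoint, api_key

    def generate(self, prompt: str) -> str:
        resp = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["text"]  # response field name is an assumption

class OnPremBackend(LLMBackend):
    """Model served inside the network, e.g. behind a local HTTP endpoint."""
    def __init__(self, endpoint: str = "http://localhost:8080/generate"):
        self.endpoint = endpoint

    def generate(self, prompt: str) -> str:
        resp = requests.post(self.endpoint, json={"prompt": prompt}, timeout=30)
        resp.raise_for_status()
        return resp.json()["text"]

def backend_from_config(cfg: dict) -> LLMBackend:
    """Pick the compute target from configuration, not from application code."""
    if cfg["target"] == "on_prem":
        return OnPremBackend(cfg["endpoint"])
    return HostedBackend(cfg["endpoint"], cfg["api_key"])
```

With this pattern, moving a workload from a hyperscaler to an on-premises deployment, for example when a new data residency rule lands, is a configuration change rather than a rewrite.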
Generative AI is quickly becoming a competitive necessity for enterprises; those who delay its adoption risk falling behind or going out of business. While regulations and reforms are still taking shape, the following proactive steps will help enterprises leverage the benefits of generative AI while minimizing its associated risks.
Seek data protection assurances: Initially, companies hesitated to adopt cloud services until AWS received CIA approval in 2013, significantly boosting trust and adoption. Similarly, Azure OpenAI recently received US government approval and announced new guidelines guaranteeing data security during transit and storage, confirming that prompts and company data are not used for training. This endorsement will likely reduce resistance to adopting Azure OpenAI and, over time, solutions by other LLM providers. As the landscape evolves, using generative AI products will pose data security risks similar to using SaaS products. Adopting generative AI should be relatively easy for companies already comfortable with SaaS platforms. Alternatively, companies can opt for on-premises, open-source models, which may be more costly but preferred by sectors like finance and insurance that are traditionally reluctant to send data outside their network.
Implement content moderation and security checks: Enterprises must ensure that any LLM they use, whether closed, open source, or a narrow transformer, undergoes comprehensive security checks. These include protection against prompt injection, jailbreak, semantic, and HopSkipJump attacks, as well as checks for personally identifiable information, IP violations, and internal policy compliance. These checks should apply from the moment a user enters a prompt until the foundation model generates a response, and the output should undergo the same checks before being returned to the user. Technical components such as scanners, moderators, and filters will further help secure and validate the information. Context-specific blocking, data hashing before input, and exclusion lists (customer logos, personally identifiable information, and so on) will also be essential for data preparation.
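As a rough illustration of how such checks can be chained together, the sketch below combines pattern-based PII detection, a simple prompt-injection screen, an exclusion list, and hash-based redaction. The patterns, list entries, and function names are illustrative assumptions; production systems would rely on dedicated scanning and moderation tooling rather than hand-written regexes.

```python
import hashlib
import re

# Illustrative patterns only; real deployments use dedicated scanners and moderators.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any) previous instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
]
EXCLUSION_LIST = {"acme corp", "acme logo"}  # hypothetical customer terms to block

def scan(text: str) -> list[str]:
    """Return the policy violations found in a prompt or a model response."""
    findings = []
    findings += [f"pii:{name}" for name, pat in PII_PATTERNS.items() if pat.search(text)]
    findings += ["prompt_injection" for pat in INJECTION_PATTERNS if pat.search(text)]
    findings += [f"exclusion:{term}" for term in EXCLUSION_LIST if term in text.lower()]
    return findings

def redact(text: str) -> str:
    """Replace detected PII with a short hash so the raw value never leaves the network."""
    for name, pat in PII_PATTERNS.items():
        text = pat.sub(
            lambda m: f"<{name}:{hashlib.sha256(m.group().encode()).hexdigest()[:8]}>",
            text,
        )
    return text

def guarded_call(prompt: str, model_fn) -> str:
    """Apply the same checks on the way into and out of the foundation model."""
    if scan(prompt):
        prompt = redact(prompt)  # or block outright, depending on policy
    response = model_fn(prompt)
    return "[blocked by output filter]" if scan(response) else response
```

The same scan routine runs on both the prompt and the response, mirroring the requirement that checks apply both before the foundation model sees the input and before the output reaches the user.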
In the realm of generative AI, it is imperative that companies collaborate with service providers possessing specialized expertise. Given the unique challenges posed by generative AI, from creating content compliant with regulations to ensuring the generated data respects privacy standards, partnering with knowledgeable providers is crucial. Such experts can navigate the intricacies of data privacy and content moderation specific to generative AI, ensuring regulatory compliance and ethical content generation.
By Chandrika Dutt, Research Leader, Avasant and Abhisekh Satapathy, Senior Analyst, Avasant
Avasant’s research and other publications are based on information from the best available sources and Avasant’s independent assessment and analysis at the time of publication. Avasant takes no responsibility and assumes no liability for any error/omission or the accuracy of information contained in its research publications. Avasant does not endorse any provider, product or service described in its RadarView™ publications or any other research publications that it makes available to its users, and does not advise users to select only those providers recognized in these publications. Avasant disclaims all warranties, expressed or implied, including any warranties of merchantability or fitness for a particular purpose. None of the graphics, descriptions, research, excerpts, samples or any other content provided in the report(s) or any of its research publications may be reprinted, reproduced, redistributed or used for any external commercial purpose without prior permission from Avasant, LLC. All rights are reserved by Avasant, LLC.