Fortifying Generative AI: Microsoft’s Vigilance on Security, Trust, and Governance
Generative AI has seized the imagination of users across industries. What sets it apart is its capacity to democratize AI, bridging the gap between technical and non-technical users in understanding its applications. Its utilization is poised for exponential growth as more use cases are explored.
However, like any digital technology, generative AI adoption is a double-edged sword, offering great potential for innovation and creativity while raising significant concerns regarding security, trust, and governance. The expanding risk landscape becomes apparent as enterprises increasingly experiment with generative AI solutions to optimize their operations.
The dichotomy is that, with an ever-evolving threat landscape, CIOs and CISOs recognize that traditional security measures alone cannot keep pace with emerging risks. Consequently, there is a growing realization that certain decision-making responsibilities must be entrusted to AI and generative AI models.
Yet, this transition has its complications. Generative AI inadvertently lowers the barrier to entry for cybercriminals, opening avenues for sophisticated cyberattacks, including business email compromise (BEC) phishing campaigns, exploitation of zero-day vulnerabilities, probing of critical infrastructure, and the proliferation of malware.
Interestingly, our recent paper, “Generative AI-Based Security Tools are, in Effect, Fighting Fire With Fire,” underscores the need for a nuanced approach in deploying generative AI for cybersecurity purposes. While generative AI holds immense promise in bolstering cybersecurity defense, its deployment requires careful consideration of the associated risks and proactive mitigation strategies.
At the Microsoft AI Tour Mumbai event on January 31, 2024, Vasu Jakkal, corporate vice president of Microsoft Security, Compliance, Identity & Privacy, took the stage following Microsoft President India and South Asia Puneet Chandok’s keynote address, emphasizing Microsoft’s commitment to addressing critical concerns surrounding security, trust, and governance in generative AI.
Over the past year, there have been significant investments in large language models (LLMs) for security applications, both by leading cloud service providers and managed security service providers. While players like Google and Microsoft have launched new generative AI security products, IBM, CrowdStrike, and Tenable have integrated generative AI features into their existing products. Though the products differ in their capabilities, the common focal point is automating threat hunting and prioritizing breach alerts.
At the event, Vasu highlighted how Microsoft Security Copilot, an AI assistant-driven security platform powered by GPT-4, leverages Microsoft’s extensive security library of solutions, such as Microsoft Defender and Sentinel, and proprietary data to enhance threat detection speed and identify vulnerabilities, addressing process gaps overlooked by other methods. Additionally, it summarizes security breach details based on prompts.
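The pattern described above, summarizing breach details from a prompt, can be illustrated with a minimal, hypothetical sketch. This is not the Security Copilot API; the alert fields and the prompt wording are assumptions for illustration, and the model call itself is left as a stand-in comment.

```python
# Hypothetical sketch: flattening structured breach alerts into a single
# summarization prompt for an LLM. Field names ("severity", "source",
# "detail") are illustrative assumptions, not a real product schema.

def build_breach_summary_prompt(alerts: list[dict]) -> str:
    """Render alert records as bullet lines inside a summarization prompt."""
    lines = [
        f"- [{a['severity'].upper()}] {a['source']}: {a['detail']}"
        for a in alerts
    ]
    return (
        "Summarize the following security alerts, highlighting the most "
        "severe items first:\n" + "\n".join(lines)
    )

alerts = [
    {"severity": "high", "source": "Defender",
     "detail": "Credential dumping on host SRV-01"},
    {"severity": "low", "source": "Sentinel",
     "detail": "Unusual sign-in location for user jdoe"},
]
prompt = build_breach_summary_prompt(alerts)
# The prompt would then be sent to the model,
# e.g. response = call_llm(prompt)  # call_llm is a stand-in, not a real API
```

The value of this pattern is that the LLM receives consistent, structured context rather than raw logs, which is what lets it produce a prioritized narrative summary.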
Scott Woodgate, senior director of Microsoft Security, took this thought forward, highlighting key aspects of effectively addressing AI-specific risks and ensuring the secure development of AI applications.
Clearly, governance stands as the cornerstone of these efforts, ensuring that AI applications are developed and deployed in a secure and compliant manner.
The strategy outlined by Vasu and Scott for generative AI-based security solutions resonates with prevailing industry trends in adopting security solutions aligned with Zero Trust principles. These principles are foundational in IT infrastructure modernization and enable enterprises to treat all traffic as a potential threat and enforce the principle of least privilege (POLP).
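The two Zero Trust tenets named above can be sketched as a deny-by-default authorization check. This is a minimal illustration, not a real policy engine: the user, resource, and permission names are assumptions, and device posture is reduced to a single boolean for brevity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    device_compliant: bool   # e.g., a patched, managed endpoint
    resource: str
    action: str              # "read", "write", ...

# Least privilege: an explicit allow-list, so each user holds only the
# minimal permissions needed. (Illustrative entries, not real policy.)
LEAST_PRIVILEGE_GRANTS = {
    ("alice", "billing-db", "read"),
}

def authorize(req: AccessRequest) -> bool:
    """Treat every request as untrusted: deny by default, verify device
    posture first, then allow only explicitly granted (user, resource,
    action) combinations."""
    if not req.device_compliant:
        return False
    return (req.user, req.resource, req.action) in LEAST_PRIVILEGE_GRANTS

print(authorize(AccessRequest("alice", True, "billing-db", "read")))   # True
print(authorize(AccessRequest("alice", True, "billing-db", "write")))  # False
```

The design choice worth noting is the explicit allow-list: anything not granted is denied, which is the inverse of perimeter-based models that implicitly trust internal traffic.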
In our Avasant Cybersecurity Services 2023 RadarView, published in July 2023, we highlighted three key considerations for enterprise customers adopting a zero-trust strategy:

The figure above illustrates the percentage of enterprises prioritizing these three considerations, emphasizing the importance of taking a holistic view of the Zero Trust framework across various attack vectors: users, devices, data, network, infrastructure, and applications.
While generative AI offers promising capabilities, it supplements existing security measures rather than replacing them; until generative AI security solutions mature sufficiently in the market, they should be treated as a complement, not a substitute.
As enterprises scout for generative AI security solutions, it’s important to recognize that LLM-based systems are not a panacea. Like any technology, they come with their own set of trade-offs. With regulatory landscapes constantly evolving, staying abreast of technological advancements is more critical than ever. Despite the risks, organizations must embrace these advancements and invest in advanced security solutions tailored to safeguard AI systems.
By Gaurav Dewan, Research Director, Avasant
Avasant’s research and other publications are based on information from the best available sources and Avasant’s independent assessment and analysis at the time of publication. Avasant takes no responsibility and assumes no liability for any error/omission or the accuracy of information contained in its research publications. Avasant does not endorse any provider, product or service described in its RadarView™ publications or any other research publications that it makes available to its users, and does not advise users to select only those providers recognized in these publications. Avasant disclaims all warranties, expressed or implied, including any warranties of merchantability or fitness for a particular purpose. None of the graphics, descriptions, research, excerpts, samples or any other content provided in the report(s) or any of its research publications may be reprinted, reproduced, redistributed or used for any external commercial purpose without prior permission from Avasant, LLC. All rights are reserved by Avasant, LLC.