AI has become the defining force for digital transformation. As Jon Ramsey, vice president (VP) and general manager (GM) of Google Cloud Security, highlighted at the Google Cloud Security Summit 2025, 91% of organizations have already launched AI initiatives, ranging from supply chain optimization to customer experience innovation. The consensus across industries is clear: AI’s transformative potential is undeniable. However, scaling these initiatives from pilot projects to enterprise-wide adoption is proving difficult. The single biggest hurdle is security—the ability to safeguard sensitive data, prevent adversarial misuse of AI, and build trust with regulators and customers.
The Security Imperative for AI
According to Avasant’s The Evolution of Synchronous AI Agents report, 65% of organizations cite data security and privacy safeguards as the top barrier to scaling agentic AI initiatives (see Figure 1 below). As these AI agents handle sensitive data, organizations prioritize solutions with strong privacy protections to mitigate risks. Legal and regulatory compliance for autonomous agents follows closely, with 58% of organizations expressing concern about the absence of a clear legal framework.

Figure 1: Factors influencing an organization’s decision to invest in AI agents
While AI promises unparalleled opportunities, it simultaneously creates new vulnerabilities and challenges for security leaders:
- Dual challenge: As organizations adopt LLMs, copilots, and agentic AI tools, they face a rapidly expanding attack surface with new risks such as shadow AI, misconfigurations, and autonomous agents acting as non-human identities operating with privileged access. Security teams must leverage AI to strengthen their defense capabilities while also protecting AI systems themselves from misuse, adversarial manipulation, and supply chain risks.
- Operational strain: Security operations centers (SOCs) continue to struggle with alert fatigue, talent shortages, and fragmented tools. Avasant Cybersecurity Services 2024 Market Insights™ shows 67% of SOC teams report being overwhelmed by the sheer volume of security alerts generated by multiple point products, which, in turn, leads to false positives and prioritization challenges.
- Trust and compliance: Enterprises face rising regulatory pressure around data sovereignty and AI governance. Avasant’s Responsible AI Platforms 2025 Market Insights™ notes that over 1,000 AI-related regulations have been enacted across 69 countries, with the US alone introducing 59 in 2024, marking a 2.5x increase from 2023. These developments highlight that verifiable trust and explainability will soon be non-negotiable.
Together, these factors make organizations hesitant to scale AI adoption, despite its strategic business value. So, the question facing today’s enterprises is: How can they secure their AI-driven digital future while building resilience and operational efficiency?
The path forward is adopting a “Secure by Design” AI strategy that involves embedding security early and comprehensively across the AI life cycle. This approach rests on two key pillars:
- AI for Security: Redefining SOC operations with agentic AI to give defenders a sustainable advantage.
- Security for AI: Protecting AI ecosystems, agents, and models with built-in guardrails and supply chain safeguards.
AI-driven Transformation for SOC Security
Google positioned its Agentic SOC vision as the future of security operations. Payal Chakravarty, director of Product Management for Google SecOps, explained that AI agents function as “mini security analysts,” performing tasks such as continuous alert triage, anomaly investigation, new detection generation, and threat hunting. Instead of replacing humans, these AI agents augment their work by taking on the repetitive, data-heavy tasks that drain analyst productivity.
The summit showcased compelling examples:
- Triage and investigation agents that analyze alerts, collect evidence, and provide verdicts with confidence scores, reducing the mean time to investigate.
- Detection engineering agents that automatically fine-tune rules and identify coverage gaps.
- Threat-hunting agents that continuously hunt for emerging threats and surface their findings in reports.
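To make the triage pattern above concrete, here is a minimal sketch in Python of a triage agent’s core loop: enrich an alert with context, score it, and emit a verdict with a confidence value. All names here (Alert, triage, the scoring heuristics) are illustrative assumptions, not Google SecOps APIs.

```python
from dataclasses import dataclass

# Illustrative only: a toy triage loop, not a Google SecOps API.
@dataclass
class Alert:
    source: str                # e.g., "edr", "email-gateway"
    severity: int              # 1 (low) .. 5 (critical)
    asset_criticality: int     # 1 .. 5, e.g., from a CMDB
    known_bad_indicator: bool  # matched threat intelligence?

def triage(alert: Alert) -> tuple[str, float]:
    """Return a (verdict, confidence) pair for a single alert."""
    score = alert.severity * alert.asset_criticality
    if alert.known_bad_indicator:
        score += 10
    # Map the raw score onto a verdict with a rough confidence value.
    if score >= 20:
        return "malicious", min(0.5 + score / 50, 0.99)
    if score >= 8:
        return "suspicious", 0.6
    return "benign", 0.8

if __name__ == "__main__":
    alert = Alert(source="edr", severity=4, asset_criticality=5,
                  known_bad_indicator=True)
    verdict, confidence = triage(alert)
    print(f"verdict={verdict} confidence={confidence:.2f}")
```

In production, the scoring step would be an LLM- or ML-backed judgment over far richer evidence; the point of the sketch is the shape of the loop: evidence in, verdict and confidence out.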
Vodafone’s Cybersecurity Technology Strategy and IT Architecture Director Emma Smith shared how her team consolidated global network data with Google Cloud Security Operations (SecOps) to improve monitoring at scale. By unifying visibility across multicloud and on-premises systems, Vodafone applied AI/ML to both security events and posture management. With the EPIC data lake on GCP integrated into SecOps, analysts gained a single platform that simplified workflows, improved efficiency, and enabled faster response to threats.
Similarly, Hector Peña, senior director of Information Security at Apex Fintech Solutions, highlighted how Google SecOps with Gemini has significantly improved efficiency in threat detection and response. Tasks like writing regular expressions, which once took up to an hour, are now completed in seconds, reducing investigation time from several hours to under 30 minutes. With Unified Data Model (UDM) logging simplifying ingestion and analysis, analysts can shift their focus from repetitive tasks to more advanced security workflows.
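The UDM point is worth illustrating. The sketch below shows the kind of regex-driven normalization involved: a raw SSH log line is mapped into a UDM-like nested structure. The field layout (metadata, principal, target) mirrors UDM conventions, but the parser and values are illustrative assumptions, not Google’s actual ingestion pipeline.

```python
import re

# Illustrative parser: maps a raw SSH log line into a UDM-like structure.
# The nested field names echo UDM conventions (metadata, principal, target),
# but this is a sketch, not Google's ingestion pipeline.
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) sshd\[\d+\]: Failed password for (?P<user>\S+) "
    r"from (?P<src_ip>\d{1,3}(?:\.\d{1,3}){3})"
)

def to_udm_like(raw: str) -> dict | None:
    match = LOG_PATTERN.search(raw)
    if not match:
        return None
    return {
        "metadata": {"event_type": "USER_LOGIN", "event_timestamp": match["ts"]},
        "principal": {"ip": match["src_ip"]},
        "target": {"user": {"userid": match["user"]}},
        "security_result": {"action": "BLOCK"},
    }

if __name__ == "__main__":
    line = "2025-08-20T10:15:02Z sshd[4242]: Failed password for admin from 203.0.113.7"
    print(to_udm_like(line))
```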
While these outcomes are impressive, they are not exclusive to Google. Competitors such as Microsoft and CrowdStrike are also integrating AI-driven automation into SOC platforms:
- Microsoft’s Copilot for Security: Provides real-time recommendations for incident investigation, triaging, and remediation.
- CrowdStrike’s Charlotte AI: Enables analysts to investigate threats using natural language queries, cross-reference threat intelligence, and autonomously answer investigative questions. It recently introduced AI for SecOps readiness assessments across detection, investigation, and response.
Google’s differentiator lies in combining consumer-scale telemetry (billions of endpoints across Gmail, YouTube, and Chrome) with Mandiant’s incident response expertise, creating a unique flywheel of data and frontline intelligence.
After decades of reactive defense, AI is finally giving the defender an advantage. However, enterprises should evaluate ROI critically, as outcomes depend on integration maturity, analyst training, and data quality. The agentic SOC is not a silver bullet, but it offers a credible path toward operational resilience.
Securing AI Ecosystems
The second pillar focuses on securing AI itself. As organizations adopt generative AI for customer experience, employee productivity, and software development, the attack surface expands dramatically. Non-human identities—AI agents, workloads, and service accounts—already outnumber humans 45:1 in enterprise environments, creating systemic risk if mismanaged.
Naveed Makhani, product lead for AI Security Products at Google, highlighted that autonomous agents introduce new risks, including:
- Indirect prompt injection: Attacks where malicious instructions are embedded in documents. An agent that reads such a document while trying to be helpful can be tricked into acting on those hidden instructions and exposing data at machine speed.
- Tool poisoning: An attacker compromises a legitimate tool in the agent’s toolchain. For instance, an agent asked to create an opportunity in a CRM tool may instead send confidential customer data to an attacker-controlled site.
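A common mitigation for both risks is a policy layer that vets every tool call an agent attempts before it executes. The sketch below shows a minimal allowlist-and-DLP guard in Python; the domains, markers, and function names are illustrative assumptions, not a specific vendor control.

```python
from urllib.parse import urlparse

# Illustrative guard: vet an agent's outbound tool calls before execution.
ALLOWED_DOMAINS = {"crm.example.com", "api.example.com"}  # assumed allowlist
SENSITIVE_MARKERS = ("ssn", "card_number", "password")    # assumed DLP markers

def vet_tool_call(tool_name: str, url: str, payload: dict) -> None:
    """Raise if the call targets an unknown domain or carries sensitive fields."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_DOMAINS:
        raise PermissionError(f"{tool_name}: blocked unknown destination {host}")
    leaked = [key for key in payload if key.lower() in SENSITIVE_MARKERS]
    if leaked:
        raise PermissionError(f"{tool_name}: payload contains sensitive fields {leaked}")

if __name__ == "__main__":
    # A poisoned "create opportunity" call trying to exfiltrate data.
    try:
        vet_tool_call("crm.create_opportunity",
                      "https://evil.example.net/collect",
                      {"account": "Acme", "card_number": "4111..."})
    except PermissionError as err:
        print("blocked:", err)
```

The design choice is to fail closed: an agent may be tricked into requesting an exfiltration call, but the guard, not the model, decides whether the call runs.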
Addressing these interconnected risks requires comprehensive AI protection across the enterprise. Google’s security-first initiatives include:
- AI Protection: Introduced in March 2025, this solution safeguards the entire AI life cycle. It helps organizations discover AI assets, assess risks, and unify monitoring through the Security Command Center. By combining Google and Mandiant threat intelligence, AI Protection enables visibility into vulnerabilities, runtime threats, and misconfigurations across cloud and on-premises environments, ensuring responsible and secure AI adoption.
- Model Armor: A core capability within Google Cloud’s AI Protection suite that safeguards generative AI models against prompt injections, jailbreaks, sensitive data leaks, and other emerging threats. It provides runtime protection across clouds and integrates natively with security operations to help organizations secure AI usage at scale (a conceptual sketch of this screening pattern follows this list).
Additionally, Google announced the extension of Model Armor to Agentspace, its platform for deploying AI agents that employees can use directly across business workflows, and to Vertex AI, its unified platform for building generative AI models.
- Model Context Protocol (MCP) Ecosystem: Google is advancing an open security ecosystem by adopting the Model Context Protocol, a standard that enables LLMs to securely connect with external tools and data. Through partnerships with Wiz, CrowdStrike, and others, MCP extends AI-driven security orchestration across best-of-breed platforms, ensuring interoperability without compromising enterprise controls.
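As referenced above, the screening pattern behind a capability like Model Armor can be sketched as pre- and post-filters wrapped around any model call. The patterns and function names below are illustrative assumptions, not Google Cloud’s actual API.

```python
import re

# Conceptual sketch of runtime prompt/response screening in the style of
# Model Armor. Patterns and names are illustrative, not Google Cloud's API.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy SSN detector

def screen_prompt(prompt: str) -> str:
    """Block prompts that match known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt blocked: possible injection attempt")
    return prompt

def screen_response(response: str) -> str:
    """Redact sensitive data before it leaves the model boundary."""
    return PII_PATTERN.sub("[REDACTED]", response)

def guarded_generate(prompt: str, model_call) -> str:
    """Wrap any model call with pre- and post-screening."""
    return screen_response(model_call(screen_prompt(prompt)))

if __name__ == "__main__":
    def fake_model(p: str) -> str:
        return f"echo: {p} (customer SSN 123-45-6789)"
    print(guarded_generate("Summarize this quarter's pipeline", fake_model))
```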
Snap’s Head of Infrastructure Security, Shrikant Pandhare, shared how the company is adopting AI Protection to gain visibility into its AI environment through automated asset discovery and risk scoring. Snap also found Model Armor effective in defending against jailbreak attempts, protecting sensitive data on third-party LLMs, and detecting abusive language, strengthening its secure AI adoption.
Industry peers are also advancing AI security. Recently, SentinelOne acquired Prompt Security to enhance its AI-native Singularity™ Platform with real-time generative AI and agentic AI protection. The integration will give IT and security teams visibility into AI usage, enforce policy-based controls against prompt injection and data leakage, and offer model-agnostic coverage across all major LLMs via the MCP.
Similarly, CyberArk has launched a Secure AI Agents solution that extends its security controls to autonomous AI agents. The approach treats each agent as a privileged identity, applying access restrictions, credential management, and continuous monitoring to reduce risks such as unauthorized actions, misuse, or data exposure in agentic AI environments.
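The “agent as privileged identity” idea can be illustrated with a short-lived, least-privilege credential issuer. The sketch below is a generic pattern under assumed names, not CyberArk’s product API.

```python
import secrets
import time
from dataclasses import dataclass, field

# Generic sketch: issue short-lived, narrowly scoped credentials to an AI
# agent, mirroring the "agent as privileged identity" approach. Names and
# scopes are illustrative, not a CyberArk API.
@dataclass
class AgentCredential:
    agent_id: str
    scopes: frozenset[str]
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

    def allows(self, scope: str) -> bool:
        """A scope is usable only if granted and the token is unexpired."""
        return scope in self.scopes and time.time() < self.expires_at

def issue_credential(agent_id: str, scopes: set[str],
                     ttl_seconds: int = 300) -> AgentCredential:
    """Grant only the requested scopes, valid for a short TTL."""
    return AgentCredential(agent_id=agent_id, scopes=frozenset(scopes),
                           expires_at=time.time() + ttl_seconds)

if __name__ == "__main__":
    cred = issue_credential("crm-agent-01", {"crm:read"})
    print(cred.allows("crm:read"))   # True while the token is fresh
    print(cred.allows("crm:write"))  # False: scope was never granted
```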
AI cannot scale safely without security built in at every layer—from training data and model deployment to agent-to-agent interactions. Google’s proposals are strong, but success depends on execution, ecosystem adoption, and regulatory alignment. Securing AI is no longer optional; it is a prerequisite for trust, compliance, and business resilience.
Securing the Future: AI and Security in Tandem
The Google Cloud Security Summit 2025 underscored a clear reality: enterprises cannot fully realize AI’s potential without advancing their security posture in parallel. Google leaders emphasized that security must evolve alongside AI to unlock scalable, resilient, and trusted adoption.
Key insights include:
- AI for Security: Agentic SOCs provide a credible path to scale, efficiency, and faster threat response. Their success, however, hinges on how well organizations integrate these capabilities into existing workflows and train analysts to leverage AI effectively.
- Security for AI: Protecting models, agents, and AI ecosystems is essential as adversaries innovate. Emerging standards like MCP will be critical in enabling interoperability while avoiding vendor lock-in.
The future of enterprise AI is “Secure by Design.” Embedding security throughout the AI life cycle will help organizations address threats and foster innovation with confidence. While Google has articulated a compelling vision, enterprises must validate these claims with measurable outcomes, customer experiences, and ecosystem alignment before committing at scale.
By Gaurav Dewan, Research Director
