Securing the Digital Frontier: Protecting Enterprises in a Hyperconnected World

August 2025

At Avasant’s flagship Empowering Beyond Summit 2025, a dynamic panel discussion brought together cybersecurity leaders from academia, healthcare, government, and global technology firms to explore how enterprises can responsibly adopt artificial intelligence at scale. Moderated by Jay Weinstein, Distinguished Fellow at Avasant, the panel featured George Mansoor (Chief Security Officer, California State University), Alexandra Landeggar (Global Head of Cyber Strategy & Transformation, RTX), Manju Naglapur (SVP & General Manager, Unisys), and John Caruthers (Lead Director, BISO, CVS Health). Together, they unpacked the strategic and technical imperatives shaping AI adoption, ranging from governance frameworks and privacy safeguards to post-quantum preparedness and the expanding role of cybersecurity in enabling innovation. As regulatory pressures mount and AI becomes more embedded in critical systems, cybersecurity is increasingly recognized not just as a defense mechanism but as a foundational pillar for responsible, scalable transformation.

From Hype to Implementation: The Governance Imperative

In today’s hyperconnected economy, where digital ecosystems underpin every facet of enterprise operations, the convergence of cybersecurity and artificial intelligence (AI) has emerged as a defining frontier. The panel agreed that AI is no longer a futuristic experiment. It is embedded in every layer of the modern enterprise, from productivity tools to cloud security stacks. Tools like Microsoft Copilot and Google Workspace AI are already transforming daily workflows across industries, signaling how deeply integrated AI has become in enterprise environments. 

As Jay emphasized, the conversation must move beyond “AI as tool” to recognizing AI as a strategic enabler and a potential risk vector. The need for internal governance models was echoed throughout, especially as enterprises evolve from isolated proofs of concept (POCs) to large-scale operational deployments.

Manju, representing the system integrator perspective, illustrated this shift: “At the POC stage, AI looks like magic—you get one or two great use cases working with minimal clicks. But once you go to scale, the need for a private AI framework becomes apparent.” Sovereign clouds, AI factories, and internal compliance protocols are emerging in response. These offer greater control over proprietary data, training models, and regulatory exposure. This push for enterprise-grade AI infrastructure highlights the broader need for adaptable governance models across all sectors, including those traditionally more open to experimentation. 

Academic Institutions: Open Yet Guarded

George offered a unique perspective from the university environment where openness and experimentation are cultural norms. “We know our students and faculty will use these tools with or without us,” he said. “So it’s on us to provide the tools and safe guardrails.” His institution is currently exploring use cases like student advising and counseling, but with intentional restraint. 

He highlighted the ethical tension that arises when AI tools begin to influence deeply personal or sensitive areas, such as student support and counseling. Rather than restrict access, his institution is focused on secure enablement—building privacy-conscious frameworks that allow for innovation while maintaining trust. In academic environments where experimentation is encouraged and users tend to be both early adopters and critical thinkers, this measured approach helps balance openness with accountability. 

Data Classification and Model Validation: Key Security Priorities

A recurring challenge discussed was the classification of enterprise data—not just knowing what data exists, but understanding which datasets are sensitive, regulated, or appropriate for training large models. “We’re being asked to negotiate worst-case scenarios,” Manju explained, “with cyber insurance and liability clauses now baked into AI and cloud contracts.” 
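To make the classification challenge concrete, here is a minimal sketch of how an enterprise might tag datasets by sensitivity before approving them for model training. The tiers, rules, and dataset names are illustrative assumptions, not practices described by the panel.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1      # safe for broad use, including external model training
    INTERNAL = 2    # business data; private or in-house models only
    REGULATED = 3   # PII or contract-restricted; excluded from training

@dataclass
class Dataset:
    name: str
    contains_pii: bool = False          # hypothetical flags for illustration
    contract_restricted: bool = False
    approved_for_release: bool = False

def classify(ds: Dataset) -> Sensitivity:
    """Apply simple, auditable rules to assign a sensitivity tier."""
    if ds.contains_pii or ds.contract_restricted:
        return Sensitivity.REGULATED
    if ds.approved_for_release:
        return Sensitivity.PUBLIC
    return Sensitivity.INTERNAL

def eligible_for_training(ds: Dataset) -> bool:
    """Only non-regulated data may feed model training pipelines."""
    return classify(ds) is not Sensitivity.REGULATED

inventory = [
    Dataset("marketing_site_copy", approved_for_release=True),
    Dataset("support_tickets", contains_pii=True),
    Dataset("internal_wiki"),
]
for ds in inventory:
    print(ds.name, classify(ds).name, "train:", eligible_for_training(ds))
```

In practice, the rules would be far richer, but even a gate this simple makes the liability questions Manju describes tractable: every dataset feeding a model has a documented, reviewable classification.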

Model validation is also rising to the forefront. It is no longer enough to deploy high-performing models; enterprises must certify that models are explainable, compliant, and trustworthy before they go into production. Whether using in-house models or third-party services like Microsoft Copilot, organizations are under pressure to prove that AI outputs are reliable, unbiased, and secure. These growing demands around AI governance are unfolding alongside another major shift in enterprise security: the rise of quantum computing.
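As one illustration of such a pre-production gate, the sketch below checks a candidate model against overall accuracy, a simple group-parity proxy for bias, and the presence of compliance documentation. The metric names and thresholds are assumptions for illustration, not standards cited by the panel.

```python
from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float
    group_accuracy: dict[str, float]  # per-group accuracy, a crude bias proxy
    has_model_card: bool              # explainability/compliance documentation

def approve_for_production(report: ValidationReport,
                           min_accuracy: float = 0.90,
                           max_group_gap: float = 0.05) -> bool:
    """Gate deployment on overall quality, parity across groups,
    and the presence of compliance documentation."""
    if not report.has_model_card:
        return False
    if report.accuracy < min_accuracy:
        return False
    gap = (max(report.group_accuracy.values())
           - min(report.group_accuracy.values()))
    return gap <= max_group_gap

report = ValidationReport(
    accuracy=0.93,
    group_accuracy={"group_a": 0.94, "group_b": 0.91},
    has_model_card=True,
)
print("approved:", approve_for_production(report))
```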

Quantum Computing and Post-Quantum Cryptography (PQC) Readiness

While AI took center stage, the panel also explored quantum computing and post-quantum cryptography (PQC), fields that will soon redefine enterprise security. Alexandra suggested these be top priorities on future CISO roadmaps as encryption standards evolve and quantum risks shift from theoretical to practical.

Manju added that conversations are already taking place around PQC, even in consulting contracts. She noted the importance of security “from firmware to software,” as enterprises prepare for new chip architectures, cryptographic standards, and state-level regulations that demand greater technical rigor. 
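One transition pattern consistent with that “firmware to software” rigor is hybrid key derivation, in which a session key depends on both a classical exchange and a post-quantum shared secret, so the key stays safe as long as either algorithm remains unbroken. The minimal sketch below uses the pyca/cryptography library for the classical X25519 exchange; the post-quantum secret is a random placeholder, since standardized ML-KEM bindings vary by library and the panel did not name a specific one.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical ECDH exchange (X25519), as widely deployed today.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Placeholder for a post-quantum KEM shared secret (e.g., ML-KEM).
# A real deployment would obtain this from a PQC library; random
# bytes stand in here purely to show the shape of the derivation.
pq_secret = os.urandom(32)

# Hybrid KDF: the session key depends on BOTH secrets, so breaking
# only one of the two algorithms does not expose the key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-pqc-demo",
).derive(classical_secret + pq_secret)

print("session key:", session_key.hex())
```

The design point is crypto-agility: because the derivation treats algorithms as swappable inputs, an enterprise can phase in new cryptographic standards without rearchitecting the systems built on top of them.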

CISO Mindset Shift: From Gatekeeper to Business Enabler

Perhaps the most transformative insight came in the redefinition of the cybersecurity role. “CISOs and cybersecurity teams should not be saying ‘no’ to the business,” Jay said. Instead, they must become enablers of innovation, embedding themselves in AI conversations early to provide frameworks that support—not stifle—digital transformation. 

This includes pushing vendors for secure AI features, ensuring data protection frameworks are in place, and supporting business units in AI adoption decisions. “Sometimes, the simplest answer is to bring AI in-house,” Jay added, “especially when tools like Copilot come with prebuilt security. But even then, it’s our job to make sure the guardrails are up, and the business is supported.” 

Key Takeaways

    • Cybersecurity leaders must act as strategic partners, enabling safe and responsible innovation across the enterprise. 
    • Private AI and sovereign clouds are gaining traction as enterprises seek to scale AI securely and compliantly. 
    • Data governance, classification, and model validation are now board-level concerns in AI adoption. 
    • Academic institutions are balancing open experimentation with frameworks that ensure ethical, secure use of AI tools. 
    • Quantum readiness and contractual resilience are future-proofing priorities, with PQC and cyber liability clauses becoming standard. 

Conclusion

As enterprises navigate the fast-evolving AI landscape, the conversation has shifted from whether AI should be adopted to how it can be implemented responsibly, ethically, and securely. The organizations that emerge as leaders will be those that invest in strong governance frameworks and proactive security strategies while fostering seamless collaboration between technology, business, and compliance teams.

In his closing remarks, Jay Weinstein emphasized a critical shift: “CISOs and cybersecurity teams should not be saying no to the business. We need to securely enable the business moving forward.” This underscores the evolving role of cybersecurity, not solely as a safeguard but as a strategic enabler of innovation.

The Empowering Beyond Summit reinforced that AI is no longer a distant possibility, but an integral part of the enterprise. Today’s leaders must shape its development with clarity, confidence, and integrity, ensuring AI drives meaningful transformation.


By Faith Persad, Intern