Securing the Digital Frontier: Protecting Enterprises in a Hyperconnected World
At Avasant’s flagship Empowering Beyond Summit 2025, a dynamic panel discussion brought together cybersecurity leaders from academia, healthcare, government, and global technology firms to explore how enterprises can responsibly adopt artificial intelligence at scale. Moderated by Jay Weinstein, Distinguished Fellow at Avasant, the panel featured George Mansoor (Chief Security Officer, California State University), Alexandra Landeggar (Global Head of Cyber Strategy & Transformation, RTX), Manju Naglapur (SVP & General Manager, Unisys), and John Caruthers (Lead Director, BISO, CVS Health). Together, they unpacked the strategic and technical imperatives shaping AI adoption, ranging from governance frameworks and privacy safeguards to post-quantum preparedness and the expanding role of cybersecurity in enabling innovation. As regulatory pressures mount and AI becomes more embedded in critical systems, cybersecurity is increasingly recognized not just as a defense mechanism, but as a foundational pillar for responsible, scalable transformation.
In today’s hyperconnected economy, where digital ecosystems underpin every facet of enterprise operations, the convergence of cybersecurity and artificial intelligence (AI) has emerged as a defining frontier. The panel agreed that AI is no longer a futuristic experiment. It is embedded in every layer of the modern enterprise, from productivity tools to cloud security stacks. Tools like Microsoft Copilot and Google Workspace AI are already transforming daily workflows across industries, signaling how deeply integrated AI has become in enterprise environments.
As Jay emphasized, the conversation must move beyond “AI as tool” to recognizing AI as a strategic enabler—and a potential risk vector. The need for internal governance models was echoed throughout, especially as enterprises evolve from isolated proofs of concept (POCs) to large-scale operational deployments.
Manju, representing the system integrator perspective, illustrated this shift: “At the POC stage, AI looks like magic—you get one or two great use cases working with minimal clicks. But once you go to scale, the need for a private AI framework becomes apparent.” Sovereign clouds, AI factories, and internal compliance protocols are emerging in response. These offer greater control over proprietary data, training models, and regulatory exposure. This push for enterprise-grade AI infrastructure highlights the broader need for adaptable governance models across all sectors, including those traditionally more open to experimentation.
George offered a unique perspective from the university environment, where openness and experimentation are cultural norms. “We know our students and faculty will use these tools with or without us,” he said. “So it’s on us to provide the tools and safe guardrails.” His institution is currently exploring use cases like student advising and counseling, but with intentional restraint.
He highlighted the ethical tension that arises when AI tools begin to influence deeply personal or sensitive areas, such as student support and counseling. Rather than restrict access, his institution is focused on secure enablement—building privacy-conscious frameworks that allow for innovation while maintaining trust. In academic environments where experimentation is encouraged and users tend to be both early adopters and critical thinkers, this measured approach helps balance openness with accountability.
A recurring challenge discussed was the classification of enterprise data—not just knowing what data exists, but understanding which datasets are sensitive, regulated, or appropriate for training large models. “We’re being asked to negotiate worst-case scenarios,” Manju explained, “with cyber insurance and liability clauses now baked into AI and cloud contracts.”
Model validation is also rising to the forefront. It is no longer enough to deploy high-performing models; enterprises must certify that models are explainable, compliant, and trustworthy before going into production. Whether using in-house models or third-party services like Microsoft Copilot, organizations are under pressure to prove that AI outputs are reliable, unbiased, and secure. These growing demands around AI governance are unfolding alongside another major shift in enterprise security: the rise of quantum computing.
While AI took center stage, the panel also explored quantum computing and post-quantum cryptography (PQC), fields that will soon redefine enterprise security. Alexandra suggested these be top priorities for future CISO roadmaps, as encryption standards evolve and quantum risks shift from theoretical to practical.
Manju added that conversations are already taking place around PQC, even in consulting contracts. She noted the importance of security “from firmware to software,” as enterprises prepare for new chip architectures, cryptographic standards, and state-level regulations that demand greater technical rigor.
Perhaps the most transformative insight came in the redefinition of the cybersecurity role. “CISOs and cybersecurity teams should not be saying ‘no’ to the business,” Jay said. Instead, they must become enablers of innovation, embedding themselves in AI conversations early to provide frameworks that support—not stifle—digital transformation.
This includes pushing vendors for secure AI features, ensuring data protection frameworks are in place, and supporting business units in AI adoption decisions. “Sometimes, the simplest answer is to bring AI in-house,” Jay added, “especially when tools like Copilot come with prebuilt security. But even then, it’s our job to make sure the guardrails are up, and the business is supported.”
As enterprises navigate the fast-evolving AI landscape, the conversation has shifted from whether AI should be adopted to how it can be implemented responsibly, ethically, and securely. The organizations that will emerge as leaders are those that invest in strong governance frameworks and proactive security strategies, and foster seamless collaboration between technology, business, and compliance teams.
In his closing remarks, Jay Weinstein emphasized a critical shift: “CISOs and cybersecurity teams should not be saying no to the business. We need to securely enable the business moving forward.” This underscores the evolving role of cybersecurity, not solely as a safeguard, but as a strategic enabler of innovation.
The Empowering Beyond Summit reinforced that AI is no longer a distant possibility, but an integral part of the enterprise. Today’s leaders must shape its development with clarity, confidence, and integrity, ensuring AI drives meaningful transformation.
By Faith Persad, Intern
Avasant’s research and other publications are based on information from the best available sources and Avasant’s independent assessment and analysis at the time of publication. Avasant takes no responsibility and assumes no liability for any error/omission or the accuracy of information contained in its research publications. Avasant does not endorse any provider, product or service described in its RadarView™ publications or any other research publications that it makes available to its users, and does not advise users to select only those providers recognized in these publications. Avasant disclaims all warranties, expressed or implied, including any warranties of merchantability or fitness for a particular purpose. None of the graphics, descriptions, research, excerpts, samples or any other content provided in the report(s) or any of its research publications may be reprinted, reproduced, redistributed or used for any external commercial purpose without prior permission from Avasant, LLC. All rights are reserved by Avasant, LLC.