The Open-Source Gambit: Did DeepSeek Ignite an AI Arms Race?
On January 20, 2025, DeepSeek stunned the industry with the release of R1, an open-source, cost-efficient AI model that rivaled OpenAI’s GPT-4 on multiple benchmarks. Much like ChatGPT’s viral debut in November 2022, DeepSeek-R1 quickly became a sensation, hailed by enterprises seeking affordable generative AI solutions and eyed by governments as a fast track to sovereign LLMs.
However, the euphoria was short-lived.
On January 29, 2025, cybersecurity researchers uncovered a massive data breach, exposing over one million sensitive records linked to DeepSeek’s AI platform. Almost simultaneously, leading risk and security firms, including Qualys, Inc. and Enkrypt AI, raised alarms over DeepSeek’s toxic content generation, vulnerability to jailbreak attacks, and deeply ingrained biases. What began as an AI revolution spiraled into a security nightmare, forcing global regulators to take action.
DeepSeek’s privacy policy states that all user data is stored in China, where stringent national security laws compel companies to share data with government agencies upon request. This revelation set off alarms across Europe, prompting privacy watchdogs in Ireland, France, Belgium, and the Netherlands to investigate DeepSeek’s data collection practices. Several national governments have already banned the application over security risks.

Despite DeepSeek’s open-source availability on platforms such as Hugging Face, most users interact with it through its mobile apps or web versions, exposing them to potential data surveillance risks. Most government agencies are expected to eventually restrict the use of all foreign AI models, whether ChatGPT or DeepSeek, for official purposes, driving a stronger reliance on sovereign AI models tailored to national security and regulatory requirements. In contrast, enterprises will retain the flexibility to adopt a best-of-breed AI approach, leveraging both homegrown and international models, as long as they comply with local data protection laws and industry regulations across their regions of operation. As governments and enterprises weigh the implications, one thing is certain: the AI battlefield is evolving, and DeepSeek has reshaped the global conversation on open-source innovation.
While governments impose restrictions on DeepSeek’s application layer, its core AI models remain open source, allowing enterprises to host them locally and sidestep data-residency concerns. Some have already seized the opportunity—India’s Yotta Data Services recently launched myShakti, the country’s first sovereign B2C generative AI chatbot, built on DeepSeek’s model but operating entirely on Indian servers. Similarly, ElevenLabs, a synthetic voice startup, has integrated DeepSeek-R1 into its products.
Until now, only two major AI players, Meta and Mistral, had made their AI models open source. But DeepSeek’s rise has reshaped the landscape, forcing a strategic rethink. Governments and tech companies worldwide are now accelerating efforts to build indigenous AI models tailored to local languages, cultural contexts, and national datasets, dismantling Silicon Valley’s monopoly over AI.
DeepSeek did not just disrupt the AI industry; it sparked a geopolitical AI arms race. The battle for AI sovereignty has begun. While multiple governments have banned DeepSeek over security concerns, its impact on AI’s trajectory is undeniable. Just as Sputnik’s launch in 1957 ignited the space race, DeepSeek has triggered a reckoning in AI strategy, forcing the world to reconsider the balance between accessibility, security, and sovereignty.
The emergence of lightweight open-source AI models such as DeepSeek presents a paradigm shift for enterprises. They offer unprecedented cost advantages and customization capabilities while introducing new security, compliance, and operational risks.
Key opportunities
Key risks
By Chandrika Dutt, Research Director
Avasant’s research and other publications are based on information from the best available sources and Avasant’s independent assessment and analysis at the time of publication. Avasant takes no responsibility and assumes no liability for any error/omission or the accuracy of information contained in its research publications. Avasant does not endorse any provider, product or service described in its RadarView™ publications or any other research publications that it makes available to its users, and does not advise users to select only those providers recognized in these publications. Avasant disclaims all warranties, expressed or implied, including any warranties of merchantability or fitness for a particular purpose. None of the graphics, descriptions, research, excerpts, samples or any other content provided in the report(s) or any of its research publications may be reprinted, reproduced, redistributed or used for any external commercial purpose without prior permission from Avasant, LLC. All rights are reserved by Avasant, LLC.