China's burgeoning artificial intelligence (AI) sector has yielded a new contender: DeepSeek. The AI application has rapidly gained popularity, amassing over a million downloads in its first week. However, experts are warning of potential risks associated with its widespread adoption, particularly in Western democracies, citing concerns about data privacy, disinformation, and the erosion of democratic principles.
DeepSeek's emergence coincides with China's broader push to become a global leader in AI. The rapid development and deployment of AI technologies like DeepSeek and Qwen 2.5, Alibaba's generative AI model, signal a significant shift in the global AI landscape. Analysts predict that AI adoption will have a massive economic impact, potentially accounting for a significant share of global GDP by 2030.
While economic gains are a clear motivator, some experts believe China's ambitions extend beyond mere market dominance. Deepak Padmanabhan from Queen's University Belfast suggests that DeepSeek's focus lies in "building foundational technology rather than commercial applications," hinting at a strategic, long-term vision.
One of the primary concerns surrounding DeepSeek revolves around data privacy. The app's privacy policy stipulates the collection of user-provided information, including personal details and even keystroke patterns, which can be used for biometric identification. While other AI applications also collect user data, the critical difference lies in where that data is stored and the legal framework governing its protection.
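To make the "keystroke patterns" point concrete, the sketch below shows, in TypeScript, how a web or mobile app could in principle record keystroke dynamics (how long each key is held and the gaps between keys) in a browser. It is purely illustrative: the event listeners are standard browser APIs, but the `KeystrokeSample` shape and `sendTelemetry` function are hypothetical, and nothing here is drawn from DeepSeek's actual code, only from the category of data its policy describes.

```typescript
// Illustrative sketch: capturing keystroke dynamics (timing between key
// presses) of the kind used for behavioural biometrics. Browser event
// APIs are real; the data shape and upload function are hypothetical.

interface KeystrokeSample {
  key: string;        // which key was pressed
  dwellMs: number;    // how long the key was held down
  flightMs: number;   // gap since the previous key was released
}

const samples: KeystrokeSample[] = [];
const pending = new Map<string, number>(); // key -> keydown timestamp
let lastKeyUp = 0;

document.addEventListener("keydown", (e) => {
  if (!pending.has(e.key)) pending.set(e.key, performance.now());
});

document.addEventListener("keyup", (e) => {
  const downAt = pending.get(e.key);
  if (downAt === undefined) return;
  pending.delete(e.key);

  const upAt = performance.now();
  samples.push({
    key: e.key,
    dwellMs: upAt - downAt,
    flightMs: lastKeyUp ? downAt - lastKeyUp : 0,
  });
  lastKeyUp = upAt;
});

// Hypothetical telemetry call: timing vectors like these are enough to
// build a per-user typing "fingerprint", even without logging the text
// of the messages themselves.
function sendTelemetry(batch: KeystrokeSample[]): void {
  console.log(`would upload ${batch.length} keystroke samples`);
}
```

The point of the sketch is that keystroke dynamics are behavioural data: even stripped of message content, the timing profile alone can identify or track a user, which is why experts treat where such data is stored, and under which law, as the decisive question.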
Harin Sellahewa from the University of Buckingham points out that DeepSeek is obligated to comply with Chinese law, which may not align with internationally recognized standards of data protection. This means that user data stored on servers within China may not be subject to the same safeguards as in Western countries.
Lukasz Piwek of the University of Bath echoes these concerns, highlighting China's "relatively flexible regulatory approach to advanced technology." This approach, while enabling rapid innovation, "raises concerns about data privacy, potential misuse, and ethical implications, particularly for an open-source model like DeepSeek."
Beyond data privacy, experts worry about the potential for Chinese AI like DeepSeek to be used for disinformation campaigns and the erosion of democratic principles. Ross Burley, co-founder of the Center for Information Resilience, warns that allowing Chinese AI to flourish in the West could "fundamentally reshape our societies in ways we are not prepared for."
Burley argues that China has a track record of using technology for surveillance, control, and coercion, both domestically and internationally. He cites examples of spyware-laden devices, state-sponsored cyber campaigns, and the misuse of AI to suppress dissent.
Rafael Rodríguez Prieto, Chair of Philosophy of Law at Pablo de Olavide University, shares this apprehension, stating that AI could be used to build comprehensive profiles of individuals, profiles that could end up in the hands of governments and serve as instruments of control.
The integration of Chinese AI technology into Western society raises complex questions about security, privacy, and ethical considerations. Experts emphasize the need for safeguards, accountability, and a deeper understanding of the potential consequences.
Cecilia Danesi, co-director of the Master's Degree in Ethical Governance of AI at the Pontifical University of Salamanca, calls for greater legal attention to the civil liability of AI, emphasizing the need to protect consumers from potential harm caused by technological advancements.
While some, like Professor Luo Mai of the University of Edinburgh, highlight the opportunities DeepSeek provides for AI research, particularly in resource-constrained environments, the prevailing sentiment is one of caution. The potential risks associated with unchecked AI development, particularly those originating in countries with differing values and legal frameworks, cannot be ignored.
As AI continues to evolve and reshape our world, it is crucial to approach its development and deployment with careful consideration. The case of DeepSeek serves as a stark reminder of the potential dangers, highlighting the need for robust regulations, ethical guidelines, and international cooperation to ensure that AI benefits humanity as a whole.