A recent security lapse at DeepSeek, a rising Chinese artificial intelligence firm, has exposed over a million lines of sensitive internal data to the internet. According to a report by cloud security firm Wiz, the breach highlights the urgent need for AI companies to prioritize robust security measures as they rapidly develop and deploy new technologies. This incident serves as a crucial lesson, emphasizing that speed and innovation should not come at the expense of data protection.
Wiz researchers discovered a publicly accessible ClickHouse database linked to DeepSeek's systems. Hosted on two DeepSeek subdomains, the database required no authentication, granting unrestricted access to internal logs dating back to January 6, 2025. The exposed data included chat histories, API secrets, backend service details, and other operational metadata.
The vulnerability was identified during routine reconnaissance of DeepSeek's internet-facing assets. Researchers found two open, non-standard ports (8123 and 9000), ClickHouse's default HTTP and native interfaces, which led them to the exposed database. From there, Wiz researchers could run arbitrary SQL queries and read the sensitive information directly.
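To make that access path concrete, here is a minimal Python sketch of how an unauthenticated ClickHouse HTTP interface on port 8123 answers arbitrary SQL. The hostname is a placeholder for illustration only, not one of DeepSeek's actual subdomains, and the queries shown are generic enumeration statements rather than anything from the Wiz report.

```python
import requests

# Placeholder host for illustration; the real DeepSeek subdomains are not reproduced here.
CLICKHOUSE_HTTP = "http://ch-logs.example.com:8123"

def run_query(sql: str) -> str:
    """Send a SQL statement to ClickHouse's HTTP interface.

    On an instance with authentication disabled (the misconfiguration Wiz
    describes), the query executes with no credentials at all.
    """
    resp = requests.post(CLICKHOUSE_HTTP, data=sql, timeout=10)
    resp.raise_for_status()
    return resp.text

# Listing databases and tables is typically the first step once such an endpoint is found.
print(run_query("SHOW DATABASES"))
print(run_query("SHOW TABLES FROM default"))
```

Because ClickHouse's HTTP interface accepts plain POST bodies, no special client or driver is needed; anyone who can reach the port can query the data.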
The exposed data could have allowed malicious actors to exfiltrate chat histories and API secrets, harvest proprietary backend information, and potentially escalate privileges deeper into DeepSeek's environment.
Upon notification from Wiz, DeepSeek secured the database within hours. However, the incident raises concerns about the company's security practices, especially given its rapid growth and the sensitive nature of its AI technologies.
Concerns extend beyond this specific data exposure. Cybersecurity threat intelligence firm Kela reported that DeepSeek's R1 model, while comparable in capability to OpenAI's ChatGPT, is "significantly more vulnerable" to jailbreaking, meaning it can be more easily manipulated into generating malicious outputs such as ransomware code and instructions for producing toxins and explosive devices.
Wiz emphasized that the rapid adoption of AI technologies requires companies to prioritize security from the outset.
"The world has never seen a piece of technology adopted at the pace of AI. Many AI companies have rapidly grown into critical infrastructure providers without the security frameworks that typically accompany such widespread adoptions."
The incident underscores the importance of implementing security practices on par with those expected of public cloud platforms and other critical infrastructure providers. As AI becomes more deeply integrated into businesses worldwide, the industry must recognize and address the risks of handling sensitive data, including enforcing robust authentication, regularly auditing security controls, and proactively monitoring for exposed assets.
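As one illustration of that kind of proactive monitoring, the following Python sketch checks whether ClickHouse's default ports are reachable on a list of hosts and whether the HTTP interface answers a query without credentials. The hostnames are hypothetical placeholders, and a real external-attack-surface program would draw them from an asset inventory rather than a hard-coded list.

```python
import socket
import requests

# Hypothetical asset list; in practice this would come from an inventory of internet-facing hosts.
HOSTS = ["internal-logs.example.com", "analytics.example.com"]
CLICKHOUSE_PORTS = (8123, 9000)  # ClickHouse HTTP and native-protocol defaults

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_unauthenticated(host: str) -> bool:
    """Return True if the ClickHouse HTTP interface answers a query with no credentials."""
    try:
        resp = requests.get(f"http://{host}:8123", params={"query": "SELECT 1"}, timeout=5)
        return resp.status_code == 200 and resp.text.strip() == "1"
    except requests.RequestException:
        return False

for host in HOSTS:
    open_ports = [p for p in CLICKHOUSE_PORTS if port_open(host, p)]
    if open_ports:
        status = "UNAUTHENTICATED" if http_unauthenticated(host) else "reachable (auth status unknown)"
        print(f"{host}: ClickHouse ports {open_ports} exposed -- {status}")
```

A check like this, run continuously against an organization's own perimeter, flags exactly the class of misconfiguration Wiz found before outside researchers or attackers do.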
This incident serves as a reminder that AI companies must prioritize security to protect sensitive data and maintain user trust. As the AI landscape evolves, a proactive and vigilant approach to cybersecurity is essential for mitigating potential risks and ensuring the responsible development and deployment of these powerful technologies.